Huawei V2&V3 Server RAID Controller Card
User Guide
Issue 49
Date 2022-01-24
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://e.huawei.com
Purpose
This document describes the appearances and features of the redundant array of
independent disks (RAID) controller cards, how to configure RAID arrays, and how
to install drivers. The RAID controller cards include the LSI SAS2208, LSI SAS2308,
LSI SAS3008IR, LSI SAS3008IT, LSI SAS3108, LSI SoftRAID, PM8060, and PM8068.
Intended Audience
This document is intended for:
The server maintenance personnel must have adequate knowledge of the server products and sufficient service skills to avoid personal injury or device damage during maintenance.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Changes between document issues are cumulative. The latest document issue
contains all changes made in previous issues.
Contents
4 LSI SAS2208............................................................................................................................ 25
4.1 Overview.................................................................................................................................................................................. 25
4.2 Functions.................................................................................................................................................................................. 30
4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60.......................................................................................................................................30
4.2.2 Drive Hot Spares................................................................................................................................................................ 31
4.2.3 Drive Hot Swap.................................................................................................................................................................. 32
4.2.4 Copyback.............................................................................................................................................................................. 33
4.2.5 Drive Striping...................................................................................................................................................................... 33
4.2.6 RAID Level Migration....................................................................................................................................................... 34
4.2.7 Initialization......................................................................................................................................................................... 35
4.2.8 Capacity Expansion........................................................................................................................................................... 35
4.2.9 Secure Data Erasure......................................................................................................................................................... 36
4.2.10 Consistency Check...........................................................................................................................................................36
4.2.11 Cache Data Read/Write................................................................................................................................................ 36
4.2.12 Power Failure Protection.............................................................................................................................................. 37
4.11.2.25 Querying RAID Controller Card, RAID Array, or Physical Drive Information......................................442
4.11.2.26 Restoring Frn-Bad Drives..................................................................................................................................... 448
4.11.2.27 Querying Supercapacitor Information................................................................................................... 449
4.11.2.28 Upgrading the Drive Firmware.......................................................................................................................... 451
4.11.2.29 Rebuilding the RAID Array Manually............................................................................................................... 452
4.11.2.30 Setting the Cache Status of a Drive................................................................................................................. 454
4.11.2.31 Viewing, Importing, and Deleting Foreign Configurations.......................................................................455
7 LSI SAS3008IT.......................................................................................................................727
7.1 Overview................................................................................................................................................................................ 727
7.2 Functions............................................................................................................................................................................... 729
7.2.1 High-Speed Ports and Modules................................................................................................................................. 729
7.2.2 Support for Multiple Extended Devices................................................................................................................... 729
7.2.3 Automatic Fault Recovery............................................................................................................................................ 730
7.2.4 Drive Passthrough........................................................................................................................................................... 730
7.2.5 Low-Level Formatting....................................................................................................................................................730
7.2.6 Drive Power Saving........................................................................................................................................................ 730
7.2.7 Drive Hot Swap................................................................................................................................................................ 730
7.3 Initial Configuration (Legacy/Dual Mode)................................................................................................................ 731
7.3.1 Logging In to the Configuration Utility................................................................................................................... 731
7.3.2 Setting Boot Devices...................................................................................................................................................... 736
7.3.2.1 Setting Boot Devices for a RAID Controller Card............................................................................................. 736
7.3.2.2 Setting Boot Devices for a Server.......................................................................................................................... 738
7.4 Initial Configuration (EFI/UEFI Mode)........................................................................................................................ 738
7.4.1 Logging In to the Management Screen.................................................................................................................. 739
7.4.2 Setting Boot Devices...................................................................................................................................................... 740
7.4.2.1 Setting Boot Devices for a RAID Controller Card............................................................................................. 740
7.4.2.2 Setting Boot Devices for a Server.......................................................................................................................... 740
7.5 Troubleshooting.................................................................................................................................................................. 740
7.5.1 Drive Fault......................................................................................................................................................................... 741
7.5.2 RAID Controller Card Fault.......................................................................................................................................... 741
7.6 Management Screens (Legacy/Dual Mode)............................................................................................................. 742
7.6.1 SAS Topology.................................................................................................................................................................... 742
7.6.2 Advanced Adapter Properties..................................................................................................................................... 743
7.6.3 Exit........................................................................................................................................................................................747
7.7 Configuration Utility (EFI/UEFI Mode)....................................................................................................................... 748
9 SoftRAID.............................................................................................................................. 1136
9.1 Overview............................................................................................................................................................................. 1136
9.2 Functions............................................................................................................................................................................. 1137
9.2.1 RAID 0, 1, 5, and 10..................................................................................................................................................... 1137
9.2.2 Initialization.................................................................................................................................................................... 1138
9.2.3 Consistency Check........................................................................................................................................................ 1138
9.2.4 RAID Rebuild.................................................................................................................................................................. 1139
9.2.5 Mixed Use of Drives..................................................................................................................................................... 1139
9.2.6 Drive Hot Spares........................................................................................................................................................... 1139
9.2.7 Drive Hot Swap............................................................................................................................................................. 1139
9.3 Initial Configurations...................................................................................................................................................... 1140
9.3.1 Configuration Process................................................................................................. 1140
9.3.2 Logging In to the Configuration Utility................................................................................................................ 1140
9.3.3 Creating RAID 0............................................................................................................................................................. 1143
9.3.4 Creating RAID 1............................................................................................................................................................. 1149
9.3.5 Creating RAID 5............................................................................................................................................................. 1155
9.3.6 Creating RAID 10.......................................................................................................................................................... 1161
9.3.7 Setting Boot Device......................................................................................................................................................1167
9.4 Common Tasks.................................................................................................................................................................. 1168
9.4.1 Configuring a Hot Spare Drive.................................................................................................................................1168
9.4.2 Deleting a Hot Spare Drive....................................................................................................................................... 1171
9.4.3 Configuring Controller Properties........................................................................................................................... 1173
9.4.4 Configuring RAID Properties..................................................................................................................................... 1175
9.4.5 Configuring Drive Properties.................................................................................................................................... 1178
9.4.6 Rebuilding RAID............................................................................................................................................................ 1179
9.4.7 Checking Consistency.................................................................................................................................................. 1181
9.4.8 Clearing a RAID Array................................................................................................................................................. 1183
10 PM8060............................................................................................................................. 1220
10.1 Overview........................................................................................................................................................................... 1220
10.2 Functions.......................................................................................................................................................................... 1224
10.2.1 Support for Multiple RAID Levels......................................................................................................................... 1224
10.2.2 Drive Hot Spares.........................................................................................................................................................1225
10.2.3 Drive Hot Swap........................................................................................................................................................... 1225
10.2.4 Rebuild and Copyback.............................................................................................................................................. 1226
10.2.5 Drive Striping............................................................................................................................................................... 1226
10.2.6 Drive Passthrough...................................................................................................................................................... 1226
10.2.7 Initialization................................................................................................................................................................. 1227
10.2.8 Capacity Expansion.................................................................................................................................................... 1227
10.2.9 Cache Data Read/Write........................................................................................................................................... 1227
10.2.10 Power Failure Protection....................................................................................................................................... 1228
10.2.11 Import of Foreign Configurations...................................................................................................................... 1228
10.3 Initial Configuration (Legacy/Dual Mode)............................................................................................................1228
10.10.2.12 Changing the RAID Strip Size, Capacity, and Level................................................................................ 1440
10.10.2.13 Setting Consistency Check Parameters.......................................................................................................1441
10.10.2.14 Setting the Enable Status of NCQ................................................................................................................1442
10.10.2.15 Setting a Drive as a Pass-Through Drive....................................................................................................1443
10.10.2.16 Setting the Status of a Drive UID Indicator.............................................................................................. 1443
10.10.2.17 Querying Device Information........................................................................................................................ 1444
10.10.2.18 Querying the Status of a Drive..................................................................................................................... 1445
11 PM8068............................................................................................................................. 1447
11.1 Overview........................................................................................................................................................................... 1447
11.2 Functions.......................................................................................................................................................................... 1450
11.2.1 Support for Multiple RAID Levels......................................................................................................................... 1450
11.2.2 Drive Hot Spares.........................................................................................................................................................1451
11.2.3 Rebuild and Copyback.............................................................................................................................................. 1451
11.2.4 Drive Striping............................................................................................................................................................... 1451
11.2.5 Drive Passthrough...................................................................................................................................................... 1452
11.2.6 Capacity Expansion.................................................................................................................................................... 1452
11.2.7 Import of Foreign Configurations......................................................................................................................... 1452
11.3 Initial Configuration (Legacy/Dual Mode)............................................................................................................1453
11.3.1 Configuration Process............................................................................................................................................... 1453
11.3.2 Logging In to the Configuration Utility.............................................................................................................. 1453
11.3.3 Creating RAID 0.......................................................................................................................................................... 1455
11.3.4 Creating RAID 1.......................................................................................................................................................... 1459
11.3.5 Creating RAID 5.......................................................................................................................................................... 1463
11.3.6 Creating RAID 10........................................................................................................................................................ 1467
11.3.7 Configuring a Logical Drive as the Boot Device.............................................................................................. 1471
11.4 Initial Configuration (EFI/UEFI Mode)................................................................................................................... 1473
11.4.1 Logging In to the Management Screen..............................................................................................................1473
11.4.2 Creating RAID 0.......................................................................................................................................................... 1474
11.4.3 Creating RAID 1.......................................................................................................................................................... 1481
11.4.4 Creating RAID 5.......................................................................................................................................................... 1487
11.4.5 Creating RAID 10........................................................................................................................................................ 1493
11.4.6 Setting Boot Devices................................................................................................................................................. 1499
11.5 Common Tasks (Legacy/Dual Mode)..................................................................................................................... 1499
11.5.1 Configuring a Hot Spare Drive.............................................................................................................................. 1499
11.5.2 Deleting a Hot Spare Drive..................................................................................................................................... 1502
11.5.3 Configuring Boot Drives........................................................................................................................................... 1504
11.5.4 Adding Logical Drives to an Array......................................................................................................... 1506
11.5.5 Deleting an Array....................................................................................................................................................... 1509
11.6 Common Tasks (EFI/UEFI Mode)............................................................................................................................. 1511
11.6.1 Configuring a Hot Spare Drive.............................................................................................................................. 1511
11.6.2 Deleting a Hot Spare Drive..................................................................................................................................... 1514
11.6.3 Deleting an Array....................................................................................................................................................... 1517
A Appendix.............................................................................................................................1576
A.1 Common Tasks................................................................................................................................................................. 1576
A.1.1 Logging In to the RAID Controller Card Management Screen in EFI/UEFI Mode (Brickland Platform)....... 1576
A.1.2 Logging In to the RAID Controller Card Management Screen in EFI/UEFI Mode (Grantley Platform)........ 1577
A.1.3 Setting the Legacy Mode...........................................................................................................................................1580
A.1.4 Setting the EFI/UEFI Mode........................................................................................................................................1583
A.1.5 Downloading and Installing the RAID Controller Card Driver......................................................................1586
A.1.6 Managing the RAID Controller Card Through the iBMC................................................................................ 1588
A.2 RAID Levels........................................................................................................................................................................ 1588
A.2.1 RAID 0.............................................................................................................................................................................. 1588
A.2.2 RAID 1.............................................................................................................................................................................. 1589
A.2.3 RAID 5.............................................................................................................................................................................. 1590
A.2.4 RAID 6.............................................................................................................................................................................. 1590
A.2.5 RAID 10............................................................................................................................................................................ 1591
A.2.6 RAID 1E............................................................................................................................................................................ 1592
A.2.7 RAID 50............................................................................................................................................................................ 1593
A.2.8 RAID 60............................................................................................................................................................................ 1593
A.2.9 Fault Tolerance Capabilities...................................................................................................................................... 1594
A.2.10 I/O Performance......................................................................................................................................................... 1595
A.2.11 Storage Capacity........................................................................................................................................................ 1596
A.3 Common Boot Error Messages for RAID Controller Cards................................................................................ 1597
This section describes RAID controller card types and naming rules.
Table 1-1 describes the controller cards supported by Huawei servers.
The servers and OSs supported vary with the RAID controller card. For details, use the Computing Product Compatibility Checker.
Table 1-2 describes the meanings of suffixes in RAID controller card names.
IT No Yes No No
IR Yes No No No
2 Technical Specifications
Port rate (Gbit/s): 6 | 6 | 12 | 12 | 12 | 6 (only SATA drives are supported) | 12 | 12
Number of supported RAID arrays: 64 | 2 | 2 | N/A | 64 | 8 | 64 | 64
a: The LSI SAS3108 supports two types of RAID keys, which support 32 and 240
drives respectively. For details, see Table 8-4 in 8.2.1 RAID 0, 1, 5, 6, 10, 50,
and 60.
b: The LSI SAS2208 supports two types of RAID keys, which support 16 and 32
drives respectively. For details, see Table 4-3 in 4.2.1 RAID 0, 1, 5, 6, 10, 50,
and 60.
c: The LSI SAS3108 PCB version .B and later support out-of-band management.
The LSI SAS3108 PCB version .A does not support out-of-band management.
d: Maximum number of drives in all RAID arrays (for an LSI SAS2308 or LSI
SAS3008 RAID controller card) = Number of hot spare drives + Number of drives
in RAID arrays
Maximum number of drives in all RAID arrays (for other RAID controller cards)
= Number of hot spare drives + Number of idle drives (drives in Unconfigured
Good state) + Number of drives in RAID arrays
e: To obtain the code of the RAID controller card that supports the CacheCade,
use Computing Product Compatibility Checker.
NOTE
Table 2-2 lists the reliability, read/write performance, and drive utilization of RAID
levels supported by RAID controller cards.
Table 2-3 lists the interface types of drives supported by each RAID controller
card.
NOTE
For details about the drive models supported by each RAID controller card, see the
compatibility list. For details, use Computing Product Compatibility Checker.
3 RAID Features
A virtual drive is a continuous data storage unit divided from a drive group. A virtual drive can be considered an independent drive. Once configured, a virtual drive can provide higher capacity, security, and data redundancy than a physical drive.
Related conventions:
● A drive group (DG for short) also refers to an array or RAID array.
● A virtual drive also refers to a virtual disk (VD for short), volume, or a logical
device (LD for short).
In RAID 1, data images are stored on a pair of drives, and errors or faults on one
of the drives do not cause data loss. Employing the same working principle, RAID
5 allows one faulty drive, and RAID 6 allows two faulty drives.
Consisting of multiple spans, RAID 10 and RAID 50 tolerate as many faulty drives as there are spans, provided that each span contains no more than one faulty drive. RAID 60 tolerates up to twice as many faulty drives as there are spans, provided that each span contains no more than two faulty drives.
NOTE
RAID 0 does not support fault tolerance. When a drive in RAID 0 becomes faulty, the RAID
array fails and data gets lost.
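The following minimal Python sketch (illustrative only, not vendor code; the function name and values mirror the rules above) expresses how many drive failures each RAID level can tolerate, assuming failures are spread so that no span exceeds its per-span limit.

```python
# Minimal sketch (not vendor code) of the fault-tolerance rules described above.
def max_tolerable_failures(raid_level: int, spans: int = 1) -> int:
    """Return how many drive failures the array tolerates without data loss,
    assuming failures are spread so that no span exceeds its per-span limit."""
    per_span = {0: 0, 1: 1, 5: 1, 6: 2, 10: 1, 50: 1, 60: 2}
    if raid_level not in per_span:
        raise ValueError(f"unsupported RAID level: {raid_level}")
    if raid_level in (10, 50, 60):
        # Spanned levels tolerate one (RAID 10/50) or two (RAID 60) faulty drives per span.
        return per_span[raid_level] * spans
    return per_span[raid_level]

print(max_tolerable_failures(10, spans=2))  # 2: one faulty drive in each of two spans
print(max_tolerable_failures(60, spans=3))  # 6: up to two faulty drives in each span
print(max_tolerable_failures(0))            # 0: RAID 0 provides no fault tolerance
```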
The fault tolerance feature improves system availability: when a drive becomes faulty, the system can still work properly. Fault tolerance is of great importance for fault recovery.
For RAID arrays without redundancy (RAID 0), consistency checks are not
supported.
Emergency Spare
If no hot spare drive is specified for a RAID array with redundancy, emergency
spare allows an idle drive managed by the RAID controller card to automatically
replace a failed member drive and rebuild data to avoid data loss.
The idle drive used for emergency spare must be of the same medium type as that
of the failed member drive and have at least the capacity of the failed member
drive.
If a RAID array has an available hot spare drive and one of its member drives becomes faulty, the hot spare drive automatically replaces the faulty drive and rebuilds data. If the RAID array has no available hot spare drive, data can be rebuilt only after the faulty drive is replaced with a new drive. When the hot spare drive starts rebuilding data, the faulty member drive enters the removable state. If the system is powered off during the data rebuild, the RAID controller card continues the data rebuild task after the system restarts.
The data rebuild rate indicates the proportion of CPU resources occupied by a data
rebuild task to the overall CPU resources. The data rebuild rate can be set to 0%–
100%. The value 0% indicates that the RAID controller card starts the data rebuild
task only when no other task is running in the system. The value 100% indicates
that the data rebuild task occupies all CPU resources. You can customize the data
rebuild rate. It is recommended that you set it to an appropriate value based on
the site requirements.
State Description
Available (AVL) The physical drive may not be ready and is not suitable for use in a logical drive or hot spare pool.
Degraded (DGD) The physical drive is a part of the logical drive and is in the degraded state.
Missing (MIS) When a drive in the Online state is removed, the drive enters the Missing state.
Out of Sync (OSY) As a part of the IR logical drive, the physical drive is not synchronized with the other physical drives in the IR logical drive.
Predictive Failure The drive is about to fail. You need to back up the data on the drive and replace the drive.
Ready (RDY) This state applies to the RAID/Mixed mode of the RAID controller card. The drives can be used to configure a RAID array. In RAID mode, drives in the Ready state are not reported to the operating system (OS). In Mixed mode, drives in the Ready state are reported to the OS.
Spare The drive is a hot spare drive and is in the normal state.
Unconfigured Good (ugood/ucfggood) The drive is in a normal state but is not a member drive of a virtual drive or a hot spare drive.
Table 3-2 describes the states of virtual drives created under the RAID controller
card.
Degrade/Degraded (DGD) The virtual drive is available but abnormal; certain member drives are faulty or offline. User data is not protected.
Inactive The virtual drive is inactive and can be used only after being activated.
Inc RAID The virtual drive does not support the SMART or SATA expansion command.
Max Dsks The number of drives in the RAID array has reached the upper limit. No more drives can be added to the RAID array.
Not Syncd The data on the physical drive is not synchronized with the data on the other physical drives in the logical drive.
Normal The virtual drive is in the normal state, and all member drives are online.
Optimal The virtual drive is in a sound state, and all member drives are online.
Okay (OKY) The virtual drive is active, and the physical drives are running properly. If the current RAID level provides data protection, user data is protected.
Primary The drive is the primary drive in RAID 1 and is in the normal state.
Secondary The drive is the secondary drive in RAID 1 and is in the normal state.
Too Small The drive capacity is insufficient, and the drive cannot be used as a hot spare for the current RAID array.
Wrg Intfc The drive interface is different from that of the drives in the current RAID array.
Wrg Type The drive cannot be used as a member drive of the RAID array. The drive may be incompatible or faulty.
● Read-ahead policy: If this policy is used, the RAID controller card caches the data that follows the data being read for faster access. This policy reduces drive seeks and shortens read time.
The read-ahead policy is applicable only when the RAID controller card supports power failure protection for data. If the supercapacitor of a RAID controller card that adopts the read-ahead policy is abnormal, data may be lost.
● Non-read-ahead policy: If this policy is used, the RAID controller card does not
read data ahead. Instead, it reads data from a virtual drive only when it
receives a data read command.
Drive striping divides the space of a drive into multiple strips based on the specified strip size. When data is written to the drive, the data is divided into data blocks based on the strip size.
For example, RAID 0 consists of four member drives. The first data block is written
into the first member drive, the second data block is written into the second
member drive, and so on, as shown in Figure 3-1. In this way, multiple drives are
concurrently written, significantly improving system performance. However, data
redundancy is not guaranteed by drive striping.
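The following minimal Python sketch (illustrative only, not controller firmware) shows how striping maps sequential data blocks onto the member drives of the four-drive RAID 0 example above; the 256 KB strip size used here matches the LSI SAS2208 default described later in this guide.

```python
# Minimal sketch (not controller firmware) of round-robin striping in a RAID 0 array.
STRIP_SIZE = 256 * 1024   # bytes per strip (256 KB, the LSI SAS2208 default)
NUM_DRIVES = 4            # member drives in the RAID 0 example above

def locate_block(offset: int):
    """Map a logical byte offset on the virtual drive to
    (member drive index, strip index on that drive)."""
    strip_number = offset // STRIP_SIZE        # which strip of the virtual drive
    drive_index = strip_number % NUM_DRIVES    # strips are distributed round-robin
    strip_on_drive = strip_number // NUM_DRIVES
    return drive_index, strip_on_drive

# A 1 MB sequential write is split into four 256 KB strips, one per member drive.
for offset in range(0, 1024 * 1024, STRIP_SIZE):
    drive, strip = locate_block(offset)
    print(f"offset {offset // 1024:>4} KB -> drive {drive}, strip {strip}")
```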
You can process detected foreign configuration as required. For example, if the
RAID configuration existing on the newly installed drive does not meet
requirements, you can delete it. If you want to use the RAID configuration of a
RAID controller card that has been replaced, you can import the RAID
configuration and make it take effect on the new RAID controller card.
Enabling this feature puts drives in the idle state and idle hot spare drives into the
power saving state. Operations, such as RAID array creation, hot spare drive
creation, dynamic capacity expansion, and rebuild, will wake the drives from
power saving.
With this function, the RAID controller card allows user commands to be directly
transmitted to connected drives, facilitating drive access and control by upper-
layer services or management software.
For example, you can install an OS on the drives mounted to a RAID controller
card. But if the RAID controller card does not support the passthrough feature, you
can install the OS only on the virtual drives configured under the RAID controller
card.
4 LSI SAS2208
4.1 Overview
The LSI SAS2208 controller card is a 6 Gbit/s SAS controller with the MegaRAID
architecture. Equipped with PCIe 3.0 x8 ports and a powerful I/O storage engine,
the controller card handles data protection, verification, and restoration.
In addition to better system performance, the controller card supports fault-
tolerant data storage in multiple drive partitions and read/write operations on
multiple drives at the same time. This makes accessing data on drives faster.
The LSI SAS2208 supports boot and configuration in legacy and UEFI modes.
The built-in cache improves performance as follows:
● Data is directly written to the cache. The RAID controller card updates data to the drives after a certain amount of data has accumulated in the cache, so that data is written in batches (a minimal sketch of this write-back behavior follows this list). In addition, the cache improves the overall data write speed because it is faster than a drive.
● Data is directly read from the cache, reducing the response time from 6 ms to less than 1 ms.
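The following minimal Python sketch (illustrative only; WriteBackCache and write_to_drive are hypothetical names, not controller firmware or a vendor API) models the write-back behavior described above: a write completes as soon as the data lands in the cache, and accumulated data is flushed to the drives in batches.

```python
# Minimal sketch (hypothetical names, not controller firmware) of write-back caching.
class WriteBackCache:
    def __init__(self, flush_threshold_bytes: int):
        self.flush_threshold = flush_threshold_bytes
        self.pending = []        # (offset, data) pairs not yet written to the drives
        self.pending_bytes = 0

    def write(self, offset: int, data: bytes) -> None:
        # The host is told the write is complete here, before the data reaches a drive.
        self.pending.append((offset, data))
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # Batch update: write all accumulated data to the drives in one pass.
        for offset, data in self.pending:
            write_to_drive(offset, data)   # hypothetical backend call
        self.pending.clear()
        self.pending_bytes = 0

def write_to_drive(offset: int, data: bytes) -> None:
    print(f"drive write: {len(data)} bytes at offset {offset}")

cache = WriteBackCache(flush_threshold_bytes=1024 * 1024)
for i in range(5):
    cache.write(i * 256 * 1024, b"\x00" * 256 * 1024)  # the 4th write triggers a flush
```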
NOTE
● The supercapacitor or integrated battery backup unit (iBBU) connected to the controller
card provides power failure protection for the cache. For details about the
supercapacitor or iBBU, see the maintenance guide or troubleshooting guide of the
server.
● The supercapacitor must be used with a trans flash module (TFM). Table 4-1 describes
the indicators on the TFM.
The LSI SAS2208 has two structures to provide ease of connection in different
servers:
The LSI SAS2208 for the DH320 V2 node does not provide the mini-SAS ports.
● LSI SAS2208 for a blade server or X6000 server
The LSI SAS2208 connects to the mainboard through two XCede connectors,
as shown in Figure 4-2.
NOTE
The LSI SAS2208 can also be integrated into the mainboard of a server, as in the XH321 V2
and DH321 V2 nodes.
Indicators
Table 4-2 describes the indicators on the LSI SAS2208.
4.2 Functions
● 16 or 32 physical drives (The LSI SAS2208 supports two types of RAID keys,
which support 16 and 32 physical drives respectively).
NOTE
The number of supported physical drives includes hot spare drives, idle drives (drives in the Unconfigured Good state), and drives in RAID arrays.
● 64 virtual drives, specified by Virtual Drive on the Configuration Utility.
Table 4-3 lists the supported RAID levels and quantity of drives.
RAID 5: 3 to 32 member drives; a maximum of 1 faulty drive is allowed.
RAID 6: 3 to 32 member drives; a maximum of 2 faulty drives are allowed.
Hot Spare
After RAID arrays are configured on a server, configuring hot spare drives improves data security and reduces the impact of drive faults on services.
● Global hot spare drive: shared by all RAID arrays of the LSI SAS2208. One or more global hot spare drives can be configured. A global hot spare drive automatically takes over services of a failed member drive in any RAID array.
For details, see 4.5.1.1 Configuring a Global Hot Spare Drive.
● Dedicated hot spare drive: replaces a failed drive only in a specified RAID
array of LSI SAS2208. One or more dedicated hot spare drives can be
configured for each RAID array. The hot spare drive automatically takes over
services of a failed member drive only in a specified RAID array.
For details, see 4.5.1.2 Configuring a Dedicated Hot Spare Drive.
The hot spare drive must have a capacity greater than or equal to that of a
member drive.
NOTE
● The HDDs and SSDs cannot be used as the hot spare drives of each other.
● The HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are
SAS drives, the SATA drives can be used as dedicated hot spare drives. If the member
drives are SATA drives, the SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot
be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
Emergency Spares
After the emergency spare function is enabled for a redundant RAID array that has no hot spare drive specified, an idle drive of the same type automatically replaces a member drive that enters the fail or prefail state and rebuilds data to avoid data loss.
The capacity of the idle drive used to rebuild data must be greater than or equal
to that of a member drive.
To enable or disable the emergency spare function, set the Emergency Spare parameter on the screen shown in Figure 4-223.
NOTE
● After removing a drive, install it after at least 30 seconds. Otherwise, the drive cannot
be identified.
● Before removing and inserting a drive, check the logical status of the drive and the
number of faulty drives allowed by the RAID level.
● If you remove and insert a pass-through drive without powering off the OS, the drive
letter in the system may change. Before removing and inserting a pass-through drive,
record the drive letter in the system.
4.2.4 Copyback
If a member drive of a RAID array becomes faulty, a hot spare drive automatically
replaces the failed drive and starts data synchronization. Once the faulty drive has
been replaced with a newly installed data drive, data is copied from the hot spare
drive to the new data drive. Once the data copyback is complete, the hot spare
drive is restored to the hot spare state.
NOTE
● Different types of hot spare copyback have the same performance. During the copyback,
the RAID and drive status changes as follows:
● The RAID array status remains Optimal.
● The status of the hot spare drive changes from Online to Hot Spare.
● The status of the newly added drive changes from Copyback or Copybacking to
Online.
● You can use the command line tool to pause or resume copyback. For details, see
4.11.2.15 Querying and Setting RAID Rebuild, Copyback, and Patrolread Functions.
● If the server is restarted during the copyback, the progress is saved.
Striping
Multiple processes accessing a drive at the same time may cause drive conflicts.
Most drives are specified with thresholds for the access count (I/O operations per
second) and data transmission rate (data volume transmitted per second). If the
thresholds are reached when multiple processes concurrently access a drive, new
access requests will be suspended, which causes drive conflicts.
The striping technology evenly distributes I/O loads to multiple physical drives. It
divides continuous data into multiple blocks and saves them to different drives.
This allows multiple processes to access these data blocks concurrently without
causing any drive conflicts. Striping also optimizes concurrent processing
performance in sequential access to data.
Stripes
The storage space of each member drive in a RAID array is striped based on the
strip size. The data written to the drives is also sliced based on the strip size.
The LSI SAS2208 supports multiple strip sizes, including 8 KB, 16 KB, 32 KB, 64 KB,
128 KB, 256 KB, 512 KB and 1 MB. The default value is 256 KB.
The RAID controller card supports the following RAID level migration options:
Table 4-4 lists the minimum numbers of drives to be added for RAID level
migration.
4.2.7 Initialization
Virtual Drive Initialization
After you create a virtual drive (VD), initialize it for use with an OS. After a VD
with the redundancy function is initialized, the data relationships between its
member drives comply with their RAID level requirements. The three types of
initialization are:
● Fast initialization: In this foreground process, the firmware writes zeros to the
first 100 MB of the VD. During this process, the state of the VD is Optimal.
● Slow initialization: In this foreground process, the firmware writes zeros to the entire VD. During this process, the state of the VD is Optimal.
● Background initialization:
– For RAID 1 and RAID 10: When data is inconsistent between primary and
secondary member drives, data will be copied from the primary drive to
the secondary drive during background initialization to overwrite the
original data on the secondary drive.
– For RAID 5, RAID 6, RAID 50, and RAID 60: Reads and checks the parity of data on all member drives. If the result is inconsistent with the parity drive, the newly generated data overwrites the original data on the parity drive (see the sketch below).
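The following minimal Python sketch (illustrative only, not controller firmware) shows the parity check that background initialization performs on a single RAID 5 stripe: the data strips are XORed and the result is compared with the stored parity; if they differ, the newly generated parity overwrites the old value.

```python
# Minimal sketch (not controller firmware) of a RAID 5 parity consistency check.
from functools import reduce

def xor_strips(strips):
    """XOR equal-length data strips byte by byte to compute the parity strip."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

def check_and_fix_parity(data_strips, parity_strip):
    """Recompute parity from the data strips; overwrite the stored parity if stale."""
    expected = xor_strips(data_strips)
    if expected != parity_strip:
        # Newly generated data overwrites the original data on the parity strip.
        return expected
    return parity_strip

data = [b"\x0f" * 8, b"\xf0" * 8, b"\x55" * 8]         # three data strips of one stripe
stale_parity = b"\x00" * 8
print(check_and_fix_parity(data, stale_parity).hex())  # corrected parity: "aaaaaaaaaaaaaaaa"
```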
Drive Initialization
Drive initialization is a process of formatting a drive by repeatedly writing zeros to
it.
● Only RAID 0, RAID 1, RAID 5, and RAID 6 support capacity expansion by adding drives.
● RAID 10, 50, and 60 do not support capacity expansion by adding drives.
● If a RAID array contains two or more VDs, its capacity cannot be expanded by adding
drives.
● During capacity expansion, you need to add two drives to RAID 1 each time, and only
one drive to RAID 0, RAID 5, or RAID 6 each time.
To achieve optimal drive performance, set the policy to Always Read Ahead or Read
Ahead for HDDs and No Read Ahead for SSDs.
● Write Back: When the cache receives host data, the LSI SAS2208 signals the
host that the data transmission is complete.
Data is directly written to the cache. The RAID controller card updates data to
drives after data is accumulated to some extent in the cache. This implements
data writing in batches. In addition, the cache improves the overall data write
speed due to its higher speed than a drive.
You can set the read/write policies on the 4.8.9.2 Virtual Drives screen.
● Enabling caching greatly improves the server write performance. If the server
write pressure decreases, or if the cache is nearly full, data is migrated from
the cache to drives.
● Enabling caching also increases the data loss risk. If a server is powered off
unexpectedly, data in the cache gets lost.
Batteries or supercapacitors start supplying power to the cache only when the
cache voltage is lower than the preset value. If the cache voltage is higher than
the preset value, RAID controller cards supply power to the cache. This ensures
data security of the cache.
The LSI SAS2208 provides the following power failure protection modes:
● Intelligent battery backup unit (iBBU): If a system power failure occurs, the iBBU supplies power to the cache. After the system power supply recovers, the system writes the data in the cache to drives. The iBBU, however, provides a limited backup time of 12, 24, 48, or 72 hours. If the system power is not recovered within this period, data may be lost.
● Supercapacitor: If a power failure occurs, the supercapacitor supplies power to store the cached data in the NAND flash of the supercapacitor protection module. The flash is a non-volatile storage medium, so the supercapacitor can provide virtually permanent protection for cache data.
The system allocates resources for patrol read based on the I/O workload. For
example, if the I/O workload is high, the system allocates fewer resources for
patrol read to ensure higher priority of I/O operations.
Patrol read cannot be performed on a drive that has any of the following
operations in progress:
media. When potential problems are detected, the controller card reports alarms
to users promptly to help avoid data loss.
The controller card supports periodic SMART scanning of managed drives. You can
set the interval period, which is 300 seconds by default. Detected SMART errors
are logged.
● If the function is enabled, the fault indicator will be on when a drive is faulty.
● If the function is disabled, the fault indicator remains off even if a drive is
faulty.
By default, this function is enabled for the LSI SAS2208. You can enable or disable
it by setting Maintain PD Fail History on the Controller Properties screen of the
CU.
All configurations described in this document about the LSI SAS2208 are
performed on the Configuration Utility, which can be accessed only after you
restart the server. To monitor controller card status and obtain configuration
information when the system is running, use the StorCLI tool.
If the boot type is changed after the OS has been installed in Legacy or UEFI
mode, the OS will be inaccessible. To access the OS, you need to change the boot
type to that used when the OS is installed. If the OS needs to be reinstalled, select
the Legacy or UEFI mode based on actual situation.
If multiple boot devices are configured, you are advised to set Boot Type to UEFI
Boot Type because certain boot devices may fail to boot if Boot Type is set to
Legacy Boot Type. If you still want to set Boot Type to Legacy Boot Type, then
disable redirection for certain serial ports or disable PXE for certain NICs based on
the services in use. For details, see "Setting PXE for a NIC" and "Setting Serial Port
Redirection" in the respective BIOS Parameter Reference.
Scenarios
The LSI SAS2208 BIOS Configuration Utility is used to configure and manage the LSI SAS2208 RAID controller card. The Configuration Utility is embedded in the BIOS of the controller and runs independently of the OS. It simplifies the procedures for configuring and managing RAID properties.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility.
1. During the server startup, press Ctrl+H when the message "Press <Ctrl><H>
for WebBIOS or press <Ctrl><Y> for Preboot CLI" shown in Figure 4-4 is
displayed.
3. Select the LSI SAS2208 RAID controller card and click Start.
The Configuration Utility main screen is displayed, as shown in Figure 4-6.
Table 4-6 describes the parameters.
Drives Displays physical drive properties and allows drive operations, such as creating a hot spare drive.
----End
Additional Information
Related Tasks
None
Related Concepts
Icon Function
Disables the onboard sound alarm for the RAID controller card.
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.3.1 Logging In to the Configuration Utility.
NOTICE
Exercise caution when selecting New Configuration. If you select it, the
existing configuration of the selected physical drives will be deleted. Add
Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
NOTE
– If the drives in the same group have different sizes, the capacity of the RAID array
to be created is affected and an alarm will be generated when you click Accept
DG. It is recommended that you use drives with the same specifications to create a
RAID array.
– To release a selected drive, select the drive in Drive Groups and click Reclaim.
– A RAID 0 array supports 1 to 32 drives.
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 16 or 32 (depending on the RAID key specifications), no drive can be
added to RAID arrays.
3. Click Accept DG to accept the drive group.
The screen shown in Figure 4-11 is displayed.
4. Click Next.
The Span Definition screen is displayed.
6. Click Add to SPAN. The drive group is added to the Span pane on the right.
The screen shown in Figure 4-13 is displayed.
Parameter Description
IO Policy Options for data I/O of specific virtual drives. This policy does not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
If only one virtual drive needs to be configured, click Update Size and then Accept.
1. On the screen shown in Figure 4-248, click Back and repeat Step 3 to Step 5
for each virtual drive to be created.
In this example, three virtual drives are created.
3. Click Accept.
A confirmation dialog box is displayed.
4. Click Yes.
The dialog box shown in Figure 4-18 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.3.1 Logging In to the Configuration Utility.
NOTICE
Exercise caution when selecting New Configuration. If you select it, the
existing configuration of the selected physical drives will be deleted. Add
Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
NOTE
– If the drives in the same group have different sizes, the capacity of the RAID array
to be created is affected and an alarm will be generated when you click Accept
DG. It is recommended that you use drives with the same specifications to create a
RAID array.
– To release a selected drive, select the drive in Drive Groups and click Reclaim.
– A RAID 1 array supports an even number of drives, ranging from 2 to 32.
– If the total number of drives in all RAID arrays under the RAID controller card
exceeds 16 or 32 (depending on the RAID key specifications), no drive can be
added to RAID arrays.
3. Click Accept DG to accept the drive group.
The screen shown in Figure 4-25 is displayed.
4. Click Next.
The Span Definition screen is displayed.
6. Click Add to SPAN. The drive group is added to the Span pane on the right.
The screen shown in Figure 4-27 is displayed.
Parameter Description
IO Policy Options for data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
If only one virtual drive needs to be configured, click Update Size and then Accept.
1. On the screen shown in Figure 4-29, click Back and repeat Step 3 to Step 5
for each virtual drive to be created.
In this example, three virtual drives are created.
3. Click Accept.
A confirmation dialog box is displayed.
4. Click Yes.
The dialog box shown in Figure 4-32 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.3.1 Logging In to the Configuration Utility.
NOTICE
Exercise caution when selecting New Configuration. If you select it, the
existing configuration of the selected physical drives will be deleted. Add
Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
NOTE
– If the drives in the same group have different sizes, the capacity of the RAID array
to be created is affected and an alarm will be generated when you click Accept
DG. It is recommended that you use drives with the same specifications to create a
RAID array.
– To release a selected drive, select the drive in Drive Groups and click Reclaim.
– A RAID 5 array supports 3 to 32 drives.
– If the total number of drives in all RAID arrays under the RAID controller card
exceeds 16 or 32 (depending on the RAID key specifications), no drive can be
added to RAID arrays.
3. Click Accept DG to accept the drive group.
The screen shown in Figure 4-39 is displayed.
4. Click Next.
The Span Definition screen is displayed.
6. Click Add to SPAN. The drive group is added to the Span pane on the right.
The screen shown in Figure 4-41 is displayed.
Parameter Description
IO Policy Options for data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
If only one virtual drive needs to be configured, click Update Size and then Accept.
1. On the screen shown in Figure 4-43, click Back and repeat Step 3 to Step 5
for each virtual drive to be created.
In this example, three virtual drives are created.
3. Click Accept.
A confirmation dialog box is displayed.
4. Click Yes.
The dialog box shown in Figure 4-46 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.3.1 Logging In to the Configuration Utility.
NOTICE
Exercise caution when selecting New Configuration. If you select it, the
existing configuration of the selected physical drives will be deleted. Add
Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
– If the drives in the same group have different sizes, the capacity of the RAID array
to be created is affected and an alarm will be generated when you click Accept
DG. It is recommended that you use drives with the same specifications to create a
RAID array.
– To release a selected drive, select the drive in Drive Groups and click Reclaim.
– A RAID 6 array supports 3 to 32 drives.
– If the total number of drives in all RAID arrays under the RAID controller card
exceeds 16 or 32 (depending on the RAID key specifications), no drive can be
added to RAID arrays.
4. Click Next.
The Span Definition screen is displayed.
6. Click Add to SPAN. The drive group is added to the Span pane on the right.
The screen shown in Figure 4-55 is displayed.
Parameter Description
IO Policy Options for data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
Step 6 Set other parameters on the Virtual Drive Definition screen as required.
NOTE
If only one virtual drive needs to be configured, click Update Size and then Accept.
1. On the screen shown in Figure 4-57, click Back and repeat Step 3 to Step 7
for each virtual drive to be created.
In this example, three virtual drives are created.
3. Click Accept.
A confirmation dialog box is displayed.
4. Click Yes.
The dialog box shown in Figure 4-60 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Log in to the Configuration Utility. For details, see 4.3.1 Logging In to the
Configuration Utility.
NOTICE
Exercise caution when selecting New Configuration. If you select it, the
existing configuration of the selected physical drives will be deleted. Add
Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
NOTE
– Each drive group of a RAID 10 array supports an even number of drives, for
example, 2, 4, 6, 8 or 16.
– If the total number of drives in all RAID arrays under the RAID controller card
exceeds 16 or 32 (depending on the RAID key specifications), no drive can be
added to RAID arrays.
– If the drives in the same group have different sizes, the capacity of the RAID array
to be created is affected and an alarm will be generated when you click Accept
DG. It is recommended that you use drives with the same specifications to create a
RAID array.
– To release a selected drive, select the drive in Drive Groups and click Reclaim.
4. Repeat Step 3.1 to Step 3.3 to create Drive Group 1, as shown in Figure
4-68.
NOTE
5. Click Next.
The Span Definition screen is displayed. See Figure 4-69.
6. In the Array With Free Space area, select Drive Group:0 and click Add To
SPAN, as shown in Figure 4-70. Drive Group 0 is added to the Span pane on
the right, as shown in Figure 4-71.
7. Repeat Step 3.6 to add more drive groups to the span. See Figure 4-72.
Parameter Description
IO Policy Options for data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
If you do not need to configure multiple VDs, click Update Size. If you want to divide the
drive group into multiple VDs, manually set the VD capacity.
6. Click Accept.
A confirmation dialog box is displayed.
7. Click Yes.
The dialog box shown in Figure 4-79 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Log in to the RAID controller card Configuration Utility. For details, see 4.3.1
Logging In to the Configuration Utility.
NOTICE
– If you select New Configuration, the existing data of the selected physical
drives will be cleared. Exercise caution when performing this operation.
– Add Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
NOTE
4. Repeat Step 3.1 to Step 3.3 to create Drive Group 1, as shown in Figure
4-87.
NOTE
For details about the number of drive groups required for creating a RAID 50 array, see
4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60.
5. Click Next.
6. In the Array With Free Space area, select Drive Group:0 and click Add To
SPAN, as shown in Figure 4-89. Drive Group 0 is added to the Span pane on
the right, as shown in Figure 4-90.
Parameter Description
IO Policy Options for data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
If you do not need to configure multiple VDs, click Update Size. If you want to divide the
drive group into multiple VDs, manually set the VD capacity.
6. Click Accept.
A confirmation dialog box is displayed.
7. Click Yes.
The dialog box shown in Figure 4-98 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Workflow
Procedure
Step 1 Log in to the Configuration Utility. For details, see 4.3.1 Logging In to the
Configuration Utility.
NOTICE
– If you select New Configuration, the existing data of the selected physical
drives will be cleared. Exercise caution when performing this operation.
– Add Configuration is recommended if you want to create a RAID array.
Parameter Description
▪ Disabled
▪ Enabled
Default value: Disabled
NOTE
4. Repeat Step 3.1 to Step 3.3 to create Drive Group 1, as shown in Figure
4-106.
NOTE
For details about the number of drive groups required for creating a RAID 60 array, see
4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60.
5. Click Next.
6. In the Array With Free Space area, select Drive Group:0 and click Add To
SPAN, as shown in Figure 4-108. Drive Group 0 is added to the Span pane on
the right, as shown in Figure 4-109.
Parameter Description
IO Policy Options for data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
If you do not need to configure multiple VDs, click Update Size. If you want to divide the
drive group into multiple VDs, manually set the VD capacity.
6. Click Accept.
A confirmation dialog box is displayed.
7. Click Yes.
The dialog box shown in Figure 4-117 is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
NOTE
● Data will be cleared from the drives added to a RAID array. Before creating an array,
check that the drives to be added have no data or that the data does not need to be
retained.
● An automatically created RAID array may not meet your requirements. Pay attention to
the properties of the automatically created RAID array.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.3.1 Logging In to the Configuration Utility.
Step 2 Select the RAID array configuration mode.
1. On the Configuration Utility main screen, click Configuration Wizard in the
navigation tree.
The Configuration Wizard screen is displayed, as shown in Figure 4-120.
– Clear Configuration: clears the existing RAID configuration.
– New Configuration: clears the existing RAID configuration and creates a
RAID array.
– Add Configuration: adds new drives to the existing configuration. The
procedure for adding drives is the same as that for adding drives when
creating a RAID array. This option is not displayed if no drives are available
for creating a RAID array.
NOTICE
Exercise caution when selecting New Configuration. If you select it, the
existing configuration of the selected physical drives will be deleted. Add
Configuration is recommended if you want to create a RAID array.
▪ Disabled
▪ Enabled
Default value: Disabled
5. Click Accept.
6. The "Save this Configuration" message is displayed.
Step 3 Initialize virtual drives.
1. Click Yes.
The message "All data on the new Virtual Drives will be lost. Want to
Initialize?" is displayed.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
2. Click Yes.
RAID initialization starts.
After the initialization, Optimal is displayed in the Virtual Drives pane. See
Figure 4-123.
----End
NOTICE
Procedure
Step 1 Log in to the server through a remote virtual console (for example, iMana 200)
so that you can manage the server in real time. Access the Configuration Utility
main screen. For details, see 4.3.1 Logging In to the Configuration Utility.
For details, see 4.3.2 Manually Creating a RAID 0 Array to 4.3.9 Automatically
Creating a RAID Array.
2. Choose a virtual drive to be configured as the boot drive, select Set Boot
Drive, and click Go.
Set the virtual drive to Boot Drive.
After the configuration, current=None following Set Boot Drive changes to
current=N (N indicates the virtual drive ID).
----End
Related Operations
If a server is configured with drive controllers of different chips, set the drive boot
device on the BIOS.
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
If the boot type is changed after the OS has been installed in Legacy or UEFI
mode, the OS will be inaccessible. To access the OS again, change the boot type
back to the one used when the OS was installed. If the OS needs to be reinstalled,
select Legacy or UEFI mode based on the actual situation.
If multiple boot devices are configured, you are advised to set Boot Type to UEFI
Boot Type because certain boot devices may fail to boot if Boot Type is set to
Legacy Boot Type. If you still want to set Boot Type to Legacy Boot Type, then
disable redirection for certain serial ports or disable PXE for certain NICs based on
the services in use. For details, see "Setting PXE for a NIC" and "Setting Serial Port
Redirection" in the respective BIOS Parameter Reference.
● Huawei Server Brickland Platform BIOS Parameter Reference
● Huawei Server Grantley Platform BIOS Parameter Reference
Procedure
Step 1 Set the EFI mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
Step 2 Log in to the controller card management screen.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS2208 controller card and press Enter.
The screen shown in Figure 4-127 is displayed. Table 4-30 describes the
parameters on the screen.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 4-128.
Table 4-31 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
1. Use the ↑ and ↓ arrow keys to select Select Drives From and press Enter.
----End
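The parameter descriptions above reduce to two rules of thumb: use Read Ahead for HDDs and No Read Ahead for SSDs, and enable the drive cache only when cached data is protected against an unexpected power-off. A minimal, hypothetical helper that applies these rules follows; the function and its names are illustrative and are not part of the Configuration Utility.

def recommended_policies(media_type, power_protected=False):
    """Return (read_policy, drive_cache) per the guidance in this section.
    media_type is "HDD" or "SSD"; power_protected is True only if cached data
    survives an unexpected power-off."""
    read_policy = "Read Ahead" if media_type.upper() == "HDD" else "No Read Ahead"
    drive_cache = "Enable" if power_protected else "Disable"
    return read_policy, drive_cache

print(recommended_policies("HDD"))          # ('Read Ahead', 'Disable')
print(recommended_policies("SSD", True))    # ('No Read Ahead', 'Enable')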
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Parameter Description
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
1. Use the ↑ and ↓ arrow keys to select Select Drives From and press Enter.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Parameter Description
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
1. Use the ↑ and ↓ arrow keys to select Select Drives From and press Enter.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Parameter Description
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
1. Use the ↑ and ↓ arrow keys to select Select Drives From and press Enter.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Parameter Description
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
– RAID 10 supports 2 to 8 spans. Each span supports an even number of drives, for
example, 2, 4, 6, 8 or 16. The number of drives in each span must be the same.
– A RAID 10 array supports an even number of drives, for example, 4, 6...16 or 32.
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 16 or 32 (depending on the RAID key specifications), no drive can be
added to RAID arrays.
5. Select Apply Changes and press Enter.
Configure multiple spans for RAID 10. Figure 4-138 shows the configuration
screen.
At least two spans must be created for a RAID 10 array. A maximum of eight spans can be
created.
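As noted above, a RAID 10 array uses 2 to 8 spans, each span must hold the same, even number of drives, and the total drive count is limited by the RAID key (16 or 32). A minimal validation sketch, for illustration only (the helper is hypothetical and is not part of the Configuration Utility):

def valid_raid10_layout(drives_per_span, max_total=32):
    """Check a RAID 10 span layout against the rules in this section."""
    spans = len(drives_per_span)
    if not 2 <= spans <= 8:                    # 2 to 8 spans
        return False
    if len(set(drives_per_span)) != 1:         # every span must hold the same number of drives
        return False
    if drives_per_span[0] < 2 or drives_per_span[0] % 2 != 0:   # even drive count per span
        return False
    return sum(drives_per_span) <= max_total   # within the RAID key limit (16 or 32)

print(valid_raid10_layout([4, 4]))   # True
print(valid_raid10_layout([4, 6]))   # False: spans differ in size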
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 4-139.
Table 4-41 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Configure multiple spans for RAID 50. Figure 4-141 shows the configuration
screen.
At least two spans must be created for a RAID 50 array. A maximum of eight spans can be
created.
----End
● Data on a drive will be deleted after the drive is added to a RAID array. Before creating
a RAID array, check that there is no data on drives or that the data on drives is not
required. If the drive data needs to be retained, back up the data first.
● The LSI SAS2208 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be
of the same type, but can have different capacities or be provided by different vendors.
● 4.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 4-142.
Table 4-43 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values
of this parameter vary depending on the firmware
version.
– If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as
follows:
NOTE
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: uses the current cache policy.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Configure multiple spans for RAID 60. Figure 4-144 shows the configuration
screen.
At least two spans must be created for a RAID 60 array. A maximum of eight spans can be
created.
----End
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are
SAS drives, SATA drives can be used as dedicated hot spare drives. If the member
drives are SATA drives, SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot
be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array. (A minimal eligibility check
is sketched after this note.)
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
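The rules above can be checked before selecting a drive: only an idle drive qualifies, the media type (HDD or SSD) must match the member drives, the capacity must be at least that of the largest member drive, and a SAS drive cannot serve as a dedicated hot spare for a SATA array. The sketch below is a hypothetical pre-check that only restates these rules; it is not part of the Configuration Utility.

def can_be_hot_spare(candidate, members, dedicated=False):
    """candidate/members are dicts such as
    {"media": "HDD", "iface": "SAS", "size_gb": 600, "state": "Unconfigured Good"}."""
    if candidate["state"] != "Unconfigured Good":      # only idle drives qualify
        return False
    if candidate["media"] != members[0]["media"]:      # HDDs and SSDs cannot spare each other
        return False
    if candidate["size_gb"] < max(m["size_gb"] for m in members):
        return False                                   # must cover the largest member drive
    if dedicated and members[0]["iface"] == "SATA" and candidate["iface"] == "SAS":
        return False                                   # SAS cannot spare a SATA array (see note above)
    return True

members = [{"media": "HDD", "iface": "SAS", "size_gb": 600, "state": "Online"}] * 3
spare = {"media": "HDD", "iface": "SATA", "size_gb": 900, "state": "Unconfigured Good"}
print(can_be_hot_spare(spare, members, dedicated=True))   # True: SATA may spare a SAS array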
Prerequisites
Conditions
The following requirements must be met before you configure hot spare drives:
● The server has idle drives.
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● You have logged in to the Configuration Utility. For details, see 4.3.1 Logging
In to the Configuration Utility.
Data
Data preparation is not required for this operation.
Procedure
Step 1 In the Logical View area on the right of the CU main screen, select an idle drive
to be configured as a hot spare drive, as shown in Figure 4-145.
NOTE
Idle drives are displayed in blue and the status is Unconfigured Good.
Step 2 Select Make Global HSP on the screen shown in Figure 4-146.
NOTE
The hot spare drives are displayed in pink and the upper-level node of the global hot spare
drives is Global Hot Spares.
----End
Additional Information
Related Tasks
To delete a hot spare drive, see 4.5.1.3 Deleting a Hot Spare Drive.
Related Concepts
None.
Prerequisites
Conditions
The following requirements must be met before you configure hot spare drives:
● The server has idle drives.
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● You have logged in to the Configuration Utility. For details, see 4.3.1 Logging
In to the Configuration Utility.
Data
Procedure
NOTE
If SAS drives are grouped as a RAID array, a SATA drive can be used as a dedicated hot
spare drive. If SATA drives are grouped as a RAID array, the SAS drive cannot be used as the
dedicated hot spare drive.
Step 1 In Logical View on the right of the CU main screen, select an idle drive to be
configured as a hot spare drive, as shown in Figure 4-148.
NOTE
Idle drives are displayed in blue and the status is Unconfigured Good.
Step 2 Select Make Dedicated HSP on the screen shown in Figure 4-149.
Step 3 In the Drive Groups area on the right, select a RAID array.
Step 4 Click Go. The hot spare drive is configured.
Step 5 Click Home to return to the CU main screen, as shown in Figure 4-150.
The configured hot spare drives are displayed in the Logical View area.
NOTE
The hot spare drives are displayed in pink and the upper-level node of the dedicated hot
spare drives is Dedicated Hot Spares.
----End
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 4.3.1 Logging In
to the Configuration Utility.
Step 2 Delete a hot spare drive.
Step 3 Choose Physical View on the left of the Configuration Utility main page.
The drive list is displayed, as shown in Figure 4-151.
Step 4 On the right of the page, click the hot spare drive that you want to delete.
The hot spare drive property screen is displayed, as shown in Figure 4-152.
----End
Scenarios
The capacity of a RAID array can be expanded using either of the following
methods:
NOTICE
● Only RAID 0, RAID 1, RAID 5, and RAID 6 support capacity expansion through
drive addition.
● RAID 10, 50, and 60 do not support capacity expansion through drive addition.
● If a RAID array contains two or more VDs, its capacity cannot be expanded
through drive addition.
● If a faulty drive exists and the RAID array does not contain redundant data
(for example, a faulty drive exists during RAID 0 array expansion), the RAID
array fails.
● If a faulty drive exists and the RAID array still contains redundant data (for
example, a faulty drive exists during RAID 1 array expansion), the expansion
continues. When the expansion is complete, replace the faulty drive and
rebuild the RAID array.
Perform capacity expansion through drive addition with caution. A minimal eligibility
pre-check is sketched below.
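The restrictions above amount to a short checklist: the array must be RAID 0, 1, 5, or 6, it must contain a single virtual drive, and a faulty member in a non-redundant array makes the expansion fail. A hypothetical pre-check, for illustration only:

def can_expand_by_adding_drive(level, vd_count, has_faulty_member=False):
    """Pre-check for capacity expansion through drive addition, per the NOTICE above."""
    if level not in (0, 1, 5, 6):        # RAID 10, 50, and 60 do not support this expansion
        return False, "RAID level does not support expansion through drive addition"
    if vd_count >= 2:                    # arrays with two or more VDs cannot be expanded this way
        return False, "the RAID array contains two or more virtual drives"
    if has_faulty_member and level == 0: # no redundant data: the array would fail
        return False, "faulty member drive in a non-redundant array"
    if has_faulty_member:                # redundant data: expansion continues
        return True, "replace the faulty drive and rebuild the array after expansion"
    return True, ""

print(can_expand_by_adding_drive(5, 1))    # (True, '')
print(can_expand_by_adding_drive(10, 1))   # (False, ...) not supported for RAID 10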
Prerequisites
Conditions
The following conditions must be met before you add a drive for RAID array
capacity expansion on a server:
● The server has drives that have not been added to a RAID array.
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● You have logged in to the Configuration Utility. For details, see 4.3.1 Logging
In to the Configuration Utility.
Data
Drive data has been backed up.
Procedure
Step 1 On the left of the Configuration Utility main screen, select Virtual Drives.
The virtual drive selection screen is displayed.
Step 4 Select Change RAID Level and Add Drive, choose a drive from the area below,
and click Go.
The Confirm Page screen is displayed.
NOTICE
● The RAID level must be the same as the original RAID level.
● Only one drive can be added for a capacity expansion operation.
Step 8 On the right of the Configuration Utility main screen, view the RAID configuration
and check the capacity expansion result.
----End
Prerequisites
Conditions
The following conditions must be met before you increase the available space of a
RAID array:
Data
Data preparation is not required for this operation.
Procedure
Step 1 On the left of the Configuration Utility main screen, select Virtual Drives.
The virtual drive selection screen is displayed.
Step 4 Enter the percentage of the available space to be increased, and click OK.
Step 6 On the right of the Configuration Utility main screen, view the RAID configuration
and check the capacity expansion result.
----End
Prerequisites
Conditions
The following conditions must be met before you migrate a RAID level:
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● The current number of drives meets the requirements of the target RAID level.
● You have logged in to the Configuration Utility. For details, see 4.3.1 Logging
In to the Configuration Utility.
LSI SAS2208 controller cards support the following RAID level migration modes:
● Migrate RAID 0 to RAID 1, 5, or 6.
● Migrate RAID 1 to RAID 0, 5, or 6.
● Migrate RAID 5 to RAID 0 or 6.
● Migrate RAID 6 to RAID 0 or 5.
● If a RAID array contains two or more VDs, it does not support RAID level
migration.
Table 4-45 lists the minimum numbers of drives to be added for RAID level
migration.
Table 4-45 Minimum numbers of drives to be added for RAID level migration
RAID Level Migration | Number of Drives Before Migration | Minimum Number of Drives to Be Added
Data
To avoid data loss, back up data in the current RAID array before RAID level
migration.
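The supported migration paths listed above can be expressed as a simple lookup. The sketch below is illustrative only; it also rejects arrays that already contain two or more virtual drives, per the note above.

# Supported RAID level migrations for the LSI SAS2208, as listed above.
SUPPORTED_MIGRATIONS = {
    0: {1, 5, 6},
    1: {0, 5, 6},
    5: {0, 6},
    6: {0, 5},
}

def can_migrate(current_level, target_level, vd_count=1):
    """True if the migration path is supported and the array holds a single VD."""
    if vd_count >= 2:                    # arrays with two or more VDs do not support migration
        return False
    return target_level in SUPPORTED_MIGRATIONS.get(current_level, set())

print(can_migrate(0, 5))   # True
print(can_migrate(5, 1))   # False: RAID 5 can migrate only to RAID 0 or RAID 6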
Procedure
Step 1 On the left of the Configuration Utility main screen, select Virtual Drives.
The virtual drive selection screen is displayed. See Figure 4-162.
Step 4 Select a method for RAID level migration and click Go.
● To perform RAID level migration without adding drives, select Change RAID
Level and choose the target RAID level from the drop-down list box below.
● To perform RAID level migration with drives added, select Change RAID Level
and Add Drive and choose drives from the area below and the target RAID
level from the drop-down list box above.
Step 6 On the right of the Configuration Utility main screen, view the RAID configuration
and check the RAID level migration result.
NOTE
The duration required for RAID level migration varies depending on the data volume in the
RAID array. Wait for a moment if the data volume is large.
----End
● After you click Physical View, Logical View will be displayed in the left pane
and Physical View will be displayed in the right pane.
● After you click Logical View, Physical View will be displayed in the left pane
and Logical View will be displayed in the right pane.
● Drive Group: displayed in the format "Drive Group xx, RAID level". Click Drive Group.
The screen shown in 4.8.9.1 Drive Group is displayed.
● Virtual Drives: displayed in the format "Virtual Drives xxx". Click a RAID under Virtual
Drives. The screen shown in 4.8.9.2 Virtual Drives is displayed.
● Drives: displayed in the format "Backplane Slot:drive slot number, type, capacity,
status". Click a drive under Drives. The screen shown in 4.8.9.3 Drives is displayed.
● Global Hot Spares: displayed in the format "Backplane Slot:drive slot number, type,
capacity, status". Click a drive under Global Hot Spares. The screen shown in 4.8.9.4
Global/Dedicated Hot Spares is displayed.
● Dedicated Hot Spares: displayed in the format "Backplane Slot:drive slot number, type,
capacity, status". Click a drive under Dedicated Hot Spares. The screen shown in 4.8.9.4
Global/Dedicated Hot Spares is displayed.
● Unconfigured Drives: displayed in the format "Backplane Slot:drive slot number, type,
capacity, status". Click a drive under Unconfigured Drives. The screen shown in 4.8.9.5
Unconfigured Drives is displayed.
----End
In the Backplane area, drives are listed in the format of "Slot:drive slot number,
type, capacity, status".
● Online: The drive has been added to a RAID array. Click a drive in Online state. The
screen shown in 4.8.9.1 Drive Group is displayed.
● Unconfigured Good: The drive is idle and has not been added to any RAID array or
configured as a hot spare drive. Click a drive in Unconfigured Good state. The screen
shown in 4.8.9.5 Unconfigured Drives is displayed.
----End
NOTICE
● To restore a RAID array after a member drive is hot swapped, set the drive to
UGOOD and import the foreign configuration.
● If the number of faulty or missing drives exceeds the maximum number
allowed by the RAID array, the RAID array cannot be imported.
● To avoid configuration import failure, replace the original RAID controller card
with a new card of the same type.
Procedure
Logging In to the Configuration Utility Screen
1. Log in to the server through a remote virtual console so that you can manage the
server in real time. Access the Configuration Utility main screen. For
details, see 4.3.1 Logging In to the Configuration Utility.
Importing Foreign Configurations
2. If a foreign configuration is detected, information shown in Figure 4-167 is
displayed on the Configuration Utility screen.
3. Click Preview.
The Foreign Configuration Preview screen is displayed. See Figure 4-168.
4. Click Import.
5. After the import is complete, check the imported configuration in the Logical
View area on the right of the Configuration Utility main screen.
NOTICE
If you replace a RAID controller card when a RAID array has been configured on
the server, the RAID configuration will be considered as a foreign configuration. If
you clear a foreign configuration, the RAID configuration will be lost. Exercise
caution when performing this operation.
Procedure
Step 1 Access the server using the Remote Virtual Console. Log in to the Configuration
Utility. For details, see 4.3.1 Logging In to the Configuration Utility.
Step 2 Clear a foreign configuration.
If a foreign configuration is detected, information shown in Figure 4-169 is
displayed on the Configuration Utility screen.
1. Click Clear.
2. In the Logical View area on the right of the Configuration Utility main
screen, check the unconfigured drives.
You can see that the drive whose configuration has been cleared is in the
Unconfigured Good state.
----End
Scenarios
If a RAID array fails, discard preserved cache before logging in to the
Configuration Utility again.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 4.3.1 Logging In to
the Configuration Utility.
Step 2 In the window shown in Figure 4-170, select the RAID controller card whose
preserved cache is to be discarded and click Discard Cache.
NOTICE
● Before clearing the cache of the LSI SAS2208 controller card, disconnect the
RAID controller card from the drive by removing the cable or drive. Otherwise,
the configuration information of the failed RAID array will be deleted.
● The cache of the RAID controller card contains data that is being read or
written. Clearing the preserved cache may cause data loss.
NOTE
If the preserved cache still exists, see 4.7.4 Failed to Clear Preserved Cache.
----End
Procedure
Step 1 Access the server using the Remote Virtual Console.
Step 2 Log in to the Configuration Utility. For details, see 4.3.1 Logging In to the
Configuration Utility.
Step 3 Create a RAID array.
Step 4 Check the consistency.
1. On the left of the Configuration Utility main screen, select Virtual Drives.
The virtual drive option screen is displayed, as shown in Figure 4-172.
NOTICE
If the Configuration Utility detects errors in the data or check values of
redundant data, it corrects these errors automatically. Back up all
data before performing a consistency check.
----End
Scenarios
If the server does not require an array or you need to reconfigure an array, delete
the existing array configuration to release drives.
Procedure
Step 1 Access the server using the Remote Virtual Console.
Step 2 Log in to the Configuration Utility. For details, see 4.3.1 Logging In to the
Configuration Utility.
5. After the RAID array is deleted, the Configuration Utility main screen is
displayed.
----End
● Global hot spare drive: shared by all RAID arrays under a controller. One or more
global hot spare drives can be configured. A global hot spare drive automatically
replaces a failed drive of the same type in any RAID array.
● Dedicated hot spare drive: replaces a failed drive only in a specified RAID array
under a controller. One or more dedicated hot spare drives can be configured for
each RAID array. A dedicated hot spare drive automatically takes over the services
of a failed drive of the same type only in the specified RAID array.
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are
SAS drives, SATA drives can be used as dedicated hot spare drives. If the member
drives are SATA drives, SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot
be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 4.4.1 Logging In to
the Management Screen.
Procedure
NOTE
If SAS drives are grouped as a RAID array, a SATA drive can be used as a dedicated hot
spare drive. If SATA drives are grouped as a RAID array, the SAS drive cannot be used as the
dedicated hot spare drive.
Step 1 Log in to the Configuration Utility. For details, see 4.4.1 Logging In to the
Management Screen.
Step 2 Access the Drive Management screen.
1. On the main screen, select Drive Management and press Enter.
2. Select a drive and press Enter. The drive detail screen is displayed, as shown
in Figure 4-178.
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 4.4.1 Logging In
to the Management Screen.
----End
NOTICE
● Only RAID 0, RAID 1, RAID 5, and RAID 6 support capacity expansion through
drive addition.
● RAID 10, 50, and 60 do not support capacity expansion through drive addition.
● If a RAID array contains two or more virtual drives, its capacity cannot be
expanded through drive addition.
● The RAID controller card does not allow you to reconfigure two RAID arrays
(that is, reconfigure virtual drives, including adding drives or migrating RAID
levels) at the same time. Perform operations on the next RAID array after the
current process is complete.
Prerequisites
Conditions
The following conditions must be met before you add a drive for RAID array
capacity expansion on a server:
● The server has drives that have not been added to a RAID array.
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● You have logged in to the Configuration Utility. For details, see 4.4.1 Logging
In to the Management Screen.
● Drive data has been backed up.
Data
None
Procedure
Step 1 Access the Virtual Drive Management screen.
1. Select Main Menu and press Enter.
2. Select Virtual Drive Management and press Enter.
The Virtual Drive Management screen is displayed. See Figure 4-180.
NOTICE
The RAID level must be the same as the original RAID level.
– The types and specifications of the drive to be added must be the same as those of
the member drives in the RAID array. The capacity of the drive must be greater
than or equal to the capacity of the smallest drive in the RAID array.
– During capacity expansion, you need to add two drives to RAID 1 each time, and
only one drive to RAID 0, RAID 5, or RAID 6 each time. (A minimal check is
sketched after this procedure.)
– A RAID controller card cannot be used to expand the capacity of two or more RAID
arrays at the same time.
8. Select Confirm and press Enter.
9. Select Yes and press Enter.
The system displays a message indicating that the operation is successful.
----End
NOTE
The capacity expansion will be interrupted by a server restart, but it continues after the
server is restarted.
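Per the notes in the procedure above, each expansion operation adds a fixed number of drives (two for RAID 1, one for RAID 0, RAID 5, or RAID 6), and the added drive must match the member drives and be no smaller than the smallest member. A hypothetical check, for illustration only:

def drives_required_per_expansion(level):
    """Drives that must be added in one expansion operation, per the note above."""
    return 2 if level == 1 else 1        # RAID 1 needs a pair; RAID 0/5/6 need one drive

def new_drive_ok(new_size_gb, member_sizes_gb):
    """The added drive must be at least as large as the smallest member drive."""
    return new_size_gb >= min(member_sizes_gb)

print(drives_required_per_expansion(1))      # 2
print(new_drive_ok(600, [600, 900, 900]))    # True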
Scenarios
If a virtual drive does not occupy the entire capacity of all its member drives, you
can expand the virtual drive capacity by adjusting the available virtual drive space.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 4.4.1 Logging In to the Management Screen.
Parameter Description
----End
Screen Introduction
On the Main Menu screen, select a drive under the Drive Management node and
press Enter to display the drive operation menu, as shown in Figure 4-188. Table
4-49 describes the operations that are available.
Mark drive as Missing: Deletes the offline drive from the RAID array.
Rebuilding RAID
Step 1 On the menu, choose Rebuild and press Enter.
The Rebuild menu is displayed.
Step 2 Select an operation type and press Enter.
A confirmation dialog box is displayed.
----End
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 4.4.1 Logging In
to the Management Screen.
Step 2 Access the Controller Management screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Controller Management and press Enter.
The basic information about the RAID controller card is displayed, as shown in
Figure 4-189. Table 4-50 describes the parameters on the screen.
PCI Slot Number: PCI slot number of the RAID controller card.
Cache and Memory: displays cache and memory information about the RAID controller card.
Boot Mode: specifies the action to be taken when the BIOS detects an exception.
● Stop on errors (default): stops startup and continues with
startup only when user confirms.
● Pause on errors: suspends startup and continues with
startup even if the user does not confirm after a set
period.
● Ignore errors: continues startup. This option is usually for
system diagnosis.
● Safe mode on errors: enters safe startup mode.
----End
NOTICE
● To restore a RAID array after a member drive is hot swapped, set the drive to
Unconfigured Good (UGOOD) and import the foreign configuration. (A CLI sketch follows this notice.)
● If you replace a RAID controller card when a RAID array has been configured
on the server, the RAID configuration will be considered as Foreign
Configuration. If you clear a foreign configuration, the RAID configuration will
be lost. Exercise caution when doing this operation.
● If the number of faulty or missing drives exceeds the maximum number
allowed by the RAID array, the RAID array cannot be imported.
● To avoid configuration import failure, replace the original RAID controller card
with a new card of the same type.
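For reference, the same recovery can typically be performed with the StorCLI tool. The commands below are a minimal sketch only; controller 0, enclosure 252, and slot 1 are assumed example IDs (query the actual IDs as described in 4.11.2.25 Querying RAID Controller Card, RAID Array, or Physical Drive Information):
domino:~# ./storcli64 /c0/e252/s1 set good force
domino:~# ./storcli64 /c0/fall import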
Step 2 On the main screen, select Main Menu and press Enter.
----End
----End
Step 1 Choose Main Menu > Configuration Management > Create Virtual Drive. The
Create Virtual Drive screen is displayed, as shown in Figure 4-196.
Step 2 Set Select Drives From to Free Capacity, as shown in Figure 4-197.
Step 4 Select the RAID array for which you want to create multiple virtual drives and
press Enter, as shown in Figure 4-199.
Select Drives From Specifies the source of member drives of the virtual drive.
Member drive sources are as follows:
● Unconfigured Capacity: idle drives that are not added
to any virtual drives
● Free Capacity: space that is not used as virtual drives in
drive groups
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as follows:
● No Read Ahead: disables the Read Ahead function.
● Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read Ahead
for HDDs and No Read Ahead for SSDs.
Write Policy Default cache write policy of the RAID array. The values of
this parameter vary depending on the firmware version.
● If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as follows:
– Write Through: When the drive subsystem receives
all data, the RAID controller card signals the host
that the data transmission is complete.
– Write Back: If the supercapacitor is not configured or
the supercapacitor is faulty, the RAID controller card
automatically switches to the Write Through mode.
– Always Write Back: When the cache receives all
data, the RAID controller card signals the host that
the data transmission is complete.
● If the firmware version of the LSI SAS2208 is
3.400.95-4061, the cache write policies are as follows:
– Write Through: When the drive subsystem receives
all data, the RAID controller card signals the host
that the data transmission is complete.
– Write Back: If the supercapacitor is not configured or
the supercapacitor is faulty, the RAID controller card
automatically switches to the Write Through mode.
– Force Write Back: When the cache receives all data,
the RAID controller card signals the host that the
data transmission is complete.
NOTE
– Default value: Write Back
– If this parameter is set to Write Back, the current write cache policy automatically changes to Write Through when any of the following occurs:
  – The RAID controller card does not have a supercapacitor.
  – The supercapacitor is being charged or discharged.
  – The supercapacitor is damaged.
  – Pinned or preserved cache exists.
– If this parameter is set to Always Write Back or Force Write Back, the DDR write data of the RAID controller card will be lost when:
  – The server is powered off unexpectedly.
  – The supercapacitor is not installed.
  – The supercapacitor is being charged.
  This mode is not recommended.
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
● Direct:
– In a read scenario, data is directly read from drives.
(If Read Policy is set to Read Ahead, data is read
from the cache.)
– In a write scenario, data is written into the RAID
cache. (If Write Policy is set to Write Through, data
is directly written into drives.)
● Cached: Data is read from or written to the cache. Use
this option only when configuring CacheCade 1.1.
Default value: Direct
NOTE
The LSI SAS2208 does not support the CacheCade function.
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
● Unchanged: uses the current cache policy.
● Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
● Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
----End
NOTE
A RAID array supports a maximum of 16 virtual drives. To create another virtual drive,
repeat Step 1 to Step 11.
4.7 Troubleshooting
This section describes solutions to drive faults, RAID controller card faults, and
battery or capacitor faults. For other situations, see the Huawei Server
Maintenance Guide.
Solution
Step 1 Determine the slot number of the faulty drive.
● Locate the faulty drive based on the fault indicator, which is steady orange.
For details, see the drive numbering section in the user guide of the server
you use.
● Locate the faulty drive based on the iMana/iBMC drive alarm information. For
details, see iMana/iBMC Alarm Handling.
● Locate the faulty drive using the RAID controller card GUI. For details, see
4.8.7 Drives or 4.9.5 Drive Management.
● Locate the faulty drive using the RAID controller card CLI tool. For details, see
4.11.2.25 Querying RAID Controller Card, RAID Array, or Physical Drive
Information.
Step 2 Check for and delete the Preserved Cache data from the server.
NOTICE
● If the RAID array containing the Preserved Cache data is invalid (the number of
faulty drives in the RAID array exceeds the maximum number of faulty drives
supported by the RAID array), the RAID array will be deleted when
PreservedCache data is deleted.
● If the drive fault is caused by manual removal and installation of the drive in
the RAID array, remove the drive and then delete the Preserved Cache data. By
doing this, the RAID array will not be deleted.
● Clear the Preserved Cache data through the GUI. For details, see 4.5.8
Discarding Preserved Cache.
● To clear the Preserved Cache data through the CLI, perform the following
steps:
a. Run the storcli64 /c0 show preservedcache command to check whether
a preserved cache exists.
In this example, Preserved Cache data exists in VD 0 managed by
controller 0.
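The command output and the subsequent clear step are not reproduced in this issue. A minimal sketch of the clear command, assuming the preserved cache reported by the query belongs to VD 0 on controller 0 (replace v0 with the VD actually reported):
domino:~# ./storcli64 /c0/v0 delete preservedcache
Alternatively, ./storcli64 /c0/vall delete preservedcache clears the preserved cache of all virtual drives, as also used later in this troubleshooting chapter.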
NOTICE
Remove the faulty drive and install a new drive. The new drive can be restored in
the following ways based on the RAID configuration of the faulty drive:
● If the RAID array has a hot spare drive, copyback will be performed after the
hot spare drive is rebuilt. After data is copied to the new drive, the hot spare
drive restores to the hot backup state.
● If the RAID array has redundancy and has no hot spare drive, the
newly installed drive automatically rebuilds data. If more than one faulty
drive exists in a RAID array, replace the faulty drives one by one based on the
drive fault time. Replace the next drive only after the current drive data is
rebuilt.
● If the faulty drive is a pass-through drive, replace it.
● If the faulty drive belongs to a RAID array without redundancy (RAID 0),
create RAID 0 again.
– For details about how to create a RAID 0 array in Legacy mode, see 4.3.2
Manually Creating a RAID 0 Array.
– For details about how to create a RAID 0 array in UEFI mode, see 4.4.2
Creating RAID 0.
– For details about how to create a RAID 0 array by running commands,
see 4.11.2.7 Creating and Deleting a RAID Array.
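As a quick reference for the command-line method, a minimal sketch (controller 0, enclosure 252, and slot 2 are assumed example IDs; see 4.11.2.7 Creating and Deleting a RAID Array for the full syntax):
domino:~# ./storcli64 /c0 add vd r0 drives=252:2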
----End
Solution
Step 1 Replace the controller card. Then check whether the alarm is cleared.
● If yes, go to Step 2.
● If no, go to Step 3.
Step 2 Import the original RAID information and check whether data can be written to
the drives controlled by the RAID controller.
● If yes, no further action is required.
● If no, go to Step 3.
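If you use the CLI for this step, importing the original RAID information corresponds to importing the foreign configuration. A minimal StorCLI sketch, assuming controller 0 (the import command is standard StorCLI and is not part of the GUI procedure above):
domino:~# ./storcli64 /c0/fall import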
Step 3 Contact Huawei technical support.
----End
Solution
Step 1 Go to the Configuration Utility of the controller card and check whether the
battery or capacitor state is Present:
● If yes, go to Step 5.
● If no, go to Step 2.
Step 2 Power off the server, open the chassis cover, and check whether the controller card
is connected to a battery or capacitor.
● If yes, go to Step 4.
● If no, go to Step 3.
Step 3 Install the battery or capacitor based on the system configuration, and check
whether the capacitor state is Present.
● If yes, go to Step 5.
● If no, go to Step 7.
Step 4 Remove and install the battery or capacitor and check whether the capacitor state
is Present.
● If yes, go to Step 5.
● If no, go to Step 6.
Step 6 Replace the battery or capacitor, and check whether the capacitor state is Present
and whether the fault is rectified.
● If yes, no further action is required.
● If no, go to Step 7.
----End
Cause Analysis
This is an inherent problem of LSI SAS2208. The preserved cache occasionally fails
to be cleared on the CU screen, resulting in RAID array configuration failures.
Solution
Solution 1: Use StorCLI to clear preserved cache. (The following uses Linux as an
example and assumes that the user can log in to the OS.)
1. On the RAID configuration screen, set Boot Error Handling to Ignore errors.
See Figure 4-204.
5. After the installation is complete, run find ./ -name "*storcli*" to query the
installation directory.
6. Go to the directory and run the following command to clear preserved cache:
[linux~host]# cd /opt/MegaRAID/storcli
[linux~host]# ./storcli64 /c0/vall delete preservedcache
4. Restart the server from the virtual drive and load the toolkit when prompted.
5. After the loading is complete, press C to go to the CLI, and enter the user
name and password. See Figure 4-206.
Solution
Step 1 Select Some drivers are not healthy and press Enter.
The health status screen is displayed, as shown in Figure 4-208.
----End
NOTE
For description of messages displayed during startup of the RAID controller card and
handling suggestions displayed on the management screen, see A.3 Common Boot Error
Messages for RAID Controller Cards.
4.7.5.1 Some configured disks have been removed from your system/All of
the disks from your previous configuration are gone
Symptoms
When you select Repair the whole platform to open the Critical Message
screen, the message "Some configured disks have been removed from your
system" or "All of the disks from your previous configuration are gone" is
displayed, as shown in Figure 4-209.
Solution
The message indicates that drives are not detected by the RAID controller card,
which is not caused by the RAID controller card. You are advised to perform the
following operations based on actual needs:
● Power off the server and check whether cables are properly connected and
whether any drive is removed. If there are no abnormalities, power on the
server and check whether the alarm is cleared. If the alarm is cleared, the
fault is rectified. If the alarm persists, contact technical support.
● Ignore the message and perform the following steps to go to the RAID
controller card management screen:
a. On the screen shown in Figure 4-209, select Enter Your Input Here and
press Enter.
An input box is displayed.
b. Type c, select Yes or Ok, and press Enter.
c. Select Enter Your Input Here again and press Enter.
An input box is displayed.
d. Type y, select Yes or Ok, and press Enter.
The system displays "Critical Message handling completed. Please exit."
e. Press Esc to return to the Device Manager screen.
f. Select the LSI SAS2208 RAID controller card and press Enter. The RAID
controller card management screen is displayed.
Symptom
When you open the Critical Message screen, the message "There are offline or
missing virtual drives with preserved cache" is displayed, as shown in Figure
4-210.
Solution
Step 1 On the screen shown in Figure 4-210, select Enter Your Input Here and press
Enter.
The screen shown in Figure 4-211 is displayed.
Step 6 Select AVAGO MegaRAID<SAS2208> driver Health Protocol Utility and press
Enter.
The Dashboard View screen is displayed, as shown in Figure 4-214.
Step 12 The Device Manager screen is displayed. Check whether the message "The
platform is healthy" is displayed on the screen.
● If yes, no further action is required.
● If no, contact Huawei technical support.
----End
Symptom
On the Driver Healthy Protocol Utility screen, the message "The following VDs
have missing disks" is displayed.
Solution
Step 1 Repair the RAID controller card.
Step 4 Restart the server and check whether the message "The platform is healthy" is
displayed on the Device Manager screen.
● If yes, no further action is required.
● If no, contact Huawei technical support.
----End
Scenarios
The LSI SAS2208 BIOS Configuration Utility is used to configure and manage the
LSI SAS2208 RAID controller card. Configuration Utility is embedded in the BIOS of
the controller and runs independently from the OS. It simplifies the procedures for
configuring and managing RAID properties.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
3. Select the LSI SAS2208 RAID controller card and click Start.
The Configuration Utility main screen is displayed, as shown in Figure 4-218.
Table 4-54 describes the parameters.
Drives: allows you to view the physical drive properties and perform drive operations, such as creating a hot spare drive.
----End
Additional Information
Related Tasks
None
Related Concepts
Icon Function
Disables the onboard sound alarm for the RAID controller card.
Screen Introduction
Figure 4-219 shows the screen.
Screen Introduction
Figure 4-220 shows the Adapter Selection screen. Table 4-56 describes the
parameters on the screen.
Screen Description
Figure 4-221 shows the Controller Information screen. Table 4-57 describes the
parameters on the screen.
Unconfigured Good Spin Down: indicates whether to enable drive energy conservation for drives in the Unconfigured Good state.
Spin Down Time: time before a drive enters the standby mode for energy conservation. If a drive is not accessed within the standby duration, the drive enters the spin down state.
NOTE
Data protection is an advanced feature of RAID controller cards. The RAID controller
card stores keys that are used to encrypt the member drives. If a drive is removed
from the RAID controller card, data on the drive cannot be obtained without the key.
This setting is valid only for drives supporting data encryption.
2. Click Next.
The Controller Properties screen is displayed. See Figure 4-223.
4. Click Submit to save the settings or click Reset to restore default settings.
5. Click Next.
The Controller Properties screen is displayed. See Figure 4-224.
Parameter Description
Select VDs to Exclude CC: specifies virtual drives that do not require a consistency check.
7. Click Submit to save the settings or click Reset to restore default settings.
8. Click Home.
The Configuration Utility main screen is displayed.
3. Click Close.
4. Click Close to close the supercapacitor property screen.
The Controller Properties screen is displayed, as shown in Figure 4-223.
5. Click Home.
The Configuration Utility main screen is displayed.
The LSI SAS2208 RAID controller card rescans the status and configuration
information of physical or virtual drives.
After the scanning is complete, the screen shown in 4.8.9 Logical/Physical View is
displayed.
Screen Description
Figure 4-232 shows the Virtual Drives screen.
In the Virtual Drives pane, virtual drives are displayed in the Virtual drive
number:RAID type:Virtual drive capacity:Virtual drive status format.
A virtual drive can be in either of the following states:
Fast Initialization
NOTICE
Fast initialization will cause data loss. Exercise caution when performing this
operation.
After the initialization, the virtual drive status changes to Optimal in the
Virtual Drives pane.
Slow Initialization
NOTICE
Slow initialization will cause data loss. Exercise caution when performing this
operation.
1. On the Virtual Drives screen, select Slow Initialize and click Go.
The Confirm Page screen is displayed.
2. Click Yes.
The slow initialization of the virtual drive starts. See Figure 4-234.
3. After the initialization, the virtual drive status changes to Optimal in the
Virtual Drives pane.
Consistency Check
1. On the Virtual Drives screen, select Check Consistency and click Go.
The Confirm Page screen is displayed.
2. Click Yes.
The consistency check screen starts. See Figure 4-235.
NOTICE
3. After the consistency check, the virtual drive status changes to Optimal in the
Virtual Drives pane.
4.8.7 Drives
This menu allows you to view the physical drive properties and perform operations
such as creating a hot spare drive.
Screen Introduction
Figure 4-237 shows the Drives screen.
In the Drives area, drives are listed in the format of "Slot:drive slot number,
type, capacity, status".
The drive state is as follows:
● Online: The drive has been added to a RAID array.
● Dedicated Hot Spare: The drive is a dedicated hot spare drive.
● Global Hot Spare: The drive is a global hot spare drive.
● Unconfigured Good: The drive is idle and not added to any RAID array or
configured as a hot spare drive.
Rebuilding RAID
In the Drives area, select a drive.
Select Rebuild under Operations, and click Go.
Online: click a drive in the Online state. The screen shown in 4.8.9.1 Drive Group is displayed.
Global Hot Spare: click a drive in the Global Hot Spare state. The screen shown in 4.8.9.4 Global/Dedicated Hot Spares is displayed.
Dedicated Hot Spare: click a drive in the Dedicated Hot Spare state. The screen shown in 4.8.9.4 Global/Dedicated Hot Spares is displayed.
NOTICE
If you select New Configuration, all RAID configurations on the RAID controller
card are deleted and the configuration starts again.
Screen Description
Figure 4-239 shows the screen. Table 4-68 describes the parameters on the
screen.
Manual Configuration: allows users to control all storage attributes and set parameters.
– If the drives in the same group have different sizes, the capacity of the RAID array
to be created is affected and an alarm will be generated when you click Accept
DG. It is recommended that you use drives with the same specifications to create a
RAID array.
– To release a selected drive, select the drive in Drive Groups and click Reclaim.
– RAID 10, RAID 50, and RAID 60 require at least two drive groups. Add drive groups
based on the RAID type to be configured.
4. Click Next.
The Span Definition screen is displayed. See Figure 4-243.
RAID 10, RAID 50, and RAID 60 require at least two drive groups. Add drive groups to
the span based on the RAID type to be configured.
The screen shown in Figure 4-244 is displayed.
Parameter Description
IO Policy: options for data I/O of special virtual drives. This policy does not affect cache prefetch.
– Direct:
Drive Cache: cache policy of physical drives (valid only for drives with cache).
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is powered
off unexpectedly.
– Unchanged: The current drive cache policy remains
unchanged.
Default value: Unchanged
NOTE
If only one virtual drive needs to be configured, click Update Size and then Accept.
1. On the screen shown in Figure 4-246, click Back and repeat Step 2 to Step 4
for each virtual drive to be created.
In this example, three virtual drives are created. See Figure 4-247.
3. Click Accept.
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
----End
NOTICE
Initializing virtual drives will damage data on the virtual drives. If the original
data on the drives needs to be retained, select No.
2. Click Yes.
RAID initialization starts.
After the initialization, Optimal is displayed in the Virtual Drives pane. See
Figure 4-254.
----End
Screen Introduction
Figure 4-256 shows the Select Configuration Method screen.
Logical View
Figure 4-257 shows the screen. Table 4-71 describes the screen.
Drive Group: displayed in the format "Drive Group xx, RAID level". Click Drive Group. The screen shown in 4.8.9.1 Drive Group is displayed.
Virtual Drives: displayed in the format "Virtual Drives xxx". Click a RAID under Virtual Drives. The screen shown in 4.8.9.2 Virtual Drives is displayed.
Drives: displayed in the format "Backplane Slot:drive slot number, type, capacity, status". Click a drive under Drives. The screen shown in 4.8.9.3 Drives is displayed.
Global Hot Spares: displayed in the format "Backplane Slot:drive slot number, type, capacity, status". Click a drive under Global Hot Spares. The screen shown in 4.8.9.4 Global/Dedicated Hot Spares is displayed.
Dedicated Hot Spares: displayed in the format "Backplane Slot:drive slot number, type, capacity, status". Click a drive under Dedicated Hot Spares. The screen shown in 4.8.9.4 Global/Dedicated Hot Spares is displayed.
Unconfigured Drives: displayed in the format "Backplane Slot:drive slot number, type, capacity, status". Click a drive under Unconfigured Drives. The screen shown in 4.8.9.5 Unconfigured Drives is displayed.
Physical View
Figure 4-258 shows the screen. Table 4-72 describes the screen.
In the Backplane area, drives are listed in the format of "Slot:drive slot number,
type, capacity, status".
Online: the drive has been added to a RAID array. Click a drive in the Online state. The screen shown in 4.8.9.1 Drive Group is displayed.
Unconfigured Good: the drive is idle and not added to any RAID array or configured as a hot spare drive. Click a drive in the Unconfigured Good state. The screen shown in 4.8.9.5 Unconfigured Drives is displayed.
Screen Introduction
Figure 4-259 shows the Drive Group screen. Table 4-73 describes the parameters
on the screen.
NOTE
Data protection is an advanced feature of RAID controller cards. The RAID controller card
stores keys that are used to encrypt the member drives. If a drive is removed from the RAID
controller card, data on the drive cannot be obtained without the key. This setting is valid
only for drives that support data encryption.
Screen Description
Figure 4-260 shows the screen. Table 4-74 describes the parameters on the
screen.
Strip Size Size of a data strip on each drive. The default value is
256 KB.
----End
A message is displayed, indicating that all data on the virtual drive will be lost if it
is deleted.
The virtual drive is deleted, and the Logical View screen is displayed.
----End
----End
----End
NOTICE
Fast initialization will cause data loss. Exercise caution when performing this
operation.
----End
NOTICE
Slow initialization will cause data loss. Exercise caution when performing this
operation.
----End
Checking Consistency
Step 1 Select CC and click Go.
A message is displayed, indicating that the consistency check may cause
inconsistent information in logs.
Step 2 Click Yes.
The screen shown in Figure 4-262 is displayed, showing the initialization progress.
● Suspend: suspends consistency check.
● Abort: aborts consistency check.
----End
----End
Step 2 Select Change RAID Level and select a RAID level from the drop-down list box.
The RAID level is changed and the Advanced Operations screen is displayed.
----End
Step 2 Select Change RAID Level and Add Drive and select a drive to be added to the
RAID array.
Step 3 Click Go.
A message is displayed, indicating that the operation is irreversible.
Step 4 Click Yes.
The RAID level is changed and the Advanced Operations screen is displayed.
----End
● Delete Virtual Drive after Erase: deletes a virtual drive after the data is
erased.
Step 4 Click OK.
A message is displayed, indicating that data on the virtual drive will be erased.
Step 5 Click Yes.
After the data is erased, 4.3.1 Logging In to the Configuration Utility is
displayed.
----End
----End
4.8.9.3 Drives
This option allows you to view parameters of the drives in a RAID array and
manage the drives.
Screen Introduction
Figure 4-266 shows the screen. Table 4-75 describes the parameters on the
screen.
Click Next to go to the next screen, as shown in Figure 4-267 and Figure 4-268.
Parameter Description
FDE Capable: indicates whether a drive supports the Full Disk Encryption (FDE) technology.
----End
Drive Indicators On
----End
Screen Introduction
Figure 4-269 shows the drive properties screen (1), and Table 4-76 describes the
parameters on the screen.
Click Next to go to the next screen, as shown in Figure 4-270 and Figure 4-271.
----End
Drive Indicators On
Step 1 Select Locate and click Go.
All drive indicators of the current virtual drive blink.
----End
----End
Screen Introduction
Figure 4-272 shows the drive properties screen (1), and Table 4-77 describes the
parameters on the screen.
Click Next to go to the next screen, as shown in Figure 4-273 and Figure 4-274.
Parameter Description
FDE Capable: indicates whether a drive supports the Full Disk Encryption (FDE) technology.
----End
----End
----End
Prepare Removal
Step 1 Select Prepare Removal and click Go.
----End
Drive Indicators On
Step 1 Select Locate and click Go.
----End
----End
Drive Erase
Step 1 Select Drive Erase and click Go.
The Mode Selection - Drive Erase screen is displayed. See Figure 4-276.
The LSI SAS2208 RAID controller card supports the following levels of secure
erasure for data on a drive:
----End
4.8.10 Events
This screen displays system event information.
Screen Description
Figure 4-277 shows the screen. Table 4-78 describes the parameters on the
screen.
Solution
Step 1 Set the parameters and click Go.
The events that meet the search criteria are displayed in the right pane, as shown
in Figure 4-278. Table 4-79 describes the parameters.
Parameter Description
----End
4.8.11 Exit
Click Exit to exit the Configuration Utility.
Procedure
Step 1 Set the EFI mode. For details, see A.1.3 Setting the Legacy Mode.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS2208 controller card and press Enter.
The screen shown in Figure 4-280 is displayed. Table 4-80 describes the
parameters on the screen.
----End
Screen Description
Figure 4-281 shows the Configuration Management screen. Table 4-81
describes the parameters on the screen.
----End
Screen Description
The Create Virtual Drive screen is shown in Figure 4-282. Table 4-82 describes
the parameters on the screen.
Select Drives From Specifies the source of member drives of the virtual drive.
Member drive sources are as follows:
● Unconfigured Capacity: idle drives that are not added
to any virtual drives
● Free Capacity: space that is not used as virtual drives in
drive groups
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Parameter Description
Read Policy Read policy of the virtual drive. The options are as follows:
● No Read Ahead: disables the Read Ahead function.
● Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read Ahead
for HDDs and No Read Ahead for SSDs.
Parameter Description
Write Policy Default cache write policy of the RAID array. The values of
this parameter vary depending on the firmware version.
● If the firmware version of the LSI SAS2208 is
3.460.165-8277, the cache write policies are as follows:
– Write Through: When the drive subsystem receives
all data, the RAID controller card signals the host
that the data transmission is complete.
– Write Back: If the supercapacitor is not configured or
the supercapacitor is faulty, the RAID controller card
automatically switches to the Write Through mode.
– Always Write Back: When the cache receives all
data, the RAID controller card signals the host that
the data transmission is complete.
● If the firmware version of the LSI SAS2208 is
3.400.95-4061, the cache write policies are as follows:
– Write Through: When the drive subsystem receives
all data, the RAID controller card signals the host
that the data transmission is complete.
– Write Back: If the supercapacitor is not configured or
the supercapacitor is faulty, the RAID controller card
automatically switches to the Write Through mode.
– Force Write Back: When the cache receives all data,
the RAID controller card signals the host that the
data transmission is complete.
NOTE
● Default value: Write Back
● If this parameter is set to Write Back, the current write cache policy automatically changes to Write Through when any of the following occurs:
  – The RAID controller card does not have a supercapacitor.
  – The supercapacitor is being charged or discharged.
  – The supercapacitor is damaged.
  – Pinned or preserved cache exists.
● If this parameter is set to Always Write Back or Force Write Back, the DDR write data of the RAID controller card will be lost when:
  – The server is powered off unexpectedly.
  – The supercapacitor is not installed.
  – The supercapacitor is being charged.
  This mode is not recommended.
Parameter Description
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
● Direct:
– In a read scenario, data is directly read from drives.
(If Read Policy is set to Read Ahead, data is read
from the cache.)
– In a write scenario, data is written into the RAID
cache. (If Write Policy is set to Write Through, data
is directly written into drives.)
● Cached: Data is read from or written to the cache. Use
this option only when configuring CacheCade 1.1.
Default value: Direct
NOTE
The LSI SAS2208 does not support the CacheCade function.
Drive Cache: cache policy of physical drives (valid only for drives with cache).
● Unchanged: uses the current cache policy.
● Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
● Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
----End
Screen Introduction
Figure 4-284 shows the Create Profile Based Virtual Drive screen. Table 4-84
describes the parameters on the screen.
Parameter Description
Generic RAID 5 Quickly creates RAID 5, which can be used in scenarios that
have requirements on data redundancy and throughput.
The RAID array configuration screen is displayed, as shown in Figure 4-285. Table
4-85 describes the parameters on the screen.
Parameter Description
Create Dedicated Hot Spare: whether to create dedicated hot spare drives for the RAID array.
Step 2 View or configure parameters. For parameter details, see Table 4-85.
----End
Screen Description
Figure 4-286 shows the screen. Table 4-86 describes the parameters on the
screen.
Screen Introduction
Figure 4-287 shows the Manage Foreign Configuration screen.
----End
----End
Screen Description
Figure 4-289 shows the screen. Table 4-87 describes the parameters on the
screen.
Parameter Description
PCI Slot Number: PCI slot number of the RAID controller card.
Virtual Drive Count: number of virtual drives that can be managed by the RAID controller card.
Screen Description
Figure 4-290 shows the screen. Table 4-88 describes the parameters on the
screen.
----End
----End
----End
----End
The port rate option is displayed. The LSI SAS2208 supports the following port
rates: 1.5 Gbps, 3 Gbps, 6 Gbps, and Auto.
The message "Please restart the system for the changes to take effect" is
displayed.
----End
The screen shown in Figure 4-292 is displayed. Table 4-89 describes the
parameters on the screen.
----End
The screen shown in Figure 4-293 is displayed. Table 4-90 describes the
parameters on the screen.
----End
Screen Description
Figure 4-294 and Figure 4-295 show the Advanced Controller Properties screen.
Table 4-91 describes the parameters on the screen.
Cache and Memory: views cache and memory information about the controller.
Boot Mode: specifies the action to be taken when the BIOS detects an exception.
● Stop on errors: stops startup and continues with startup
only when user confirms.
● Pause on errors: suspends startup and continues with
startup even if the user does not confirm after a set
period.
● Ignore errors: continues startup. This option is usually for
system diagnosis.
● Safe mode on errors: enters safe startup mode.
The default value is Stop on errors.
Parameter Description
----End
Cache Flush Interval: specifies the interval for data updates in the cache.
----End
Parameter Description
----End
Parameter Description
Spin Down Unconfigured Good: specifies whether to apply the power saving mode to idle drives.
Spin Down Hotspare Drives: specifies whether to apply the power saving mode to hot spare drives.
Drives Standby Time: specifies the standby time before a drive enters the power saving mode. If there is no drive I/O within the standby time, the drive enters the spin-down mode.
----End
Emergency Spare: specifies the mode for using the emergency spare function when a drive is faulty.
● None
● Unconfigured Good: idle drives are used for rebuild.
● Global Hotspare: hot spare drives are used for rebuild.
● Unconfigured Good and Global Hotspare: idle and hot spare drives are used for rebuild.
Emergency for SMARTer: specifies whether to use a hot spare drive to replace a drive that prefails.
Persistent Hot Spare: specifies whether to enable the persistent hot spare function.
Replace Drive on SMART Error: specifies whether to enable the Replace Drive function when a SMART error occurs on a drive.
----End
----End
Screen Description
The Virtual Drive Management page displays the virtual drive list.
Select a virtual drive and press Enter. The drive detail screen is displayed, as
shown in Figure 4-301. Table 4-97 describes the parameters on the screen.
----End
Screen Description
Figure 4-303 shows the View Associated Drives screen that lists virtual drives.
Parameter Description
----End
4.9.4.2 Advanced
This screen allows you to view and modify advanced properties of a virtual drive.
Screen Description
Figure 4-305 shows the Advanced screen. Table 4-100 describes the parameters
on the screen.
Parameter Description
Default Write Cache Policy: default cache write policy of the RAID array. The values of this parameter vary depending on the firmware version.
● If the firmware version of the LSI SAS2208 is 3.460.165-8277,
the cache write policies are as follows:
– Write Through: When the drive subsystem receives all
data, the RAID controller card signals the host that the
data transmission is complete.
– Write Back: If the supercapacitor is not configured or the
supercapacitor is faulty, the RAID controller card
automatically switches to the Write Through mode.
– Always Write Back: When the cache receives all data, the
RAID controller card signals the host that the data
transmission is complete.
● If the firmware version of the LSI SAS2208 is 3.400.95-4061,
the cache write policies are as follows:
– Write Through: When the drive subsystem receives all
data, the RAID controller card signals the host that the
data transmission is complete.
– Write Back: If the supercapacitor is not configured or the
supercapacitor is faulty, the RAID controller card
automatically switches to the Write Through mode.
– Force Write Back: When the cache receives all data, the
RAID controller card signals the host that the data
transmission is complete.
NOTE
● Default value: Write Back
● If this parameter is set to Write Back, Current Write Cache Policy
will automatically change to Write Through when the RAID
controller card does not have a supercapacitor, the supercapacitor
is in the charge or discharge state, the supercapacitor is damaged,
or pinned/preserved cache exists.
● The Always Write Back or Force Write Back mode is not
recommended because the DDR write data of the RAID controller
card will be lost when the server is powered off unexpectedly, the
capacitor is not installed or is being charged.
Parameter Description
----End
Screen Description
The Drive Management screen lists drives.
Select a drive and press Enter. The drive detail screen is displayed, as shown in
Figure 4-306. Table 4-101 describes the parameters on the screen.
Step 5 Press Enter to finish the configuration and return to the previous screen.
----End
Step 8 Press Enter to finish the configuration and return to the previous screen.
----End
The advanced drive property screen is displayed, as shown in Figure 4-307. Table
4-102 describes the parameters on the screen.
Parameter Description
FDE Capable: specifies whether the drive supports the Full Disk Encryption (FDE) technology.
----End
Screen Introduction
Figure 4-308 shows the Hardware Components screen. Table 4-103 describes
the parameters on the screen.
Parameter Description
4.9.7 Exit
Exit the Configuration Utility.
----End
NOTICE
Download the MegaRAID Storage Manager software for the OS you are using
(Linux or Windows).
Downloading StorCLI
Step 1 Go to the RAID controller card page at BROADCOM website.
Step 2 On the DOWNLOADS tab page, click Management Software and Tools.
Step 4 Decompress the downloaded package to obtain the tool packages for different
OSs.
----End
Installing StorCLI
The StorCLI installation method varies depending on the OS type. The following
uses Windows, Linux, and VMware as examples to describe the StorCLI installation
procedure. For the installation procedures for other OSs, see the Readme file in
the software package.
Function
Query and set the interval between the spinup of drives and the number of drives
that spin up simultaneously upon power-on.
Format
storcli64 /ccontroller_id show spinupdelay
storcli64 /ccontroller_id show spinupdrivecount
storcli64 /ccontroller_id set spinupdelay=time
storcli64 /ccontroller_id set spinupdrivecount=count
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
Retain the default settings.
Example
# Query the Spinup parameters of a RAID controller card.
domino:~# ./storcli64 /c0 show spinupdelay
Controller = 0
Status = Success
Description = None
Controller Properties:
=====================
--------------------------
Ctrl_Prop Value
--------------------------
Spin Up Delay 2 second(s)
--------------------------
domino:~# ./storcli64 /c0 show spinupdrivecount
Controller = 0
Status = Success
Description = None
Controller Properties:
=====================
--------------------------
Ctrl_Prop Value
--------------------------
Spin Up Drive Count 4
--------------------------
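The example above shows only the query form. A minimal sketch of the set form, following the Format section (the values are illustrative only; as noted in Usage Guidelines, retain the default settings):
domino:~# ./storcli64 /c0 set spinupdelay=2
domino:~# ./storcli64 /c0 set spinupdrivecount=4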
Function
Set the PowerSave parameters for idle drives and hot spare drives.
Format
storcli64 /ccontroller_id set ds=state type=disktype spindowntime=time
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Enable the power saving mode for an idle drive.
domino:~# ./storcli64 /c0 set ds=on type=1 spindowntime=30
Controller = 0
Status = Success
Description = None
Controller Properties:
=====================
--------------------------
Ctrl_Prop Value
--------------------------
SpnDwnUncDrv Enable
SpnDwnTm 30 minutes
--------------------------
4.11.2.3 Setting the Initialization Function for a Physical Drive and Viewing
the Initialization Progress
Function
Set the initialization function for physical drives and view the initialization
progress.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id action initialization
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Initialize the drive in slot 3 and view the initialization progress.
domino:~# ./storcli64 /c0/e252/s3 start initialization
Controller = 0
Status = Success
Description = Start Drive Initialization Succeeded.
domino:~# ./storcli64 /c0/e252/s3 show initialization
Controller = 0
Status = Success
Description = Show Drive Initialization Status Succeeded.
------------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
------------------------------------------------------
/c0/e252/s3 0 In progress 0 Seconds
------------------------------------------------------
4.11.2.4 Setting the Data Erasing Mode for a Drive and Viewing the Erasing
Progress
Function
Set the data erasing mode for a drive and view the erasing progress.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id show erase
storcli64 /ccontroller_id/eenclosure_id/sslot_id stop erase
storcli64 /ccontroller_id/eenclosure_id/sslot_id start erase mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Erase the data in simple mode from the drive in slot 3 and view the erasing
progress.
domino:~# ./storcli64 /c0/e252/s3 start erase simple
Controller = 0
Status = Success
Description = Start Drive Erase Succeeded.
domino:~# ./storcli64 /c0/e252/s3 show erase
Controller = 0
Status = Success
Description = Show Drive Erase Status Succeeded.
------------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
------------------------------------------------------
/c0/e252/s3 0 In progress 0 Seconds
------------------------------------------------------
domino:~# ./storcli64 /c0/e252/s3 stop erase
Controller = 0
Status = Success
Description = Stop Drive Erase Succeeded.
Function
Set the background initialization rate, consistency check rate, drive patrol rate,
RAID rebuilding rate, and RAID capacity expansion and migration rate.
Format
storcli64 /ccontroller_id set action=value
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the drive patrol rate to 30%.
domino:~# ./storcli64 /c0 set prrate=30
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
-----------------------
Ctrl_Prop Value
-----------------------
Patrol Read Rate 30%
-----------------------
Function
Enable the Stop On Error function so that the controller BIOS stops starting when
it detects an error.
Format
storcli64 /ccontroller_id set bios mode=action
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Enable the Stop On Error function.
domino:~# ./storcli64 /c0 set bios mode=soe
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
----------------
Ctrl_Prop Value
----------------
BIOS Mode SOE
----------------
Function
Create and delete a RAID array.
Syntax
Syntax Description
NOTE
Parameters
Parameter Description Value
NOTE
● For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller Card,
RAID Array, or Physical Drive Information.
● Use a comma (,) to separate multiple drives to be added to a RAID array. The format of
a single drive is enclosure_id:slot_id. The format of drives in consecutive slots is
enclosure_id:startid-endid.
Usage Guidelines
None
Example
# Create a RAID 0 array.
domino:~# ./storcli64 /c0 add vd r0 size=100GB drives=252:0-3
Controller = 0
Status = Success
Description = Add VD Succeeded
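# Delete a RAID array. (The delete command itself is not reproduced in this issue; the line below is a minimal sketch assuming VD 0 on controller 0, whose expected result is the output that follows.)
domino:~# ./storcli64 /c0/v0 del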
Status = Success
Description = Delete VD Succeeded
4.11.2.8 Setting the cache read and write properties for a RAID array
Function
Set the cache read and write properties for a RAID array.
Format
storcli64 /ccontroller_id/vraid_id set wrcache=mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the cache read/write mode to wt.
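(The example command is not reproduced in this issue; a minimal sketch based on the Format above, assuming controller 0 and VD 0:)
domino:~# ./storcli64 /c0/v0 set wrcache=wt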
Function
Set an access policy for a RAID array.
Format
storcli64 /ccontroller_id/vraid_id set accesspolicy=mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the RAID access policy to rw.
domino:~# ./storcli64 /c0/v0 set accesspolicy=rw
Controller = 0
Status = Success
Description = None
Details Status :
==============
---------------------------------------
VD Property Value Status ErrCd ErrMsg
---------------------------------------
0 AccPolicy RW Success 0-
---------------------------------------
Function
Set RAID foreground initialization.
Format
storcli64 /ccontroller_id/vraid_id start mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Quickly initialize a RAID array.
domino:~# ./storcli64 /c0/v0 start init
Controller = 0
Status = Success
Function
Pause, resume, and stop RAID background initialization and view the initialization
progress.
Format
storcli64 /ccontroller_id/vraid_id action bgi
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# View the background initialization progress.
domino:~# ./storcli64 /c0/v0 show bgi
Controller = 0
Status = Success
Description = None
VD Operation Status :
===================
------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
------------------------------------------------------
0 BGI 3 In progress 53 Minutes
------------------------------------------------------
Function
Set a virtual drive or physical drive as a boot drive.
Format
storcli64 /ccontroller_id/vvd_id set bootdrive=on
storcli64 /ccontroller_id/eenclosure_id/sslot_id set bootdrive=on
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set VD 0 to boot drive.
domino:~# ./storcli64 /c0/v0 set bootdrive=on
Controller = 0
Status = Success
Description = None
Detailed Status :
===============
----------------------------------------
VD Property Value Status ErrCd ErrMsg
----------------------------------------
0 Boot Drive On Success 0 -
----------------------------------------
Function
Enable the emergency hot spare function and allow the emergency hot spare
function to be used when a SMART error occurs.
Format
storcli64 /ccontroller_id set eghs eug=state
storcli64 /ccontroller_id set eghs smarter=state
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Enable the emergency hot spare function and allow the emergency hot spare
function to be used when a SMART error occurs.
domino:~# ./storcli64 /c0 set eghs eug=on
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
------------------
Ctrl_Prop Value
------------------
EmergencyUG ON
------------------
domino:~# ./storcli64 /c0 set eghs smarter=on
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
-----------------------
Ctrl_Prop Value
-----------------------
EmergencySmarter ON
-----------------------
Function
Set the hot spare drive status to global or dedicated.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id add hotsparedrive [dgs=vd_id]
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the drive in slot 3 to a global hot spare drive.
domino:~# ./storcli64 /c0/e252/s3 add hotsparedrive
Controller = 0
Status = Success
Description = Add Hot Spare Succeeded.
# Set the drive in slot 3 to the dedicated hot spare drive for VD 0.
domino:~# ./storcli64 /c0/e252/s3 add hotsparedrive dgs=0
Controller = 0
Status = Success
Description = Add Hot Spare Succeeded.
Function
Pause, resume, and stop the RAID rebuilding, copyback, and patrolread, and view
the progress.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id action function
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# View RAID rebuild progress.
domino:~# ./storcli64 /c0/e252/s4 show rebuild
Controller = 0
Status = Success
Description = Show Drive Rebuild Status Succeeded.
------------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
------------------------------------------------------
/c0/e252/s4 9 In progress 12 Minutes
------------------------------------------------------
Function
Set a SMART scan interval.
Format
storcli64 /ccontroller_id set smartpollinterval=value
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the SMART scan interval to 60 seconds.
domino:~# ./storcli64 /c0 set smartpollinterval=60
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
-------------------------------
Ctrl_Prop Value
-------------------------------
SmartPollInterval 60 second(s)
-------------------------------
Function
Adjust the available space of the virtual drive to expand its capacity if it does not
use all the capacity of member drives.
Format
storcli64 /ccontroller_id/vvd_id expand size=capacity
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Expand the capacity of VD 0 by 200 GB.
domino:~# ./storcli64 /c0/v0 expand size=200GB
Controller = 0
Status = Success
Description = expansion operation succeeded.
EXPANSION RESULT :
================
--------------------------------------------------------------------------------
VD Size FreSpc ReqSize AbsUsrSz %FreSpc NewSize Status NoArrExp
--------------------------------------------------------------------------------
0 100.0 GB 457.861 GB 200.0 GB 201.458 GB 44 301.458 GB - 457.861 GB
--------------------------------------------------------------------------------
Size - Current VD size|FreSpc - Freespace available before expansion
%FreSpc - Requested expansion size in % of available free space
AbsUsrSz - User size rounded to nearest %
NOTE
● You can also expand the RAID capacity by adding the available capacity for its member
drives. For details, see 4.11.2.18 Expanding RAID Capacity by Adding New Drives and
Changing the RAID Level.
● The RAID controller card adjusts the capacity to be added based on the drive type.
Therefore, the expanded capacity may be varied.
4.11.2.18 Expanding RAID Capacity by Adding New Drives and Changing the
RAID Level
Function
There are two RAID capacity expansion methods: adjusting the available space of the
member drives (described in the preceding section) and adding new member drives,
which is described here. After adding a new drive, you can also change the RAID level.
Format
storcli64 /ccontroller_id/vvd_id start migrate type=rlevel option=add
drives=enclosure_id:slot_id
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Add the drive in slot 2 to the RAID 0 array for capacity expansion.
domino:~# ./storcli64 /c0/v0 start migrate type=r0 option=add drives=252:2
Controller = 0
Status = Success
Description = Start MIGRATE Operation Success.
domino:~# ./storcli64 /c0/v0 show migrate
Controller = 0
Status = Success
Description = None
VD Operation Status :
===================
-------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-------------------------------------------------------
0 Migrate 1 In progress 13 Minutes
-------------------------------------------------------
# Add the drive to a single-disk RAID 0 array and change the RAID level to RAID 1.
domino:~# ./storcli64 /c0/v0 start migrate type=r1 option=add drives=252:3
Controller = 0
Status = Success
Description = Start MIGRATE Operation Success.
domino:~# ./storcli64 /c0/v0 show migrate
Controller = 0
Status = Success
Description = None
VD Operation Status :
===================
-------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-------------------------------------------------------
0 Migrate 1 In progress 14 Minutes
-------------------------------------------------------
Function
Query and clear PreservedCache data.
Format
storcli64 /ccontroller_id show preservedcache
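The clear operation is not listed in the Format above. Based on the command used in the troubleshooting section of this guide (./storcli64 /c0/vall delete preservedcache), its general form is sketched as follows:
storcli64 /ccontroller_id/vvd_id delete preservedcache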
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Query PreservedCache data.
domino:~# ./storcli64 /c0 show preservedcache
Controller = 0
Status = Success
Description = No Virtual Drive has Preserved Cache Data.
Function
Set consistency check parameters.
Format
storcli64 /ccontroller_id[/vvd_id]show cc
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
If the show command does not contain /vvd_id, the consistency check parameters
are queried.
If the show command contains /vvd_id, the consistency check progress is queried.
Example
# Set automatic consistency check parameters.
domino:~# ./storcli64 /c0 set cc=conc delay=1 starttime=2016/07/14 22:00:00 excludevd=0
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
------------------------------------
Ctrl_Prop Value
------------------------------------
CC Mode CONC
CC delay 1
CC Starttime 2016/07/14 22:00:00
CC ExcludeVD(0) Success
------------------------------------
Function
Query and set patrolread parameters.
Format
storcli64 /ccontroller_id set patrolread starttime=time
maxconcurrentpd=number
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the patrolread start time to 2016/07/15 23:00:00 and the number of drives
to be checked concurrently to 2.
domino:~# ./storcli64 /c0 set patrolread starttime=2016/07/15 23:00:00 maxconcurrentpd=2
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
---------------------------------------
Ctrl_Prop Value
---------------------------------------
PR Starttime 2016/07/15 23:00:00
PR MaxConcurrentPd 2
---------------------------------------
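To verify the configured values, the current patrolread parameters can be queried with the show keyword. The following command is an illustrative sketch for controller 0:
# Query patrolread parameters (illustrative).
./storcli64 /c0 show patrolread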
Function
Query and set CacheFlush parameters.
Format
storcli64 /ccontroller_id show cacheflushint
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the CacheFlush interval to 10.
domino:~# ./storcli64 /c0 set cacheflushint=10
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
---------------------------
Ctrl_Prop Value
---------------------------
Cache Flush Interval 10 sec
---------------------------
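The configured interval can be queried with the show command given in Format. The following command is an illustrative sketch for controller 0:
# Query the CacheFlush interval (illustrative).
./storcli64 /c0 show cacheflushint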
Function
Set drive status.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id set state
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Change the state of the drive in slot 1 from Unconfigured Bad to
Unconfigured Good.
domino:~# ./storcli64 /c0/e252/s1 set good force
Controller = 0
Status = Success
Description = Set Drive Offline Succeeded.
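Other drive states can be set in the same way by replacing the state keyword. For example, the following illustrative command (slot 1 on enclosure 252 is an assumed position) takes a drive offline; doing so may degrade the RAID array that contains the drive:
# Set the drive in slot 1 to Offline (illustrative).
./storcli64 /c0/e252/s1 set offline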
Function
Turn on and off the UID indicator of a specified drive.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id action locate
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Turn on the UID indicator of the drive in slot 7.
domino:~# ./storcli64 /c0/e252/s7 start locate
Controller = 0
Status = Success
Description = Start Drive Locate Succeeded.
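The UID indicator can be turned off again by replacing start with stop. The following command is an illustrative sketch for the same drive:
# Turn off the UID indicator of the drive in slot 7 (illustrative).
./storcli64 /c0/e252/s7 stop locate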
Function
Query detailed information about RAID controller cards, physical drives, and
virtual drives.
Format
storcli64 /ccontroller_id show
storcli64 /ccontroller_id/eenclosure_id/sslot_id show all
storcli64 /ccontroller_id/vvd_id show all
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
Table 4-105 describes the fields in the command output.
Example
# Query detailed information about RAID controller card 0.
[root@localhost ~]# ./storcli64 /c0 show
Generating detailed summary of the adapter, it may take a while to complete.
TOPOLOGY :
========
-----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type State BT Size PDC PI SED DS3 FSpace TR
-----------------------------------------------------------------------------
0 -   -   -        -   RAID1 Optl  N  744.125 GB dflt N  N   none N      N
0 0   -   -        -   RAID1 Optl  N  744.125 GB dflt N  N   none N      N
0 0 0 252:1 18 DRIVE Onln N 744.125 GB dflt N N none - N
0 0 1 252:0 48 DRIVE Onln N 744.125 GB dflt N N none - N
-----------------------------------------------------------------------------
Virtual Drives = 1
VD LIST :
=======
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID1 Optl RW Yes RWBD - ON 744.125 GB
---------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
Physical Drives = 6
PD LIST :
=======
---------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
---------------------------------------------------------------------------------
252:0 48 Onln 0 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
252:1 18 Onln 0 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
252:2 19 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BA800G4 U -
252:3 20 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BA800G4 U -
252:4 49 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
252:5 47 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
---------------------------------------------------------------------------------
Cachevault_Info :
===============
------------------------------------
Model State Temp Mode MfgDate
------------------------------------
CVPM02 Optimal 28C - 2016/11/04
------------------------------------
Drive /c0/e252/s0 :
=================
---------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
---------------------------------------------------------------------------------
252:0 48 Onln 0 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
---------------------------------------------------------------------------------
===================================
Drive position = DriveGroup:0, Span:0, Row:1
Enclosure position = 1
Connected Port Number = 0(path0)
Sequence Number = 2
Commissioned Spare = No
Emergency Spare = No
Last Predictive Failure Event Sequence Number = 0
Successful diagnostics completion on = N/A
SED Capable = No
SED Enabled = No
Secured = No
Cryptographic Erase Capable = No
Locked = No
Needs EKM Attention = No
PI Eligible = No
Certified = No
Wide Port Capable = No
Port Information :
================
-----------------------------------------
Port Status Linkspeed SAS address
-----------------------------------------
0 Active 6.0Gb/s 0x4433221100000000
-----------------------------------------
Inquiry Data =
40 00 ff 3f 37 c8 10 00 00 00 00 00 3f 00 00 00
00 00 00 00 48 50 4c 57 31 35 36 37 31 30 52 59
30 38 52 30 4e 47 20 20 00 00 00 00 00 00 32 44
31 30 33 30 30 37 4e 49 45 54 20 4c 53 53 53 44
32 43 42 42 30 38 47 30 20 34 20 20 20 20 20 20
20 20 20 20 20 20 20 20 20 20 20 20 20 20 01 80
00 40 00 2f 00 40 00 00 00 00 07 00 ff 3f 10 00
3f 00 10 fc fb 00 01 bf ff ff ff 0f 00 00 07 00
/c0/v0 :
======
---------------------------------------------------------
DG/VD TYPE State Access Consist Cache sCC Size Name
---------------------------------------------------------
1/0 RAID1 Optl RW Yes RWTD - 1.089 TB
---------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|B=Blocked|Consist=Consistent|
R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
PDs for VD 0 :
============
-----------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
-----------------------------------------------------------------------
25:22 14 Onln 1 1.089 TB SAS HDD N N 512B ST1200MM0007 U
VD0 Properties :
==============
Strip Size = 256 KB
Number of Blocks = 2341795840
VD has Emulated PD = No
Span Depth = 1
Number of Drives Per Span = 2
Write Cache(initial setting) = WriteThrough
Disk Cache Policy = Disk's Default
Encryption = None
Data Protection = Disabled
Active Operations = None
Exposed to OS = Yes
Creation Date = 04-01-2018
Creation Time = 12:38:35 PM
Emulation type = None
Function
Restore Frn-Bad drives in a RAID array to Online.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id set good
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
The drives in slots 1 and 5 are in the UBad F state, as shown in Figure 4-309.
Perform the following steps to restore the state to online.
Function
Query supercapacitor information, such as the supercapacitor name and the cache
capacity of the TFM Flash card.
Format
storcli64 /ccontroller_id/cv show all
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 4.11.2.25
Querying RAID Controller Card, RAID Array, or Physical Drive Information.
Example
# Query supercapacitor information.
[root@localhost ~]# ./storcli64 /c0/cv show all
CLI Version = 007.0409.0000.0000 Nov 06, 2017
Operating system = Linux3.10.0-514.el7.x86_64
Controller = 0
Status = Success
Description = None
Cachevault_Info :
===============
--------------------
Property Value
--------------------
Type CVPM02
Temperature 28 C
State Optimal
--------------------
Firmware_Status :
===============
---------------------------------------
Property Value
---------------------------------------
Replacement required No
No space to cache offload No
Module microcode update required No
---------------------------------------
GasGaugeStatus :
==============
------------------------------
Property Value
------------------------------
Pack Energy 294 J
Capacitance 108 %
Remaining Reserve Space 0
------------------------------
Design_Info :
===========
------------------------------------
Property Value
------------------------------------
Date of Manufacture 04/11/2016
Serial Number 22417
Manufacture Name LSI
Design Capacity 288 J
Device Name CVPM02
tmmFru N/A
CacheVault Flash Size 8.0 GB
tmmBatversionNo 0x05
tmmSerialNo 0xee7d
tmm Date of Manufacture 09/12/2016
tmmPcbAssmNo 022544412A
tmmPCBversionNo 0x03
tmmBatPackAssmNo 49571-13A
scapBatversionNo 0x00
scapSerialNo 0x5791
scap Date of Manufacture 04/11/2016
scapPcbAssmNo 1700134483
scapPCBversionNo A
scapBatPackAssmNo 49571-13A
Module Version 6635-02A
------------------------------------
Properties :
==========
--------------------------------------------------------------
Property Value
--------------------------------------------------------------
Auto Learn Period 27d (2412000 seconds)
Next Learn time 2018/08/03 17:48:38 (586633718 seconds)
Learn Delay Interval 0 hour(s)
Auto-Learn Mode Transparent
--------------------------------------------------------------
NOTE
In the command output, Device Name CVPM02 indicates that the supercapacitor name is
CVPM02, and CacheVault Flash Size 8.0GB indicates that the cache capacity of the TFM
Flash card is 8.0 GB.
Function
Upgrade the drive firmware.
Format
./storcli64 /ccontroller_id/eenclosure_id/sslot_id download src=FW_name.bin
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Example
# Upgrade the drive firmware.
[root@localhost ~]# ./storcli64 /c0/e64/s5 download src=5200_D1MU004_Releasefullconcatenatedbinary.bin
Starting microcode update .....please wait...
Flashing PD image ..... please wait...
CLI Version = 007.0504.0000.0000 Nov 22,2017
Operating system = Linux 3.10.0-514.el7.x86_64
Controller = 0
Status = Success
Description = Firmware Download succeeded.
Function
Rebuild the RAID array manually.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id insert dg=DG array=Arr row=Row
storcli64 /ccontroller_id/eenclosure_id/sslot_id start rebuild
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
To rebuild a RAID array, perform the following steps:
1. Run the ./storcli64 /c0 show command to query the DG, Arr, and Row
information of the faulty drive.
2. Run the insert command to add the replacement drive to the RAID array, and
then run the start rebuild command to start the rebuild.
Example
# Add the drive to the RAID array.
[root@localhost ~]# storcli64 /c0/e252/s1 insert dg=0 array=0 row=0
CLI Version = 007.0504.0000.0000 Nov 22, 2017
Operating system = Linux 3.10.0-693.el7.x86_64
Controller = 0
Status = Success
Description = Insert Drive Succeeded.
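After the drive has been inserted into the RAID array, the rebuild can be started and monitored with the start rebuild and show rebuild commands listed in Format. The following commands are an illustrative sketch for the same drive position:
# Start the rebuild on the drive in slot 1 and query the rebuild progress (illustrative).
./storcli64 /c0/e252/s1 start rebuild
./storcli64 /c0/e252/s1 show rebuild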
Function
Set the cache status of member drives in a RAID array.
Format
storcli64 /ccontroller_id/vvd_id set pdcache=action
Parameters
Parameter Description Value
For details about how to query the IDs, see 4.11.2.25 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Enable the drive cache of RAID array 0.
./storcli64 /c0/v0 set pdcache=on
Controller = 0
Status = Success
Description = None
Detailed Status :
===============
---------------------------------------
VD Property Value Status ErrCd ErrMsg
---------------------------------------
0 PdCac On Success 0 -
---------------------------------------
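The drive cache can be disabled again by setting pdcache to off. The following command is an illustrative sketch for the same virtual drive:
# Disable the drive cache of RAID array 0 (illustrative).
./storcli64 /c0/v0 set pdcache=off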
Function
View, import, and delete foreign configurations of a RAID controller card.
Format
storcli64 /ccontroller_id/fall import preview
storcli64 /ccontroller_id/fall import
storcli64 /ccontroller_id/fall delete
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# View foreign configurations of a RAID controller card.
[root@localhost ~]# ./storcli64 /c0/fall import preview
CLI Version = 007.0504.0000.0000 Nov 22, 2017
Operating system = Linux 3.10.0-957.el7.x86_64
Controller = 0
Status = Success
Description = Operation on foreign configuration Succeeded
FOREIGN PREVIEW :
===============
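After previewing the foreign configuration, it can be imported or deleted with the import and delete commands listed in Format. The following commands are an illustrative sketch for controller 0; deleting a foreign configuration discards it permanently:
# Import or delete the foreign configurations of controller 0 (illustrative).
./storcli64 /c0/fall import
./storcli64 /c0/fall delete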
5 LSI SAS2308
5.1 Overview
The LSI SAS2308 RAID controller card is a 6 Gbit/s SAS controller built on the Fusion-MPT
architecture. Equipped with a PCIe 3.0 x8 interface and a powerful I/O storage
engine, the controller card handles data protection, verification, and restoration.
The controller card provides 1.5 Gbit/s, 3 Gbit/s, and 6 Gbit/s SAS and SATA ports.
Each port supports the SSP, SMP, and STP protocols.
The LSI SAS2308 supports boot and configuration in Legacy and UEFI modes.
The controller card has two structures to provide ease of connection in different
servers:
The LSI SAS2308 connects to the mainboard through an XCede connector and
connects to the drive backplane by using two Mini-SAS cables, as shown in
Figure 5-1.
● LSI SAS2308 for a blade server or X6000 server
The LSI SAS2308 connects to the mainboard through two XCede connectors,
as shown in Figure 5-2.
Indicators
Table 5-1 describes the indicators on the LSI SAS2308.
5.2 Functions
5.2.1 High-Speed Interfaces
The LSI SAS2308 provides the following high-speed interfaces:
● The PCIe 3.0 x8 interface is used to connect to the server mainboard. Each lane
provides a maximum transfer rate of 8 Gbit/s.
● Eight 6 Gbit/s SAS/SATA ports are used to connect to the drive backplane for
server drive storage.
It is recommended that you use drives of the same type and specifications.
NOTICE
In RAID 1, RAID 1E, and RAID 10, the LSI SAS2308 firmware disables the write
cache function of the drives by default. In RAID 0, the write cache function is
enabled by default.
NOTE
HDDs and SSDs cannot be used as hot spare drives for each other.
The pass-through feature allows the OS to directly use the physical drives mounted
to the LSI SAS2308 RAID controller card. If a RAID controller card does not support
the pass-through feature, you can install the OS only on the virtual drives
configured under the RAID controller card.
NOTE
If a drive in pass-through mode is faulty, the Fault indicator on the drive will not be lit and
the iBMC will not generate an alarm.
NOTE
● After removing a drive, wait at least 30 seconds before reinstalling it. Otherwise,
the drive cannot be identified.
● If a drive in a RAID array is removed and inserted online, the RAID array where the drive
resides will be degraded or faulty. Before removing and inserting a drive, check the
logical status of the drive and the number of faulty drives allowed by the RAID level.
● If you remove and insert a pass-through drive without powering off the OS, the drive
letter in the system may change. Before removing and inserting a pass-through drive,
record the drive letter in the system.
Table 5-2 lists the supported RAID levels and required drive quantities.
RAID Level   Number of Drives   Maximum Number of Faulty Drives
RAID 0       2 to 10            0
RAID 1       2                  1
NOTE
● The failed drives cannot be adjacent. For details, see A.2 RAID Levels.
● On a Windows drive management screen, Bus Type of virtual drives and JBOD drives for
the LSI SAS2308 is RAID.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 5-4.
After the system self-check, the Configuration Utility main screen is displayed, as
shown in Figure 5-5. Table 5-3 describes the parameters on the screen.
NOTE
To view the global properties of the controller card, press Alt+N on this screen.
Parameter Description
FW Revision Firmware version.
Figure 5-6 Setting the boot order of RAID controller cards (1)
1. Select the box in the Boot Order column for a RAID controller card and use
Insert or Delete to set the boot order. For example, in Figure 5-7, SAS9217-8i
is set as the primary boot device and LSISAS2308 is set as the alternate
boot device.
Figure 5-7 Setting the boot order of RAID controller cards (2)
2. Press Esc.
A confirmation dialog box is displayed. See Figure 5-8.
If the server is configured with drive controllers of other chips, set the drive boot
device on the BIOS.
– For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter Reference.
– For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter Reference.
Step 4 When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 5-9.
The message "Please wait, invoking SAS Configuration Utility..." is displayed. After
the system self-check is complete, the Configuration Utility main screen is
displayed.
The Adapter Properties screen is displayed, as shown in Figure 5-10. Table 5-4
describes the controller card properties.
Parameter Description
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.3.1 Logging In to the Configuration Utility.
Step 2 Create a RAID 0 array.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed. See Figure 5-11.
4. Press C.
The RAID array creation confirmation screen is displayed.
5. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Figure 5-10 screen is displayed.
----End
Additional Information
Related Tasks
After creating the RAID array, check the configuration result as follows:
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
2. Select View Existing Volume and press Enter.
RAID information is displayed.
Related Concepts
None
Scenarios
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.3.1 Logging In to the Configuration Utility.
NOTE
4. Press C.
The RAID array creation confirmation screen is displayed.
5. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Figure 5-10 screen is displayed.
----End
Additional Information
Related Tasks
After creating the RAID array, check the configuration result as follows:
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
2. Select View Existing Volume and press Enter.
RAID information is displayed.
Related Concepts
None
Scenarios
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.3.1 Logging In to the Configuration Utility.
NOTE
– A RAID 1E can contain three, five, seven, or nine drives (odd numbers). A RAID 10
array can contain four, six, eight, or ten drives (even numbers).
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
4. Press C.
The RAID array creation confirmation screen is displayed.
5. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Figure 5-10 screen is displayed.
----End
Additional Information
Related Tasks
After creating the RAID array, check the configuration result as follows:
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
2. Select View Existing Volume and press Enter.
RAID information is displayed.
Related Concepts
None
NOTICE
Configuring the boot device is mandatory during the RAID configuration process.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.3.1 Logging In
to the Configuration Utility.
Step 2 Set boot devices.
1. On the Adapter Properties screen, select SAS Topology and press Enter.
The SAS Topology screen is displayed.
2. On the SAS Topology screen, press ↑ or ↓ to select a drive or RAID controller
card, and press ALT+B (or ALT+A) to set the selected device as the primary
(or alternate) boot device.
NOTE
If drive or RAID array information is collapsed, move the cursor to Controller or RAID
XX VOL and press Enter to expand the information.
After the setting is successful, the value of Device Info for the primary boot
device is Boot, and the value of Device Info for the alternate boot device is
Alt, as shown in Figure 5-20.
----End
Related Operations
If a server is configured with only the LSI SAS2308 and LSI SAS3008 RAID
controller cards, set the RAID controller card boot order by following "(Optional)
Set the boot order of RAID controller cards" in 5.3.1 Logging In to the
Configuration Utility. If the server is configured with drive controllers of other
chips, set the drive boot device on the BIOS.
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
All configurations described in this document about the LSI SAS2308 are
performed on the configuration management screen, which is only accessible after
server restart. To monitor RAID status and view RAID configurations during system
running, use the SAS2IRCU tool.
If the boot type is changed after the OS has been installed in Legacy or UEFI
mode, the OS will be inaccessible. To access the OS, you need to change the boot
type to that used when the OS is installed. If the OS needs to be reinstalled, select
the Legacy or UEFI mode based on actual situation.
If multiple boot devices are configured, you are advised to set Boot Type to UEFI
Boot Type because certain boot devices may fail to boot if Boot Type is set to
Legacy Boot Type. If you still want to set Boot Type to Legacy Boot Type, then
disable redirection for certain serial ports or disable PXE for certain NICs based on
the services in use. For details, see "Setting PXE for a NIC" and "Setting Serial Port
Redirection" in the respective BIOS Parameter Reference.
Procedure
Step 1 Set the EFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
Step 2 Log in to the management screen of the LSI SAS2308.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS2308 controller card from Disk Devices and press Enter.
The LSI SAS2 MPT Controller Configuration screen is displayed.
Physical Disk Management Configures and manages drives. For example, you can
view drive properties and perform operations on drives.
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 5-23.
Select Media Type Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 5-26.
Select Media Type Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
NOTICE
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 5-29.
Select Media Type Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
– A RAID 1E can contain three, five, seven, or nine drives (odd numbers).
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS2308 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 5.2.10 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.4.1 Logging In to the Management Screen.
Step 2 Log in to the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 5-32.
Select Media Type Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
– A RAID 10 array can contain four, six, eight, or ten drives (even numbers).
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
----End
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 5.3.1 Logging In to the Configuration Utility.
Step 2 Configure a hot spare drive.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
NOTE
If the LSI SAS2308 has more than one RAID array, press Alt+N to view information
about other RAID arrays.
3. Select Manage Volume and press Enter.
The Manage Volume screen is displayed, as shown in Figure 5-37.
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID
array, as shown in Figure 5-38.
Yes indicates that the drive will be added to the RAID array.
6. Press C.
The hot spare drive configuration update screen is displayed.
7. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Manage Volume screen is displayed.
----End
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.3.1 Logging In
to the Configuration Utility.
6. Press C.
The hot spare drive configuration update screen is displayed.
7. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Manage Volume screen is displayed.
----End
Scenarios
● A physical drive newly installed on the server may already have a RAID
configuration. You can activate the RAID configuration to add it to the LSI
SAS2308 RAID controller card.
● After replacing a RAID controller card on a server, perform this operation to
activate the original configuration information in the new RAID controller
card.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.3.1 Logging In
to the Configuration Utility.
----End
View Existing Volume Allows you to add or delete hot spare drives, perform
consistency check, activate RAID arrays, and delete RAID arrays. This option is
available only when the LSI SAS2308 RAID controller card contains RAID arrays.
RAID Disk Indicates whether the drive is a member of the RAID array.
NOTE
If the LSI SAS2308 has more than one RAID array, press Alt+N to view information about
other RAID arrays.
----End
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.3.1 Logging In
to the Configuration Utility.
Step 2 On the Adapter Properties screen, select SAS Topology and press Enter.
The SAS Topology screen is displayed, as shown in Figure 5-47. Table 5-15
describes the parameters on the screen.
Parameter Description
----End
Scenarios
If the server does not need a RAID array, you can delete the RAID array to release
the drives.
NOTICE
A deleted RAID array cannot be restored. Exercise caution when deleting a RAID
array.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.3.1 Logging In
to the Configuration Utility.
Step 2 Delete a RAID array.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
NOTICE
----End
Scenarios
A fault-tolerant RAID system requires a consistency check on a regular basis. A
consistency check verifies the correctness and validity of redundant data in RAID 1,
10, or 1E. You can start a consistency check for a RAID array on the Configuration
Utility.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.3.1 Logging In
to the Configuration Utility.
----End
----End
----End
NOTE
● You can use the same method to turn off the indicator.
● You can also turn on the drive indicators on the OS CLI. For details, see 5.10.2.10
Turning On Drive Indicators.
----End
NOTE
● During the formatting process, do not shut down or restart the system or remove and
reinstall the drive. Otherwise, the drive will be corrupted.
● After formatting, all data on the drive is cleared. Exercise caution when performing this
operation.
– ALT+A:
Sets the drive as the alternate boot device.
Alt is displayed in Device Info.
----End
Prerequisites
Conditions
NOTICE
● An idle drive can be configured as a hot spare drive, but a RAID member drive
cannot be configured as a hot spare drive.
● A hot spare drive must be a SATA or SAS drive, and it must have at least the
capacity of the RAID member drive with the largest capacity.
● All RAID levels except RAID 0 support hot spare drives.
The following requirements must be met before you configure hot spare drives:
Data
Procedure
Step 1 Access the virtual drive properties screen.
1. On the main screen, select Virtual Disk Management and press Enter.
2. Select Manage Virtual Disk Properties and press Enter.
The virtual drive properties screen is displayed, as shown in Figure 5-55.
----End
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.4.1 Logging In
to the Management Screen.
1. On the main screen, select Virtual Disk Management and press Enter.
2. Select Manage Virtual Disk Properties and press Enter.
The virtual drive properties screen is displayed, as shown in Figure 5-57.
2. On the screen shown in Figure 5-58, use ↑ and ↓ to select a hot spare drive
and press Enter.
The drive status changes to [X].
3. Use ↑ and ↓ to select Unassign Global Hotspare Disk and press Enter.
The message "Operation complete successfully" is displayed.
4. Press Enter.
The configuration is complete.
----End
Scenarios
● A newly installed drive may already have a storage configuration (considered
as a foreign configuration). You can use the WebBIOS to import the foreign
configuration to the current RAID controller card.
● After replacing a RAID controller card on a server, perform this operation to
import the original configuration information to the new RAID controller card.
NOTE
● If the number of faulty or missing drives exceeds the maximum number allowed by the
RAID array, the RAID array cannot be imported.
● To avoid configuration import failure, replace the original RAID controller card with a
new card of the same type.
Step 2 On the main screen, select Controller Management and press Enter.
Step 6 In the confirmation dialog boxes, click Confirm and then Yes.
----End
Step 2 On the main screen, select Controller Management and press Enter.
Step 6 In the confirmation dialog boxes, click Confirm and then Yes.
----End
Scenarios
If the server does not need a RAID array, you can delete the RAID array to release
the drives.
NOTICE
A deleted RAID array cannot be restored. Exercise caution when deleting a RAID
array.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 5.4.1 Logging In
to the Management Screen.
----End
5.7 Troubleshooting
This section describes solutions to drive faults and RAID controller card faults. For
other situations, see the Huawei Server Maintenance Guide or contact technical
support.
Solution
Step 1 Determine the slot number of the faulty drive.
NOTE
If a drive in pass-through mode is faulty, the Fault indicator on the drive will not be lit and
the iBMC will not generate an alarm.
● Locate the faulty drive based on the fault indicator, which is steady orange.
For details, see the drive numbering section in the user guide of the server
you use.
● Locate the faulty drive based on the iMana/iBMC drive alarm information. For
details, see iMana/iBMC Alarm Handling.
● Locate the faulty drive using the RAID controller card GUI. For details, see
5.8.3 SAS Topology or 5.9.4.1 Viewing Physical Disk Properties.
● Locate the faulty drive using the RAID controller card CLI tool. For details, see
5.10.2.2 Viewing Device Information.
Step 2 Replace the drive.
NOTICE
Remove the faulty drive and install a new drive. The data on the new drive can be
restored in the following ways, depending on the RAID configuration of the faulty drive:
● If the RAID array has a hot spare drive, copyback will be performed after the
hot spare drive is rebuilt. After data is copied to the new drive, the hot spare
drive restores to the hot backup state.
● If the RAID array provides redundancy but has no hot spare drive, the
newly installed drive automatically rebuilds data. If more than one faulty
drive exists in a RAID array, replace the faulty drives one by one based on the
drive fault time. Replace the next drive only after the current drive data is
rebuilt.
● If the faulty drive is a pass-through drive, replace it.
● If the faulty drive belongs to a RAID array without redundancy (RAID 0),
create RAID 0 again.
– For details about how to create a RAID 0 array in Legacy mode, see 5.3.2
Creating a RAID 0 Array.
– For details about how to create a RAID 0 array in UEFI mode, see 5.4.2
Creating RAID 0.
– For details about how to create a RAID 0 array by running commands,
see 5.10.2.3 Creating and Deleting a RAID Array.
----End
Solution
Step 1 Log in to the iBMC WebUI to view the alarm information.
Step 2 Rectify the fault based on the alarm information. For details, see the iBMC Alarm
Handling.
● If the fault is rectified, no further action is required.
● If the fault persists, go to Step 3.
Step 3 Collect and view logs and other necessary fault information.
Step 4 Use the Computing Product Case Library or contact technical support.
NOTE
The Computing Product Case Library is available only to Huawei engineers and partners.
----End
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 5-4.
The message "Please wait, invoking SAS Configuration Utility..." is displayed.
After the system self-check, the Configuration Utility main screen is displayed, as
shown in Figure 5-5. Table 5-3 describes the parameters.
NOTE
To view the global properties of the controller card, press Alt+N on this screen.
Parameter Description
FW Revision Firmware version.
Figure 5-64 Setting the boot order of RAID controller cards (1)
1. Select the box in the Boot Order column for a RAID controller card and use
Insert or Delete to set the boot order. For example, in Figure 5-7, SAS9217-8i
is set as the primary boot device and LSISAS2308 is set as the alternate
boot device.
Figure 5-65 Setting the boot order of RAID controller cards (2)
2. Press Esc.
A confirmation dialog box is displayed, as shown in Figure 5-8.
If the server is configured with drive controllers of other chips, set the drive boot
device on the BIOS.
– For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter Reference.
– For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter Reference.
Step 4 When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 5-9.
The message "Please wait, invoking SAS Configuration Utility..." is displayed. After
the system self-check is complete, the Configuration Utility main screen is
displayed.
Parameter Description
----End
NOTE
If the number of RAID arrays created reaches the maximum, the 5.8.2.1 Viewing Existing
Volume screen is displayed after you select RAID Properties on the Adapter Properties
screen and press Enter.
View Existing Volume Allows you to add or delete hot spare drives, perform
consistency check, activate RAID arrays, and delete RAID arrays. This option is
available only when the LSI SAS2308 RAID controller card contains RAID arrays.
Screen Introduction
Figure 5-70 shows the screen. Table 5-19 describes the parameters on the screen.
RAID Disk Indicates whether the drive is a member of the RAID array.
On the View Volume screen, select Manage Volume and press Enter.
The Manage Volume screen is displayed, as shown in Figure 5-71.
----End
----End
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 5-72.
● Yes indicates that the drive will be added to the RAID array.
● No indicates that the drive will not be added to the RAID array.
Step 4 Press C.
The hot spare drive configuration update screen is displayed.
Step 5 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
After the configuration is complete, the Figure 5-71 screen is displayed.
----End
Checking Consistency
Step 1 Select Manage Volume and press Enter.
The Manage Volume screen is displayed.
Step 2 Select Consistency Check and press Enter.
The Consistency Check screen is displayed.
Step 3 Select an operation as required.
● Press Y to start the consistency check.
After the consistency check is complete, the Figure 5-71 screen is displayed.
● Press N to quit.
The Figure 5-71 screen is displayed.
----End
----End
NOTICE
● Press N to quit.
The Figure 5-71 screen is displayed.
----End
Step 2 Insert a spare drive of larger capacity into the vacant slot.
Step 4 Repeat the preceding steps to replace the other member drive in RAID 1.
Step 5 After RAID 1 is rebuilt, select Manage Volume and press Enter.
Step 7 Press Y.
Step 8 Check the capacity expansion result on the Manage Volume screen. If the value
of Status is Optimal and the value of Size is the capacity of the RAID array of the
two larger drives, the capacity expansion is successful.
----End
Screen Introduction
Figure 5-74 shows the screen. Table 5-20 describes the parameters on the screen.
Parameter Description
RAID Disk Indicates whether the drive is a member of the RAID array.
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 5-75.
● Yes indicates that the drive will be added to the RAID array.
● No indicates that the drive will not be added to the RAID array.
NOTE
● The first added drive is the primary drive, and the other added drives are secondary
drives. The secondary drives synchronize data from the primary drive.
● To facilitate identification of the positions of primary and secondary drives in future
maintenance, add the drives in ascending order of slot numbers.
Step 2 Press C.
The RAID array creation confirmation screen is displayed.
Step 3 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
After the configuration is complete, the Figure 5-10 screen is displayed.
----End
Screen Introduction
Figure 5-76 shows the screen. Table 5-21 describes the parameters on the screen.
Parameter Description
RAID Disk Indicates whether the drive is a member of the RAID array.
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 5-77.
● Yes indicates that the drive will be added to the RAID array.
● No indicates that the drive will not be added to the RAID array.
Step 2 Press C.
The RAID array creation confirmation screen is displayed.
Step 3 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
After the configuration is complete, the Figure 5-10 screen is displayed.
----End
Screen Introduction
Figure 5-78 shows the screen. Table 5-22 describes the parameters on the screen.
RAID Disk Indicates whether the drive is a member of the RAID array.
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 5-79.
● Yes indicates that the drive will be added to the RAID array.
● No indicates that the drive will not be added to the RAID array.
Step 2 Press C.
The RAID array creation confirmation screen is displayed.
Step 3 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
After the configuration is complete, the Figure 5-10 screen is displayed.
----End
Screen Introduction
Figure 5-80 shows the screen. Table 5-23 describes the parameters on the screen.
Use ↑ and ↓ to select an item and press Enter.
----End
----End
– ALT+A:
Sets the drive as the alternate boot device.
Alt is displayed in Device Info.
----End
Screen Introduction
Figure 5-83 shows the screen. Table 5-24 describes the parameters on the screen.
Parameter Description
Parameter Description
Step 2 Use ↑ and ↓ to select an item and press Enter to set the value.
----End
Parameter Description
Direct Attached Max Targets to Spinup Maximum number of directly attached
devices that can spin up simultaneously.
Step 2 Use ↑ and ↓ to select an item and press Enter to set the value.
----End
5.8.5 Exit
Step 1 Press Esc to display the exit screen.
Parameter Description
Exit the Configuration Utility and Reboot Exits the Configuration Utility and
restarts the server.
----End
Procedure
Step 1 Set the EFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS2308 controller card from Disk Devices and press Enter.
Physical Disk Management Configures and manages drives. For example, you can
view drive properties and perform operations on drives.
----End
Parameter Description
Screen Introduction
Figure 5-90 shows the screen. Table 5-30 describes the parameters on the screen.
Parameter Description
Parameter Description
Screen Introduction
Figure 5-91 shows the screen.
----End
Screen Introduction
Figure 5-92 shows the screen.
Configuring a RAID
Step 1 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 5-93.
Select Media Type Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
Configuration procedure:
1. Select the item and press Enter.
2. In the displayed list, select the parameter to be
configured and press Enter.
----End
Screen Introduction
Figure 5-96 shows the screen.
----End
----End
----End
Screen Introduction
Figure 5-98 shows the screen. Table 5-32 describes the parameters on the screen.
Parameter Description
----End
Manage Virtual Disk Properties Allows you to view and modify virtual drive
properties, and configure hot spare drives.
Operation method: Select this item and press Enter.
Parameter Description
Figure 5-101 shows the screen. Table 5-35 describes the parameters on the
screen.
Step 4 Use ↑ and ↓ to select View More Physical Disk Properties and press Enter.
More physical drive properties are displayed, as shown in Figure 5-103. Table
5-36 describes the parameters on the screen.
----End
Parameter Description
Compatible Bare Disks Lists physical drives that can be configured as hot
spare drives.
Step 3 Use ↑ and ↓ to select Assign Global Hotspare Disk and press Enter.
----End
Step 2 Use ↑ and ↓ to select Unassign Global Hotspare Disk and press Enter.
----End
Parameter Description
Parameter Description
Start Locate / Blink Turns on the UID indicator of a member drive of the virtual
drive.
Stop Locate / Blink Turns off the UID indicator of a member drive of the virtual
drive.
----End
----End
Select View More Physical Disk Properties and press Enter to view more drive
properties. Table 5-40 describes the operations that are available.
Locating a Drive
Step 1 Use ↑ and ↓ to select Select Physical Disk and press Enter.
Step 2 Select a physical drive and press Enter.
Step 3 Use ↑ and ↓ to select Start Locate / Blink and press Enter.
----End
5.9.5 Exit
Exit the configuration screen.
Press Esc consecutively on any screen to exit Device Manager. The screen shown
in Figure 5-110 is displayed. Select options based on the site requirements.
----End
Installing SAS2IRCU
The SAS2IRCU installation method varies depending on the OS type. The following
uses Windows, Linux, and VMware as examples to describe the SAS2IRCU
installation procedure. For the installation procedures for other OSs, see the
README file in the software package.
Function
View all controllers.
Format
sas2ircu list
Parameters
None
Usage Guidelines
None
Example
# View all controllers.
domino:~# ./sas2ircu list
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Function
View detailed information about RAID controller cards, physical drives, and virtual
drives.
Format
sas2ircu controller_id display
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# View detailed information about RAID controller cards, physical drives, and
virtual drives.
domino:~# ./sas2ircu 0 display
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Read configuration has been initiated for controller 0
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
Controller type : SAS2308
BIOS version : 8.17.00.00
Firmware version : 12.00.02.00
Channel description : 1 Serial Attached SCSI
Initiator ID : 0
Maximum physical devices : 255
Concurrent commands supported : 4096
Slot : 0
Segment : 0
Bus : 1
Device : 0
Function : 0
RAID Support : Yes
----------------------------------------------------------------------
IR Volume information
----------------------------------------------------------------------
IR volume 1
Volume ID : 323
Function
Create and delete a RAID array. (The delete command will delete all RAID arrays.)
Format
sas2ircu controller_id create RAIDlevel capacity enclosure_id:slot_id name
noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Create a RAID array.
domino:~# ./sas2ircu 0 create RAID1 MAX 1:0 1:1 Test01 noprompt
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Please wait, may take up to a minute...
SAS2IRCU: Volume created successfully.
SAS2IRCU: Command CREATE Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Delete a RAID array.
Format
sas2ircu controller_id deletevolume volume_id noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Delete the RAID array whose ID is 322.
domino:~# ./sas2ircu 0 deletevolume 322 noprompt
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Please wait, may take up to a minute...
SAS2IRCU: Volume deleted successfully.
SAS2IRCU: Command DELETEVOLUME Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Simulate a drive fault.
Format
sas2ircu controller_id action enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Simulate a drive offline fault.
domino:~# ./sas2ircu 0 setoffline 1:2
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: Physical disk set to Offline successfully.
SAS2IRCU: Command SETOFFLINE Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Create and delete a hot spare drive.
Format
sas2ircu controller_id hotspare enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Create a hot spare drive.
domino:~# ./sas2ircu 0 hotspare 1:2
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
WARNING: Proceeding with this operation may cause data loss or data
corruption. Are you sure you want to proceed (YES/NO)?yes
WARNING: This is your last chance to abort this operation. Do you wish
to abort (YES/NO)?no
Please wait,may take up to a minute...
SAS2IRCU: Hot Spare disk created successfully.
SAS2IRCU: Command HOTSPARE Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
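A hot spare drive can be deleted by adding the delete keyword to the hotspare command. The following command is an illustrative sketch for the same drive position:
# Delete the hot spare drive (illustrative).
./sas2ircu 0 hotspare delete 1:2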
Function
View virtual drive status.
Format
sas2ircu controller_id status
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# View virtual drive status.
domino:~# ./sas2ircu 0 status
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Background command progress status for controller 0...
IR volume 1
Volume ID : 323
Current operation : Background Init
volume Status : Enabled
Volume state : Optimal
Volume wwid : 0307b565aa18c1fb
Physical disk I/Os : Not quiesced
Volume size (in sectors) : 1560545280
Number of remaining sectors : 1558128640
Percentage complete : 0.15%
IR volume 2
Volume ID : 323
Current operation : none
volume Status : Enabled
Volume state : Optimal
Volume wwid : 09fb6da3d1048925
Physical disk I/Os : Not quiesced
SAS2IRCU: Command STATUS Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Check consistency.
Format
sas2ircu controller_id constchk volume_id noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Perform a consistency check.
domino:~# ./sas2ircu 0 constchk 322 noprompt
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: Consistency Check Operation started on IR Volume.
SAS2IRCU: Command CONSTCHK Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Activate a RAID array.
Format
sas2ircu controller_id activate volume_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Activate the RAID array whose ID is 322.
domino:~# ./sas2ircu 0 activate 322
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Function
Turn on the UID indicator of a specified drive.
Format
sas2ircu controller_id locate enclosure_id:slot_id on
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Turn on the UID indicator of a specified drive.
domino:~# ./sas2ircu 0 locate 1:0 on
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: LOCATE command completed successfully.
SAS2IRCU: Command LOCATE Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Collect and clear RAID array event logs.
Format
sas2ircu controller_id logir upload name
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Collect RAID array event logs and save them to the local file FW.log.
domino:~# ./sas2ircu 0 logir upload FW.log
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: LogIR command successful.
SAS2IRCU: Command LOGIR Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Set a specified RAID array as the primary boot device.
Format
sas2ircu controller_id bootir volume_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the RAID array whose ID is 322 as the primary boot device.
domino:~# ./sas2ircu 0 bootir 322
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: Command BOOTIR Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Set a specified drive as the primary boot device.
Format
sas2ircu controller_id bootencl enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set a specified drive as the primary boot device.
domino:~# ./sas2ircu 0 bootencl 1:4
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: Command BOOTENCL Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Set a specified RAID array as the alternate boot device.
Format
sas2ircu controller_id altbootir volume_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set a specified RAID array as the alternate boot device.
domino:~# ./sas2ircu 0 altbootir 322
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: Command ALTBOOTIR Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
Function
Set a specified drive as the alternate boot device.
Format
sas2ircu controller_id altbootencl enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set a specified drive as the alternate boot device.
domino:~# ./sas2ircu 0 altbootencl 1:4
Avago Technologies SAS2 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS2IRCU: Command BOOTENCL Completed Successfully.
SAS2IRCU: Utility Completed Successfully.
6 LSI SAS3008IR
6.1 Overview
The LSI SAS3008IR controller card is a 12 Gbit/s SAS controller with the Fusion-
MPT™ architecture. (MPT refers to Message Passing Technology.) Equipped with
PCIe 3.0 x8 ports and a powerful I/O storage engine, the controller card handles
data protection, verification, and restoration.
The LSI SAS3008IR provides 3 Gbit/s, 6 Gbit/s, and 12 Gbit/s SAS ports and 3
Gbit/s and 6 Gbit/s SATA ports. Each port supports SSP, SMP, and STP.
The LSI SAS3008IR supports boot and configuration in legacy and UEFI modes.
The LSI SAS3008IR does not support out-of-band management, but it supports
hybrid configuration of RAID and JBOD. RAID controller cards and attached drives
can be managed through the controller card WebUI and CLI, but not through
out-of-band management tools such as the iBMC.
The LSI SAS3008IR has two structures to provide ease of connection in different
servers:
● LSI SAS3008IR for a rack, X8000, X6800 or X6000 V3 server
The LSI SAS3008IR connects to the mainboard through an XCede connector
and connects to the drive backplane by using two Mini-SAS cables, as shown
in Figure 6-1.
● LSI SAS3008IR for a blade, X6000 V2, G560, or mission-critical server
The LSI SAS3008IR connects to the mainboard through two XCede connectors,
as shown in Figure 6-2.
Figure 6-1 LSI SAS3008IR for a rack, X8000, X6800 or X6000 V3 server
Figure 6-2 LSI SAS3008IR for a blade, X6000 V2, G560, or mission-critical server
Indicators
Table 6-1 describes indicators on the LSI SAS3008IR.
6.2 Functions
The SAS module of the LSI SAS3008IR provides SAS functions and defines the
supported link rates. The LSI SAS3008IR supports the following rates:
● 12 Gbit/s SAS, 6 Gbit/s SAS, and 3 Gbit/s SAS
● 6 Gbit/s SATA, 3 Gbit/s SATA, and 1.5 Gbit/s SATA
NOTICE
● In RAID 1, RAID 1E, or RAID 10, the LSI SAS3008IR firmware disables the write
cache function of the drive by default. In RAID 0, the write cache function is
enabled by default.
● Drives in the same RAID group must be of the same type and specifications.
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● An idle drive that is not added to a RAID array can be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID arrays except RAID 0 support hot spare drives.
NOTE
If a drive in pass-through mode is faulty, the Fault indicator on the drive will not be lit and
the iBMC will not generate an alarm.
NOTICE
After formatting, all data on the drive is cleared. Exercise caution when
performing this operation.
Enabling this feature puts drives in the Unconfigured Good state and idle hot
spare drives into the power saving state. Operations, such as RAID array creation,
hot spare drive creation, dynamic capacity expansion, and rebuild, will wake the
drives from power saving.
NOTE
● After removing a drive, install it after at least 30 seconds. Otherwise, the drive cannot
be identified.
● If a drive in a RAID array is removed and inserted online, the RAID array where the drive
resides will be degraded or faulty. Before removing and inserting a drive, check the
logical status of the drive and the number of faulty drives allowed by the RAID level.
● If you remove and insert a pass-through drive without powering off the OS, the drive
letter in the system may change. Before removing and inserting a pass-through drive,
record the drive letter in the system (see the example after this note).
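One generic way to record drive letters and their persistent identifiers on a Linux host before removing a pass-through drive (an illustrative sketch only; it is not specific to this RAID controller card):
# List block devices and their sizes.
domino:~# lsblk
# Map drive letters to persistent (WWN/serial-based) identifiers.
domino:~# ls -l /dev/disk/by-id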
RAID 0: 2 to 10 drives; 0 faulty drives allowed.
RAID 1: 2 drives; 1 faulty drive allowed.
NOTE
● The failed drives cannot be adjacent. For details, see A.2 RAID Levels.
● On a Windows drive management screen, Bus Type of virtual drives and JBOD drives for
the LSI SAS3008 IR and LSI SAS3008 iMR is RAID, and Bus Type of JBOD drives for the
LSI SAS3008 IT is SAS.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility screen of the LSI SAS3008IR.
1. When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 6-4.
The message "Please wait, invoking SAS Configuration Utility..." is displayed.
2. After the system self-check, the Configuration Utility main screen is displayed,
as shown in Figure 6-5. Table 6-3 describes the parameters.
NOTE
You can view the global properties of the controller card on this screen.
The global properties include RAID controller card startup settings, such as whether to
enable the controller card, whether to modify the controller card boot order, and the
number of devices displayed.
Parameter Description
Parameter Description
Figure 6-6 Setting the boot order of RAID controller cards (1)
1. Select the box in the Boot Order column for a RAID controller card and use +
or - to set the boot order. For example, in Figure 6-7, SAS9300-8i is set as
the primary boot device and SAS3008 is set as the alternate one.
Figure 6-7 Setting the boot order of RAID controller cards (2)
2. Press Esc.
A confirmation dialog box is displayed. See Figure 6-8.
NOTE
If the server is configured with drive controllers of other chips, set the drive boot
device on the BIOS.
– For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter Reference.
– For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter Reference.
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.3.1 Logging In to the Configuration Utility.
▪ If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
4. Press C.
The RAID array creation confirmation screen is displayed.
5. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Figure 6-10 screen is displayed.
----End
Additional Information
Related Tasks
After creating the RAID array, check the configuration result as follows:
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
2. Select View Existing Volume and press Enter.
RAID information is displayed.
Related Concepts
None
Scenarios
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.3.1 Logging In to the Configuration Utility.
Parameter Description
– Yes indicates that the drive will be added to the RAID array.
– No indicates that the drive will not be added to the RAID array.
NOTE
4. Press C.
The RAID array creation confirmation screen is displayed.
5. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Figure 6-10 screen is displayed.
----End
NOTE
After the RAID 1 array is created, the LSI SAS3008IR automatically starts a background
initialization. The background initialization synchronizes data from the primary drive to the
secondary drive. It does not affect the use of the RAID array but affects the performance.
Additional Information
Related Tasks
After creating the RAID array, check the configuration result as follows:
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
2. Select View Existing Volume and press Enter.
RAID information is displayed.
Related Concepts
None
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.3.1 Logging In to the Configuration Utility.
Step 2 Create RAID 1E or 10.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed, as shown in Figure 6-17.
Parameter Description
– A RAID 1E can contain three, five, seven, or nine drives (odd numbers). A RAID 10
array can contain four, six, eight, or ten drives (even numbers).
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
Press the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 6-19.
– Yes indicates that the drive will be added to the RAID array.
– No indicates that the drive will not be added to the RAID array.
4. Press C.
The RAID array creation confirmation screen is displayed.
5. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state. Do not perform any other operations during
this period.
After the configuration is complete, the Figure 6-10 screen is displayed.
----End
NOTE
After the RAID 1E or RAID 10 array is created, the LSI SAS3008IR automatically starts a
background initialization. The background initialization synchronizes data from the primary
drive to the secondary drive. It does not affect the use of the RAID array but affects the
performance.
Additional Information
Related Tasks
After creating the RAID array, check the configuration result as follows:
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed.
2. Select View Existing Volume and press Enter.
RAID information is displayed.
Related Concepts
None
NOTICE
To ensure that the server can start installed systems, you must set boot devices
during RAID configuration.
● If both are configured, the primary and alternate boot devices are started in
sequence.
● If only the primary boot device is configured, the primary boot device is started.
● If only the alternate boot device is configured, the alternate boot device is
started.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 6.3.1 Logging In to
the Configuration Utility.
Step 2 Set the boot device.
1. On the Adapter Properties screen, select SAS Topology and press Enter.
The SAS Topology screen is displayed.
2. On the SAS Topology screen, press ↑ or ↓ to select a drive or RAID controller
card, and press ALT+B (or ALT+A) to set the selected device as the primary
(or alternate) boot option.
NOTE
If drive or RAID array information is collapsed, move the cursor to Controller or RAID
XX VOL and press Enter to expand the information.
After the setting is successful, the value of Device Info for the primary boot
device is Boot, and the value of Device Info for the alternate boot device is
Alt, as shown in Figure 6-20.
----End
Related Operations
If a server is configured with only the LSI SAS2308 and LSI SAS3008IR RAID
controller cards, set the RAID controller card boot order by following "(Optional)
Set the boot order of RAID controller cards" in 6.3.1 Logging In to the
Configuration Utility. If the server is configured with drive controllers of other
chips, set the drive boot device on the BIOS.
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
All configurations described in this document about the LSI SAS3008IR are
performed on the configuration management screen, which can be accessed only
after you restart the server. To monitor RAID status and view RAID configurations
during system running, use the SAS3IRCU tool.
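For example, a quick runtime check with SAS3IRCU (a minimal sketch; controller ID 0 is illustrative, and both commands are described in detail in the SAS3IRCU command sections later in this chapter):
# List all controllers and note the controller index.
domino:~# ./sas3ircu list
# Query RAID array and background task status for controller 0.
domino:~# ./sas3ircu 0 status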
If the boot type is changed after the OS has been installed in Legacy or UEFI
mode, the OS will be inaccessible. To access the OS, you need to change the boot
type to that used when the OS is installed. If the OS needs to be reinstalled, select
the Legacy or UEFI mode based on actual situation.
If multiple boot devices are configured, you are advised to set Boot Type to UEFI
Boot Type because certain boot devices may fail to boot if Boot Type is set to
Legacy Boot Type. If you still want to set Boot Type to Legacy Boot Type, then
disable redirection for certain serial ports or disable PXE for certain NICs based on
the services in use. For details, see "Setting PXE for a NIC" and "Setting Serial Port
Redirection" in the respective BIOS Parameter Reference.
● Huawei Server Brickland Platform BIOS Parameter Reference
● Huawei Server Grantley Platform BIOS Parameter Reference
NOTE
● For the configuration of the LSI SAS3008IR RAID controller card in EFI/UEFI mode,
upgrade the card firmware to V109 or later.
● For details about how to upgrade the RAID controller card firmware, see the
FusionServer Pro Rack Server Upgrade Guide or FusionServer Pro High-Density
Server Upgrade Guide.
Procedure
Step 1 Set the EFI/UEFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI
Mode.
Step 2 Log in to the management screen of the LSI SAS3008IR.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS3008IR and press Enter.
The screen shown in Figure 6-21 is displayed.
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.4.1 Logging In to the Management Screen.
Step 2 Access the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID Level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 6-23.
Select Media Type: Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
----End
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.4.1 Logging In to the Management Screen.
Step 2 Access the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID Level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 6-26.
Select Media Type: Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
----End
NOTE
After the RAID 1 array is created, the LSI SAS3008IR automatically starts a background
initialization. The background initialization synchronizes data from the primary drive to the
secondary drive. It does not affect the use of the RAID array but affects the performance.
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.4.1 Logging In to the Management Screen.
Step 2 Access the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID Level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 6-29.
Select Media Type: Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
– A RAID 1E can contain three, five, seven, or nine drives (odd numbers).
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
----End
NOTE
After the RAID 1E array is created, the LSI SAS3008IR automatically starts a background
initialization. The background initialization synchronizes data from the primary drive to the
secondary drive. It does not affect the use of the RAID array but affects the performance.
NOTICE
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or the data on
drives is not required. If the drive data needs to be retained, back up the data
first.
● The LSI SAS3008IR supports SAS/SATA HDDs and SSDs. Drives in one RAID
array must be of the same type, but can have different capacities or be
provided by different vendors.
● 6.2.11 RAID 0, 1, 10, and 1E lists the number of drives required by each RAID
level.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 6.4.1 Logging In to the Management Screen.
Step 2 Access the Create Configuration screen.
1. On the main screen, select Controller Management and press Enter.
2. Select Create Configuration and press Enter.
Step 3 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID Level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 6-32.
Select Media Type: Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
– A RAID 10 array can contain four, six, eight, or ten drives (even numbers).
– If the total number of drives in all RAID arrays under a RAID controller card
exceeds 14, no drive can be added to RAID arrays.
----End
NOTE
After the RAID 10 array is created, the LSI SAS3008IR automatically starts a background
initialization. The background initialization synchronizes data from the primary drive to the
secondary drive. It does not affect the use of the RAID array but affects the performance.
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
Scenarios
After creating a RAID 1, 1E, or 10 array for the LSI SAS3008IR, you can configure
one or two global hot spare drives to enhance data security.
The LSI SAS3008IR does not support dedicated hot spare drives.
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● An idle drive that is not added to a RAID array can be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID arrays except RAID 0 support hot spare drives.
Prerequisites
The following requirements must be met before you configure hot spare drives:
● The server has idle drives.
● You have logged in to the Configuration Utility. For details, see 6.3.1 Logging
In to the Configuration Utility.
Data
Data preparation is not required for this operation.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 6.3.1 Logging In to
the Configuration Utility.
NOTICE
The data on the drives to be used as hot spare drives will be lost.
When the target drive is selected, press + or the space bar to mark the drive
as a hot spare drive, as shown in Figure 6-38.
In the Hot Spr column, Yes indicates that the drive will be a hot spare drive of
the RAID array.
6. Press C.
The hot spare drive configuration update screen is displayed.
7. Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration
Utility is in the suspended state and no other operations can be performed.
After the configuration is complete, the screen shown in Figure 6-37 is
displayed.
----End
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 6.3.1 Logging In to
the Configuration Utility.
Step 2 Delete a hot spare drive.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed, as shown in Figure 6-39.
6. Press C.
The hot spare drive configuration update screen is displayed.
7. Select Save changes then exit this menu and press Enter.
After the configuration is complete, the Figure 6-41 screen is displayed.
----End
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 6.3.1 Logging In to
the Configuration Utility.
Step 2 Activate the RAID configuration.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed, as shown in Figure 6-43.
----End
Task: Task of the RAID array. None indicates that the RAID array does not have
running tasks in the background.
Parameter Description
NOTE
If the LSI SAS3008IR has more than one RAID array, press Alt+N to view information about
other RAID arrays.
----End
Parameter Description
----End
NOTICE
A deleted RAID array cannot be restored. Exercise caution when deleting a RAID
array.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 6.3.1 Logging In to
the Configuration Utility.
NOTICE
– Press N to quit.
The Figure 6-82 screen is displayed.
----End
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 6.3.1 Logging In
to the Configuration Utility.
Step 2 Check consistency.
1. On the Adapter Properties screen, select RAID Properties and press Enter.
The Select New Volume Type screen is displayed, as shown in Figure 6-51.
----End
----End
----End
NOTE
● You can use the same method to turn off the indicator.
● You can also turn on the hard drive indicators on the OS CLI. For details, see 6.10.2.10
Turning On a Drive UID Indicator.
● During the formatting process, do not shut down or restart the system or remove and
reinstall the hard drive. Otherwise, the hard drive will be corrupted.
● After formatting, all data on the hard drive is cleared. Exercise caution when performing
this operation.
The Device Properties screen of the selected hard drive is displayed and provides
menus for formatting and verifying the hard drive, as shown in Figure 6-55.
2. Press F.
The formatting starts.
When the progress reaches 100%, the formatting is complete, as shown in
Figure 6-57.
2. Press Enter.
The verification process starts, as shown in Figure 6-59.
----End
● ALT+A
Set the selected device as the second boot device.
After the setting is successful, Alt is displayed in Device Info, as shown in
Figure 6-62.
----End
Scenarios
After you have created a RAID 1, RAID 1E or RAID 10 array for the LSI SAS3008IR,
you can configure one or two global hot spare drives to enhance data security.
The LSI SAS3008IR does not support dedicated hot spare drives.
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● An idle drive that is not added to a RAID array can be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID arrays except RAID 0 support hot spare drives.
Prerequisites
The following requirements must be met before you configure hot spare drives:
● The server has idle drives.
● You have logged in to the Configuration Utility. For details, see 6.4.1 Logging
In to the Management Screen.
Data
Data preparation is not required for this operation.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 6.4.1 Logging In
to the Management Screen.
Step 2 Access the virtual drive properties screen.
1. On the main screen, select Virtual Disk Management and press Enter.
2. Select Manage Virtual Disk Properties and press Enter.
The virtual drive properties screen is displayed, as shown in Figure 6-63.
----End
Procedure
Step 1 Log in to the Configuration Utility. For details, see 6.4.1 Logging In to the
Management Screen.
----End
● If the number of faulty or missing drives exceeds the maximum number allowed by
the RAID array, the RAID array cannot be imported.
● To avoid configuration import failure, replace the original RAID controller card with
a new card of the same type.
----End
Step 2 On the main screen, select Controller Management and press Enter.
Step 7 Use ↑ and ↓ to select Clear Foreign Configuration and press Enter.
----End
Scenarios
If the server does not need a RAID array, you can delete the RAID array to release
the drives.
NOTICE
A deleted RAID array cannot be restored. Exercise caution when deleting a RAID
array.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 6.4.1 Logging In to the
Management Screen.
----End
6.7 Troubleshooting
This section describes solutions to drive faults and RAID controller card faults. For
other situations, see the Huawei Server Maintenance Guide.
Solution
Step 1 Determine the slot number of the faulty drive.
NOTE
If a drive in pass-through mode is faulty, the Fault indicator on the drive will not be lit and
the iBMC will not generate an alarm.
● Locate the faulty drive based on the fault indicator, which is steady orange.
For details, see the drive numbering section in the user guide of the server
you use.
● Locate the faulty drive based on the iMana/iBMC drive alarm information. For
details, see iMana/iBMC Alarm Handling.
● Locate the faulty drive using the RAID controller card GUI. For details, see
6.8.3 SAS Topology or 6.9.4.1 View Physical Disk Properties.
● Locate the faulty drive using the RAID controller card CLI tool. For details, see
6.10.2.2 Viewing Device Information. An example is provided after this list.
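For example, the faulty drive can be identified from the OS with the SAS3IRCU display command (a minimal sketch; controller ID 0 is illustrative, and the faulty drive appears with a non-Optimal state among the physical device entries):
# Display controller, physical drive, and virtual drive details for controller 0.
domino:~# ./sas3ircu 0 display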
Step 2 Replace the drive.
NOTICE
Remove the faulty drive and install a new drive. The new drive can be restored in
the following ways based on the RAID configuration of the faulty drive:
● If the RAID array has a redundancy feature and a hot spare drive, the global
hot spare drive automatically replaces the faulty drive for data
synchronization and recovery. After a new drive is inserted into the slot of the
faulty drive, the new drive automatically becomes the hot spare drive.
● If the RAID array has a redundancy feature but no hot spare drive, the
newly installed drive automatically rebuilds data. If more than one faulty
drive exists in a RAID array, replace the faulty drives one by one based on the
drive fault time. Replace the next drive only after the current drive data is
rebuilt.
● If the faulty drive is a pass-through drive, replace it.
● If the faulty drive belongs to a RAID array without redundancy (RAID 0),
create RAID 0 again.
– For details about how to create a RAID 0 array in Legacy mode, see 6.3.2
Creating RAID 0.
– For details about how to create a RAID 0 array in UEFI mode, see 6.4.2
Creating RAID 0.
– For details about how to create a RAID 0 array by running commands,
see 6.10.2.3 Creating and Deleting a RAID Array.
----End
Solution
Step 1 Log in to the iBMC WebUI to view the alarm information.
Step 2 Rectify the fault based on the alarm information. For details, see iBMC Alarm
Handling.
● If the fault is rectified, no further action is required.
● If the fault persists, go to Step 3.
Step 3 Collect and view logs and other necessary fault information.
Step 4 Use the Computing Product Case Library or contact technical support.
NOTE
The Computing Product Case Library is available only to Huawei engineers and partners.
----End
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility of the LSI SAS3008IR.
1. When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 6-73.
The message "Please wait, invoking SAS Configuration Utility..." is displayed.
2. After the system self-check, the Configuration Utility main screen is displayed,
as shown in Figure 6-74. Table 6-14 describes the parameters.
NOTE
You can view the global properties of the controller card on this screen.
The global properties include RAID controller card startup settings, such as whether to
enable the controller card, whether to modify the controller card boot order, and the
number of devices displayed.
Parameter Description
Parameter Description
Figure 6-75 Setting the boot order of RAID controller cards (1)
1. Select the box in the Boot Order column for a RAID controller card and use +
or - to set the boot order. For example, in Figure 6-76, SAS9300-8i is set as
the primary boot device and SAS3008 is set as the alternate one.
Figure 6-76 Setting the boot order of RAID controller cards (2)
2. Press Esc.
A confirmation dialog box is displayed. See Figure 6-77.
NOTE
If the server is configured with drive controllers of other chips, set the drive boot
device on the BIOS.
– For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter Reference.
– For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter Reference.
----End
NOTE
If the number of RAID arrays created reaches the maximum, the Figure 6-80 screen is
displayed after you select RAID Properties on the Adapter Properties screen and press
Enter.
View Existing Volume: This menu allows you to add or delete hot spare drives,
perform consistency checks, and activate, delete, or expand a RAID array. This
option is available only when RAID arrays have been created for the LSI SAS3008IR.
Screen Introduction
Figure 6-81 shows the View Volume screen. Table 6-17 describes the parameters
on the screen.
NOTE
If the LSI SAS3008IR has more than one RAID array, press Alt+N to view information about
other RAID arrays.
Parameter Description
RAID Disk: Indicates whether the drive is a member of the RAID array.
● Yes: The drive is a member of the RAID array.
● No: The drive is not a member of the RAID array.
Parameter Description
NOTICE
Press -, +, or the space bar to specify whether to add or delete a drive, as shown in
Figure 6-83.
Step 4 Press C.
Step 5 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
----End
Checking Consistency
Step 1 Select Manage Volume and press Enter.
----End
----End
NOTICE
● Press N to quit.
The Figure 6-82 screen is displayed.
----End
● The LSI SAS3008IR RAID controller card supports only capacity expansion of RAID 1.
● Before the expansion, check that the online capacity expansion function is enabled for
the firmware of the LSI SAS3008IR RAID controller card.
● Before the expansion, prepare two drives of the same capacity. Ensure that the two
drives use the same storage media as those in RAID 1 and have a greater capacity.
Step 2 Insert a spare drive with a larger capacity into the vacant slot.
Step 3 Ensure that RAID 1 is rebuilt.
Use one of the following methods to check the rebuild progress:
● Run the sas3ircu controller_id status command of SAS3IRCU in the OS (see the
example after this list).
● Check the drive indicator status. When the yellow and green indicators are
blinking at the same time, the rebuild is under way. When the yellow
indicator stops blinking, the rebuild is complete.
● Log in to the iBMC WebUI and check whether the In Failed Array alarm is
cleared.
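A minimal sketch of checking the rebuild progress with SAS3IRCU (controller ID 0 is illustrative; check the Current operation and Percentage complete fields of the volume):
# Query background task progress for controller 0.
domino:~# ./sas3ircu 0 status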
Step 4 Repeat the preceding steps to replace the other member drive in RAID 1.
Step 5 After RAID 1 is rebuilt, select Manage Volume and press Enter.
The Manage Volume screen is displayed, as shown in Figure 6-84.
----End
Screen Overview
Figure 6-85 shows the Create New Volume screen. Table 6-18 describes the
parameters on the screen.
RAID Disk: Indicates whether the drive is a member of the RAID array.
● Yes: The drive is a member of the RAID array.
● No: The drive is not a member of the RAID array.
For details about how to create RAID 1, see 6.3.3 Creating RAID 1.
Screen Overview
On the Select New Volume Type screen, select Create RAID 1E/10 Volume and
press Enter.
The Create New Volume screen is displayed, as shown in Figure 6-86. Table 6-19
describes the parameters on the screen.
RAID Disk: Indicates whether the drive is a member of the RAID array.
● Yes: The drive is a member of the RAID array.
● No: The drive is not a member of the RAID array.
Parameter Description
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 6-87.
● Yes indicates that the drive will be added to the RAID array.
● No indicates that the drive will not be added to the RAID array.
Step 2 Press C.
Step 3 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
----End
Screen Overview
On the Select New Volume Type screen, select Create RAID 0 Volume and press
Enter.
The Create New Volume screen is displayed, as shown in Figure 6-88. Table 6-20
describes the parameters on the screen.
Parameter Description
Parameter Description
RAID Disk: Indicates whether the drive is a member of the RAID array.
● Yes: The drive is a member of the RAID array.
● No: The drive is not a member of the RAID array.
NOTICE
Press -, +, or the space bar to specify whether to add a drive to the RAID array, as
shown in Figure 6-89.
● Yes indicates that the drive will be added to the RAID array.
● No indicates that the drive will not be added to the RAID array.
Step 2 Press C.
Step 3 Select Save changes then exit this menu and press Enter.
RAID array creation takes about 1 minute, during which the Configuration Utility
is in the suspended state. Do not perform any other operations during this period.
----End
Screen Overview
Figure 6-90 shows the SAS Topology screen. Table 6-21 describes the parameters
on the screen.
Parameter Description
NOTE
● You can use the same method to turn off the drive indicator.
● You can also turn on the hard drive indicators on the OS CLI. For details, see 6.10.2.10
Turning On a Drive UID Indicator.
● During the formatting process, do not shut down or restart the system or remove and
reinstall the hard drive. Otherwise, the hard drive will be corrupted.
● After formatting, all data on the hard drive is cleared. Exercise caution when performing
this operation.
The properties of the selected drive are displayed, together with menus for
formatting and verifying the drive, as shown in Figure 6-91.
2. Press F.
The formatting starts.
When the progress reaches 100%, the formatting is complete, as shown in
Figure 6-93.
2. Press Enter.
The verification process starts, as shown in Figure 6-95.
----End
● ALT+A:
This operation sets the device as the second boot device.
Alt is displayed in Device Info for the selected device, as shown in Figure
6-98.
Screen Overview
Figure 6-99 shows the Advanced Adapter Properties screen. Table 6-22
describes the parameters on the screen.
Parameter Description
Parameter Description
Parameter Description
Parameter Description
Step 2 Use ↑ and ↓ to select an item and press Enter to set the value.
----End
Parameter Description
Direct Attached Max Targets to Spinup: Maximum number of directly attached
devices that can spin up simultaneously.
Step 2 Use ↑ and ↓ to select an item and press Enter to set the value.
----End
6.8.5 Exit
Press Esc to exit the LSI SAS3008IR Configuration Utility. The screen shown in
Figure 6-102 is displayed. Table 6-25 describes the operation options.
Operation Description
Operation Description
Exit the Configuration Utility and Reboot: Exits the Configuration Utility and
restarts the server.
Procedure
Step 1 Set the EFI/UEFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI
Mode.
Step 2 Log in to the management screen of the LSI SAS3008IR.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS3008IR and press Enter.
The screen shown in Figure 6-103 is displayed.
----End
Screen Description
Figure 6-106 shows the screen. Table 6-27 describes the parameters on the
screen.
Parameter Description
PCI Slot Number PCI slot number of the RAID controller card.
● For 16P configurations, the value ranges from 0 to 3.
– 0: primary enclosure of SCE 1.
– 1: secondary enclosure of SCE 1.
– 2: primary enclosure of SCE 2.
– 3: secondary enclosure of SCE 2.
● For 32P configurations, the value ranges from 0 to 7.
– 0: primary enclosure of SCE 1.
– 1: secondary enclosure of SCE 1.
– 2: primary enclosure of SCE 2.
– 3: secondary enclosure of SCE 2.
– 4: primary enclosure of SCE 3.
– 5: secondary enclosure of SCE 3.
– 6: primary enclosure of SCE 4.
– 7: secondary enclosure of SCE 4.
Screen Introduction
Figure 6-107 shows the screen.
----End
Screen Introduction
Figure 6-108 shows the screen.
Configuring a RAID
Step 1 Select a RAID level.
1. Use ↑ and ↓ to select Select RAID Level and press Enter.
The list of configurable RAID levels is displayed, as shown in Figure 6-109.
Parameter Description
Select Media Type: Selects a drive type. If the controller card controls drives of
different capacities, you can use this parameter to filter the drives.
----End
----End
Screen Description
Figure 6-113 shows the screen. Table 6-29 describes the parameters on the
screen.
Step 2 Use ↑ and ↓ to select Import Foreign Configuration and press Enter.
A confirmation screen is displayed, as shown in Figure 6-115.
----End
----End
Screen Introduction
Figure 6-116 shows the screen. Table 6-31 describes the parameters on the
screen.
----End
Manage Virtual Disk Properties: Displays and modifies virtual drive properties, and
configures hot spare drives.
Figure 6-119 shows the screen. Table 6-34 describes the parameters on the
screen.
Step 5 Use ↑ and ↓ to select View More Physical Disk Properties and press Enter.
More physical drive properties are displayed, as shown in Figure 6-121. Table
6-35 describes the parameters on the screen.
----End
Compatible Bare Disks: Lists physical drives that can be configured as hot spare
drives.
----End
Step 4 Use ↑ and ↓ to select Unassign Global Hotspare Disk and press Enter.
----End
Parameter Description
Parameter Description
Start Locate / Blink: Turns on the UID indicator of a member drive of the virtual
drive.
Stop Locate / Blink: Turns off the UID indicator of a member drive of the virtual
drive.
----End
----End
Parameter Description
Parameter Description
Locating a Drive
Step 1 Use ↑ and ↓ to select Select Physical Disk and press Enter.
Step 3 Use ↑ and ↓ to select Start Locate / Blink and press Enter.
----End
6.9.5 Exit
Exit the Configuration Utility.
----End
----End
Installing SAS3IRCU
The SAS3IRCU installation method varies depending on the OS type. The following
uses Windows, Linux, and VMware as examples to describe the SAS3IRCU
installation procedure. For the installation procedures for other OSs, see the
Readme file in the software package.
● Installing SAS3IRCU in Windows
a. Upload the tool package applicable to Windows to the server OS.
b. Go to the command-line interface (CLI).
c. Run a command to go to the directory where the SAS3IRCU tool package
resides.
For Windows, SAS3IRCU does not require installation. You can directly run
RAID controller card management commands.
● Installing SAS3IRCU in Linux
a. Use a file transfer tool (for example, PuTTY) to upload the SAS3IRCU
package applicable to Linux to the server OS.
b. Go to the directory where SAS3IRCU resides in shell mode.
For Linux, SAS3IRCU does not require installation. You can directly run
RAID controller card management commands. An example command
sequence is provided after this list.
● Installing SAS3IRCU in VMware
a. Use a file transfer tool (such as PuTTY) to upload the SAS3IRCU package
applicable to VMware to the /temp directory on the server OS.
b. In VMware, run the esxcli software vib install -v=/temp/vmware-xxx-
sas3ircu.vib command to install SAS3IRCU. In the command, /temp/
vmware-xxx-sas3ircu.vib indicates the full path of the SAS3IRCU tool
file.
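A minimal sketch of preparing and verifying SAS3IRCU in Linux, assuming the tool has been uploaded to /opt/sas3ircu (the path is illustrative):
# Enter the directory that contains the tool and make it executable.
domino:~# cd /opt/sas3ircu
domino:~# chmod +x sas3ircu
# Verify that the tool detects the controller.
domino:~# ./sas3ircu list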
NOTE
Function
View all controllers.
Format
sas3ircu list
Parameters
None
Usage Guidelines
None
Example
# View all controllers.
domino:~# ./sas3ircu list
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Adapter Vendor Device SubSys SubSys
Index Type ID ID Pci Address Ven ID Dev ID
----- ------------ ------ ------ ----------------- ------ ------
0 SAS3008 1000h 97h 00h:01h:00h:00h 1000h 3090h
SAS3IRCU: Utility Completed Successfully.
Function
View detailed information about RAID controller cards, physical drives, and virtual
drives.
Format
sas3ircu controller_id display
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# View detailed information about RAID controller cards, physical drives, and
virtual drives.
● You can run the display command to see the information. In the following
example, two drives are set as the primary and alternate boot devices
respectively.
domino:~# ./sas3ircu 0 display
● In the following example, a drive and a RAID array are set as the primary and
alternate boot devices respectively.
domino:~# ./sas3ircu 0 display
Function
Create and delete a RAID array. (The delete command will delete all RAID arrays.)
Format
sas3ircu controller_id create RAIDlevel capacity enclosure_id:slot_id name
noprompt
sas3ircu controller_id delete noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Create a RAID array.
domino:~# ./sas3ircu 0 create RAID1 MAX 1:0 1:1 Test01 noprompt
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
domino:~# ./sas3ircu 0 delete noprompt
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Function
Delete a specified RAID array.
Format
sas3ircu controller_id deletevolume volume_id noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Delete the RAID array whose ID is 322.
domino:~# ./sas3ircu 0 deletevolume 322 noprompt
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Please wait, may take up to a minute...
SAS3IRCU: Volume deleted successfully.
SAS3IRCU: Command DELETEVOLUME Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Simulate a drive fault.
Format
sas3ircu controller_id action enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Simulate a drive offline fault.
domino:~# ./sas3ircu 0 setoffline 1:2
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Physical disk set to Offline successfully.
SAS3IRCU: Command SETOFFLINE Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Create and delete a hot spare drive.
Format
sas3ircu controller_id hotspare enclosure_id:slot_id
sas3ircu controller_id hotspare delete enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the drive in slot 2 of enclosure 1 as a hot spare drive.
domino:~# ./sas3ircu 0 hotspare 1:2
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
WARNING: Proceeding with this operation may cause data loss or data
corruption. Are you sure you want to proceed (YES/NO)?yes
WARNING: This is your last chance to abort this operation. Do you wish
to abort (YES/NO)?no
Please wait,may take up to a minute...
SAS3IRCU: Hot Spare disk created successfully.
SAS3IRCU: Command HOTSPARE Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
View virtual drive status.
Format
sas3ircu controller_id status
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 6.10.2.1 Viewing
All Controllers.
Usage Guidelines
None
Example
# View virtual drive status.
domino:~# ./sas3ircu 0 status
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Background command progress status for controller 0...
IR volume 1
Volume ID : 323
Current operation : Background Init
volume Status : Enabled
Volume state : Optimal
Volume wwid : 0307b565aa18c1fb
Physical disk I/Os : Not quiesced
Volume size (in sectors) : 1560545280
Number of remaining sectors : 1558128640
Percentage complete : 0.15%
IR volume 2
Volume ID : 323
Current operation : none
volume Status : Enabled
Volume state : Optimal
Volume wwid : 09fb6da3d1048925
Physical disk I/Os : Not quiesced
SAS3IRCU: Command STATUS Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Perform a consistency check.
Format
sas3ircu controller_id constchk volume_id noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Perform a consistency check.
domino:~# ./sas3ircu 0 constchk 322 noprompt
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Consistency Check Operation started on IR Volume.
SAS3IRCU: Command CONSTCHK Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Activate a RAID array.
Format
sas3ircu controller_id activate volume_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Activate the RAID array whose ID is 322.
domino:~# ./sas3ircu 0 activate 322
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: ACTIVATE Volume 322 Passed!
SAS3IRCU: Command ACTIVATE Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Turn on the UID indicator of a specified drive.
Format
sas3ircu controller_id locate enclosure_id:slot_id on
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Turn on the UID indicator of a specified drive.
domino:~# ./sas3ircu 0 locate 1:0 on
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: LOCATE command completed successfully.
SAS3IRCU: Command LOCATE Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Collect and clear RAID array event logs.
Format
sas3ircu controller_id logir upload name
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Collect RAID array event logs and save them to the local file FW.log.
domino:~# ./sas3ircu 0 logir upload FW.log
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: LogIR command successful.
SAS3IRCU: Command LOGIR Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Set a specified RAID array as the primary boot device.
Format
sas3ircu controller_id bootir volume_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the RAID array whose ID is 322 as the primary boot device.
domino:~# ./sas3ircu 0 bootir 322
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Command BOOTIR Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Set a specified drive as the primary boot device.
Format
sas3ircu controller_id bootencl enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set a specified drive as the primary boot device.
domino:~# ./sas3ircu 0 bootencl 1:4
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Command BOOTENCL Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Set a specified RAID array as the second boot device.
Format
sas3ircu controller_id altbootir volume_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the RAID array whose ID is 322 as the alternate boot device.
domino:~# ./sas3ircu 0 altbootir 322
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Command ALTBOOTIR Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Set a specified drive as the alternate boot device.
Format
sas3ircu controller_id altbootencl enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set a specified drive as the alternate boot device.
domino:~# ./sas3ircu 0 altbootencl 1:4
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Command BOOTENCL Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
7 LSI SAS3008IT
7.1 Overview
The LSI SAS3008IT controller card is a 12 Gbit/s SAS controller with the Fusion-
MPT™ architecture. (MPT refers to Message Passing Technology.) Equipped with
PCIe 3.0 x8 ports and a powerful I/O storage engine, the controller card handles
data protection, verification, and restoration.
The LSI SAS3008IT provides 3 Gbit/s, 6 Gbit/s, and 12 Gbit/s SAS ports as well as 3 Gbit/s and 6 Gbit/s SATA ports. Each port supports the SSP, SMP, and STP protocols.
The LSI SAS3008IT controller card supports out-of-band management and pass-through (transparent transmission) but does not provide RAID functions. Out-of-band management tools such as the iBMC can manage the controller card and its attached drives; however, RAID array configuration and operations are not supported.
The LSI SAS3008IT is available in two structures to provide ease of connection in different servers:
● LSI SAS3008IT for a rack, X8000, X6800, or X6000 V3 server
Figure 7-1 LSI SAS3008IT for a rack, X8000, X6800, or X6000 V3 server
● LSI SAS3008IT for a blade, X6000 V2, G560, or mission-critical server
Figure 7-2 LSI SAS3008IT for a blade, X6000 V2, G560, or mission-critical server
7.2 Functions
NOTE
If a drive in pass-through mode is faulty, the Fault indicator on the drive will not be lit and
the iBMC will not generate an alarm.
NOTE
● After removing a drive, wait at least 30 seconds before reinstalling it. Otherwise, the drive cannot be identified.
● If a drive in a RAID array is removed and inserted online, the RAID array where the drive
resides will be degraded or faulty. Before removing and inserting a drive, check the
logical status of the drive and the number of faulty drives allowed by the RAID level.
● If you remove and insert a pass-through drive without powering off the OS, the drive letter in the system may change. Before removing and inserting a pass-through drive, record the drive letter in the system (see the sketch after this note).
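The following is a minimal sketch of how the current drive letters might be recorded on a Linux host before the operation. These are generic Linux commands rather than RAID controller card tools, and the available columns may vary with your distribution:
lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT   # list block devices with their sizes and serial numbers
ls -l /dev/disk/by-id/                 # map persistent drive identifiers to the current device names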
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility.
1. When the message "Press Ctrl-C to start LSI Corp Configuration Utility" is
displayed during server startup, press Ctrl+C, as shown in Figure 7-3.
The message "Please wait, invoking SAS Configuration Utility..." is displayed.
2. After the system self-check, the Configuration Utility main screen is displayed,
as shown in Figure 7-4. Table 7-1 describes the parameters.
NOTE
You can view the global properties of the controller card on this screen.
The global properties include RAID controller card startup settings, such as whether to
enable the controller card, whether to modify the controller card boot order, and the
number of devices displayed.
Parameter Description
Parameter Description
Figure 7-5 Setting the boot order of RAID controller cards (1)
1. Select the box in the Boot Order column for a RAID controller card and use +
or - to set the boot order. For example, in Figure 7-6, 9305-24i is set as the
primary boot device and SAS3008 is set as the alternate one.
Figure 7-6 Setting the boot order of RAID controller cards (2)
2. Press Esc.
A confirmation dialog box is displayed. See Figure 7-7.
NOTE
If the server is configured with drive controllers of other chips, set the drive boot device in the BIOS.
– For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter Reference.
– For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter Reference.
Step 4 Select the LSI SAS3008IT controller card and press Enter.
The Adapter Properties screen is displayed, as shown in Figure 7-9. Table 7-2
describes the controller card properties.
----End
Scenarios
NOTICE
To ensure that the server can start installed systems, you must set boot devices
during RAID configuration.
● If both are configured, the primary and alternate boot devices are tried in sequence.
● If only the primary boot device is configured, the primary boot device is
started.
● If only the alternate boot device is configured, the alternate boot device is
started.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 7.3.1 Logging In to
the Configuration Utility.
Step 2 Set the boot device.
1. On the Adapter Properties screen, select SAS Topology and press Enter.
The SAS Topology screen is displayed.
2. On the SAS Topology screen, press ↑ or ↓ to select a drive, and press ALT+B
(or ALT+A) to set the selected device as the primary (or alternate) boot
device.
NOTE
If the drive information is collapsed, move the cursor to Direct Attach Devices and
press Enter.
After the setting is successful, the value of Device Info for the primary boot
device is Boot, and the value of Device Info for the alternate boot device is
Alt, as shown in Figure 7-10.
----End
Procedure
Step 1 Set the EFI/UEFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI
Mode.
Step 2 Log in to the management screen of the LSI SAS3008IT.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS3008IT controller card and press Enter.
The screen shown in Figure 7-11 is displayed.
----End
If the server is configured with multiple RAID controller cards, you need to set the
boot device in the BIOS in EFI/UEFI mode. For details, see "Setting the Boot
Device" in the respective BIOS Parameter Reference.
7.5 Troubleshooting
This topic describes solutions to drive faults and RAID controller card faults. For
other situations, see the Huawei Server Maintenance Guide.
Symptoms
A drive is faulty if any of the following occurs:
Solution
Step 1 Determine the slot number of the faulty drive.
NOTE
If a drive in pass-through mode is faulty, the Fault indicator on the drive will not be lit and
the iBMC will not generate an alarm.
● Locate the faulty drive based on the fault indicator, which is steady orange.
For details, see the drive numbering section in the user guide of the server
you use.
● Locate the faulty drive based on the iMana/iBMC drive alarm information. For
details, see iMana/iBMC Alarm Handling.
● Locate the faulty drive using the RAID controller card GUI. For details, see
7.6.1 SAS Topology or 7.7.2.1 View Physical Disk Properties.
● Locate the faulty drive using the RAID controller card CLI tool. For details, see
7.8.2.2 Viewing RAID Controller Card and Physical Drive Information.
----End
Symptom
● Data cannot be written into the drives of the controller card.
● The server reports an alarm indicating a controller card fault.
Solution
Step 1 Replace the controller card. Then check whether the alarm is cleared.
For details about how to replace a RAID controller card, see the user guide
delivered with your server.
----End
Screen Introduction
Figure 7-13 shows the Device Identifier screen. Table 7-3 describes the
parameters on the screen.
Parameter Description
Screen Introduction
Figure 7-15 shows the Advanced Adapter Properties screen. Table 7-4 describes
the parameters on the screen.
Parameter Description
Step 2 Use ↑ and ↓ to select an item and press Enter to set the value.
----End
Parameter Description
Direct Attached Max Targets to Spinup Maximum number of directly attached devices that can spin up simultaneously.
Parameter Description
Step 2 Use ↑ and ↓ to select an item and press Enter to set the value.
----End
7.6.3 Exit
Press Esc to exit the LSI SAS3008IT Configuration Utility. The screen shown in
Figure 7-18 is displayed. Table 7-7 describes the operation options.
Parameter Description
Exit the Configuration Utility and Reboot Exits the Configuration Utility and restarts the server.
Screen Description
Figure 7-20 shows the screen. Table 7-9 describes the parameters on the screen.
Property Description
PCI Slot Number PCI slot number of the RAID controller card.
● For 16P configurations, the value ranges from 0 to 3.
– 0: primary enclosure of SCE 1.
– 1: secondary enclosure of SCE 1.
– 2: primary enclosure of SCE 2.
– 3: secondary enclosure of SCE 2.
● For 32P configurations, the value ranges from 0 to 7.
– 0: primary enclosure of SCE 1.
– 1: secondary enclosure of SCE 1.
– 2: primary enclosure of SCE 2.
– 3: secondary enclosure of SCE 2.
– 4: primary enclosure of SCE 3.
– 5: secondary enclosure of SCE 3.
– 6: primary enclosure of SCE 4.
– 7: secondary enclosure of SCE 4.
Screen Introduction
Figure 7-21 shows the screen.
Parameter Description
Parameter Description
Locating a Drive
Step 1 Use ↑ and ↓ to select Select Physical Disk and press Enter.
Step 2 Select a physical drive and press Enter.
Step 3 Use ↑ and ↓ to select Start Locate / Blink and press Enter.
----End
----End
Installing SAS3IRCU
The SAS3IRCU installation method varies depending on the OS type. The following
uses Windows, Linux, and VMware as examples to describe the SAS3IRCU
installation procedure. For the installation procedures for other OSs, see the
Readme file in the software package.
● Installing SAS3IRCU in Windows
a. Upload the tool package applicable to Windows to the server OS.
b. Go to the command-line interface (CLI).
c. Run a command to go to the directory where the SAS3IRCU tool package
resides.
For Windows, SAS3IRCU does not require installation. You can directly run
RAID controller card management commands.
● Installing SAS3IRCU in Linux
a. Use a file transfer tool (for example, PuTTY) to upload the SAS3IRCU
package applicable to Linux to the server OS.
b. Go to the directory where SAS3IRCU resides in shell mode.
For Linux, SAS3IRCU does not require installation. You can directly run RAID controller card management commands (see the sketch after this list).
● Installing SAS3IRCU in VMware
NOTE
When installing SAS3IRCU in the VMware OS, you must use the Huawei-issued RAID
card driver instead of the RAID card driver of the OS.
a. Use a file transfer tool (such as PuTTY) to upload the SAS3IRCU package
applicable to VMware to the /temp directory on the server OS.
Function
View all controllers.
Format
sas3ircu list
Parameters
None.
Usage Guidelines
None
Example
# View all controllers.
domino:~# ./sas3ircu list
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
Adapter Vendor Device SubSys SubSys
Index Type ID ID Pci Address Ven ID Dev ID
----- ------------ ------ ------ ----------------- ------ ------
0 SAS3008 1000h 97h 00h:01h:00h:00h 1000h 3090h
SAS3IRCU: Utility Completed Successfully.
Function
View detailed information about RAID controller cards and physical drives.
Format
sas3ircu controller_id display
Parameters
Parameter Description Value
Usage Guidelines
None.
Example
# View detailed information about RAID controller cards and physical drives.
You can run the display command to see the information. In the following
example, two drives are set as the primary and alternate boot devices respectively.
domino:~# ./sas3ircu 0 display
Function
Simulate a drive fault.
Format
sas3ircu controller_id action enclosure_id:slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Simulate a drive offline fault.
domino:~# ./sas3ircu 0 setoffline 1:2
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: Physical disk set to Offline successfully.
SAS3IRCU: Command SETOFFLINE Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
Function
Turn on the UID indicator of a specified drive.
Format
sas3ircu controller_id locate enclosure_id:slot_id on
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Turn on the UID indicator of a specified drive.
domino:~# ./sas3ircu 0 locate 1:0 on
Avago Technologies SAS3 IR Configuration Utility.
Version 13.00.00.00 (2016.03.08)
Copyright (c) 2009-2016 Avago Technologies. All rights reserved.
SAS3IRCU: LOCATE command completed successfully.
SAS3IRCU: Command LOCATE Completed Successfully.
SAS3IRCU: Utility Completed Successfully.
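Based on the command format above, the UID indicator can presumably be turned off again by replacing on with off, for example ./sas3ircu 0 locate 1:0 off. Verify this usage against the help output of your sas3ircu version.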
8 LSI SAS3108
8.1 Overview
The LSI SAS3108 controller card is a 12 Gbit/s SAS controller with the MegaRAID
architecture. Equipped with PCIe 3.0 x8 ports and a powerful I/O storage engine,
the controller card handles data protection, verification, and restoration.
In addition to delivering better system performance, the controller card supports fault-tolerant data storage across multiple drive partitions and concurrent read/write operations on multiple drives, which makes data access faster.
The LSI SAS3108 supports boot and configuration in legacy and UEFI modes.
The built-in cache improves performance as follows:
● Data is directly written to the cache. The RAID controller card updates data to
drives after data is accumulated to some extent in the cache. This implements
data writing in batches. In addition, the cache improves the overall data write
speed due to its higher speed than a drive.
● Data is directly read from the cache, reducing the response time from 6 ms to
less than 1 ms.
NOTE
The LSI SAS3108 has two structures to provide ease of connection in different
servers:
● LSI SAS3108 for a rack, X8000, X6800 or X6000 V3 server
The LSI SAS3108 connects to the mainboard through an XCede connector and
connects to the drive backplane by using two mini-SAS cables, as shown in
Figure 8-1.
● LSI SAS3108 for a blade, G560, or X6000 V2 server
The LSI SAS3108 connects to the mainboard through two XCede connectors,
as shown in Figure 8-2.
Figure 8-1 LSI SAS3108 for a rack, X8000, X6800 or X6000 V3 server
Indicators
The LSI SAS3108 screw-in RAID controller card is classified into two types, as
shown in Figure 8-3 and Figure 8-4.
8.2 Functions
● 32 or 240 physical drives (The LSI SAS3108 supports two types of RAID keys,
which support 32 and 240 drives respectively).
NOTE
The total number of drives of all RAID arrays is the sum of the hot spare drives, the
idle drives (drives in Unconfigured Good status), and the drives added in RAID arrays.
● 64 virtual drives, specified by Virtual Drive on the Configuration Utility.
● 64 drive groups and 32 or 240 drives for each group, specified by Drive Group
on the Configuration Utility.
● 16 virtual drives (specified by Virtual Drive on the Configuration Utility
screen) for each group (specified by Drive Group on the Configuration
Utility screen)
Table 8-4 lists the supported RAID levels and quantity of drives.
RAID 5 3 to 32 member drives, 1 failed drive allowed
RAID 6 3 to 32 member drives, 2 failed drives allowed
NOTE
● Global hot spare drive: shared by all RAID arrays of a controller. One or more global hot spare drives can be configured for a controller. A global hot spare drive automatically replaces a failed drive of the same type in any RAID array.
For details, see 8.6.1.1 Configuring a Global Hot Spare Drive.
● Dedicated hot spare drive: replaces a failed drive only in a specified RAID
array of a controller. One or more dedicated hot spare drives can be
configured for each RAID array. The hot spare drive automatically takes over
services of a failed member drive only in a specified RAID array.
For details, see 8.6.1.2 Configuring a Dedicated Hot Spare Drive.
A hot spare drive must have at least the capacity of a member drive.
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● The HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are SAS drives, the SATA drives can be used as dedicated hot spare drives. If the member drives are SATA drives, the SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot
be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
Emergency Spares
After the emergency spare function is enabled for a RAID array that supports redundancy and has no hot spare drive specified, an idle drive automatically replaces a member drive of the same type that is in the fail or prefail state and rebuilds data to avoid data loss.
The capacity of the idle drive used to rebuild data must be greater than or equal
to that of a member drive.
NOTE
● After removing a drive, wait at least 30 seconds before reinstalling it. Otherwise, the drive cannot be identified.
● Before removing and inserting a drive, check the logical status of the drive and the
number of faulty drives allowed by the RAID level.
● If a member drive in a RAID array is manually removed and inserted online, the drive is
identified as a member of an external RAID array. As a result, the drive is considered
faulty. If this fault occurs, set the drive to Unconfigured Good and follow steps provided
in 8.5.5 Importing or Clearing a Foreign Configuration or 8.6.8 Importing or
Clearing a Foreign Configuration to restore the RAID array. You do not need to replace
the drive.
● If you remove and insert a pass-through drive without powering off the OS, the drive
letter in the system may change. Before removing and inserting a pass-through drive,
record the drive letter in the system.
8.2.4 Copyback
If a member drive of a RAID array becomes faulty, a hot spare drive automatically
replaces the failed drive and starts data synchronization. Once the faulty drive has
been replaced with a newly installed data drive, data is copied from the hot spare
drive to the new data drive. Once the data copyback is complete, the hot spare
drive is restored to the hot spare state.
NOTE
● Different types of hot spare copyback have the same performance. During the copyback,
the RAID and drive status changes as follows:
● The RAID array status remains Optimal.
● The status of the hot spare drive changes from Online to Hot Spare.
● The status of the newly added drive changes from Copyback or Copybacking to
Online.
● You can use the command line tool to pause or resume copyback. For details, see
4.11.2.15 Querying and Setting RAID Rebuild, Copyback, and Patrolread Functions.
● If the server is restarted during the copyback, the progress is saved.
Striping
Multiple processes accessing a drive at the same time may cause drive conflicts.
Most drives are specified with thresholds for the access count (I/O operations per
second) and data transmission rate (data volume transmitted per second). If the
thresholds are reached, new access requests will be suspended.
The striping technology evenly distributes I/O loads to multiple physical drives. It
divides continuous data into multiple blocks and saves them to different drives.
This allows multiple processes to access these data blocks concurrently without
causing any drive conflicts. Striping also optimizes concurrent processing
performance in sequential access to data.
● Strip width: number of physical drives in a RAID array
Increasing the strip width improves RAID read/write performance, because more drives provide more strips for concurrent read/write operations. Under the same circumstances, a RAID array consisting of eight 18 GB drives provides better transmission performance than a RAID array consisting of four 36 GB drives.
● Strip size: size of a strip data block on each drive
Stripes
The storage space of each member drive in a RAID array is striped based on the
strip size. The data written to the drives is also sliced based on the strip size.
Options include 64 KB, 128 KB, 256 KB, 512 KB, and 1 MB. The default value is
256 KB.
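As a simple worked example of the two parameters: in a RAID 0 array with four member drives and the default 256 KB strip size, the strip width is 4, one full stripe holds 4 x 256 KB = 1 MB of data, and a 1 MB sequential write is sliced into four 256 KB strips that are written to the four drives concurrently.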
Table 8-5 lists the minimum numbers of drives to be added for RAID level
migration.
Table 8-5 Minimum numbers of drives to be added for RAID level migration
8.2.7 Initialization
Drive Initialization
The two types of initialization are:
NOTE
● Only RAID 0, RAID 1, RAID 5, and RAID 6 support capacity expansion through drive
addition.
● RAID 10, 50, and 60 do not support capacity expansion through drive addition.
● If a RAID array contains two or more VDs, its capacity cannot be expanded through
drive addition.
● During capacity expansion, you need to add two drives to RAID 1 each time, and only
one drive to RAID 0, RAID 5, or RAID 6 each time.
● When the available capacity of a drive group is increased by replacing a member drive
and the capacity to be expanded exceeds the original available capacity of the drive
group, use the CLI to expand the capacity. For details, see 8.11.2.20 Increasing Member
Drive Available Space to Expand RAID.
● Simple: erases data on a virtual or physical drive for only one round.
● Normal: erases data on a virtual or physical drive for three rounds.
● Thorough: erases data on a virtual or physical drive for nine rounds.
The LSI SAS3108 supports secure data erasure of physical and virtual drives.
Storing write failure data in the power failure protection zone and rewriting the
data when appropriate can solve the write hole problem. The main protection
scenarios are as follows:
● Write hole protection is only supported for RAID 5, 6, and 50. Write hole data
is stored and write hole protection is automatic.
– Available power failure protection device: write hole data is recovered on
restart.
– Unavailable power failure protection device: write hole data is lost on
restart.
● When a drive of RAID 6 goes offline, write hole protection is supported.
● When RAID 50 is partially degraded, write hole protection is supported and
data will be restored by span. When RAID 50 is fully degraded, write hole
protection is not supported.
● Read Ahead or Ahead: The LSI SAS3108 caches the data that follows the
data being read for faster access. This policy reduces drive seeks and shortens
read time.
NOTE
To achieve optimal drive performance, set the policy to Read Ahead or Ahead for
HDDs and No Read Ahead or Normal for SSDs.
● Write Back: After the cache receives host data, the LSI SAS3108 signals the
host that the data transmission is complete.
Data is directly written to the cache. The LSI SAS3108 updates data to drives
only after the cache data is accumulated to a specified extent. This enables
data writing in batches. In addition, the cache increases the overall data write
speed due to its higher speed than a drive.
● Enabling caching greatly improves the server write performance. If the server
write pressure decreases, or if the cache is nearly full, data is migrated from
the cache to drives.
● Enabling caching also increases the data loss risk. If a server is powered off
unexpectedly, data in the cache gets lost.
If a power failure occurs, the supercapacitor supplies power and cache data is
written to the NAND flash of the supercapacitor protection module.
RAID controller cards calibrate power in the following process to ensure power
stability:
During the calibration process, the write policy of the controller card is
automatically changed to WT to ensure data integrity, and the controller card
performance decreases accordingly. The power calibration duration depends on
the supercapacitor discharge speed.
8.2.13 CacheCade
Of all RAID controller cards that use the LSI SAS3108 chip, only SR530C and
SR530C-M support CacheCade Pro 2.0.
You can use SSDs to create a RAID array dedicated for CacheCade and function as
level-2 cache to provide better I/O performance for the existing RAID array.
The system allocates resources for patrol read based on the I/O workload. For
example, if the I/O workload is high, the system allocates fewer resources for
patrol read to ensure higher priority of I/O operations.
Patrol read cannot be performed on a drive that has any of the following
operations in progress:
NOTE
● The JBOD mode can be enabled only when the LSI SAS3108 firmware version is
4.650.00-6121 or later. To use drives in JBOD mode, you must install a corresponding
RAID controller card driver.
● After the JBOD mode is enabled for the LSI SAS3108, the status of the physical drives that have been configured as virtual drives remains unchanged, and the other unconfigured physical drives become pass-through drives (in the JBOD state). To configure a pass-through drive as a virtual drive, set the drive status to Unconfigured Good first.
● If a drive in JBOD mode is in UBAD status, the Fault indicator on the drive will be lit and
the iBMC will generate an alarm.
Import the foreign configuration to the current controller for the configuration to
take effect. This import process is also called configuration migration.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility.
During server startup, press Ctrl+R when the message "Press <Ctrl><R> to Run
MegaRAID Configuration Utility" is displayed.
The SAS3108 BIOS Configuration Utility screen is displayed, as shown in Figure
8-6. Table 8-6 describes the tabs.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Press Enter in the RAID Level area, and use ↑ or ↓ to set the RAID level to RAID-0.
NOTE
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Step 7 (Optional) Create multiple virtual drives.
1. After creating a drive group, choose Drive Group on the VD Mgmt screen, press F2, and select Add New VD.
The Add VD in Drive Group screen is displayed.
2. Enter the virtual drive capacity in Size, select OK, and press Enter.
A confirmation message is displayed.
3. Select OK and press Enter.
The VD Mgmt screen is displayed. Virtual drives are created.
Step 8 Set advanced properties.
1. In the main menu displayed in Figure 8-8, select Advanced by using ↓, and press Enter.
Parameter Description
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware
version of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Parameter Description
I/O Policy Options for data I/O of special virtual drives. This
policy does not affect cache prefetch. The options
are as follows:
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: The current drive cache policy
remains unchanged.
Parameter Description
NOTICE
Initialization will damage data on drives. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-9.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Log in to the Create Virtual Drive screen.
1. Press Ctrl+P or Ctrl+N to switch to the VD Mgmt tab.
2. Use ↑ and ↓ to select SAS3108 (Bus 0x01, Dev 0x00), and press F2.
3. On the screen displayed, select Create Virtual Drive and press Enter.
The screen shown in Figure 8-11 is displayed.
This step is optional for RAID 0, 1, 5, and 6, but mandatory for RAID 10, 50, and 60.
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Step 7 (Optional) Create multiple virtual drives.
1. After creating a drive group, choose Drive Group on the VD Mgmt screen, press F2, and select Add New VD.
The Add VD in Drive Group screen is displayed.
2. Enter the virtual drive capacity in Size, select OK, and press Enter.
A confirmation message is displayed.
3. Select OK and press Enter.
The VD Mgmt screen is displayed. Virtual drives are created.
Step 8 Set advanced properties.
1. In the main menu displayed in Figure 8-12, select Advanced by using ↓, and press Enter.
The screen for setting advanced RAID properties is displayed, as shown in
Figure 8-13.
Item Description
Write Policy Write policy of the virtual drive. The write policy of
virtual drives varies with the firmware version of the
LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Item Description
I/O Policy Data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: uses the current cache policy.
The default value is Unchanged.
Item Description
NOTICE
Initialization will damage data on hard disks. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-13.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Log in to the Create Virtual Drive screen.
1. Press Ctrl+P or Ctrl+N to switch to the VD Mgmt tab.
2. Use ↑ and ↓ to select SAS3108 (Bus 0x01, Dev 0x00), and press F2.
3. On the screen displayed, select Create Virtual Drive and press Enter.
The screen shown in Figure 8-15 is displayed.
This step is optional for RAID 0, 1, 5, and 6, but mandatory for RAID 10, 50, and 60.
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Item Description
Write Policy Write policy of the virtual drive. The write policy of
virtual drives varies with the firmware version of the
LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Item Description
I/O Policy Data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: uses the current cache policy.
The default value is Unchanged.
Item Description
NOTICE
Initialization will damage data on hard disks. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-17.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Log in to the Create Virtual Drive screen.
1. Press Ctrl+P or Ctrl+N to switch to the VD Mgmt tab.
2. Use ↑ and ↓ to select SAS3108 (Bus 0x01, Dev 0x00), and press F2.
3. On the screen displayed, select Create Virtual Drive and press Enter.
The screen shown in Figure 8-19 is displayed.
This step is optional for RAID 0, 1, 5, and 6, but mandatory for RAID 10, 50, and 60.
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Item Description
Write Policy Write policy of the virtual drive. The write policy of
virtual drives varies with the firmware version of the
LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Item Description
I/O Policy Data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: uses the current cache policy.
The default value is Unchanged.
Item Description
NOTICE
Initialization will damage data on hard disks. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-21.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Log in to the Create Virtual Drive screen.
1. Press Ctrl+P or Ctrl+N to switch to the VD Mgmt tab.
2. Use ↑ and ↓ to select SAS3108 (Bus 0x01, Dev 0x00), and press F2.
3. On the screen displayed, select Create Virtual Drive and press Enter.
The screen shown in Figure 8-23 is displayed.
This step is optional for RAID 0, 1, 5, and 6, but mandatory for RAID 10, 50, and 60.
RAID 10 supports 2 to 8 spans. Each span supports an even number of drives, for
example, 2, 4, 6...16 or 32. The number of drives in each span must be the same.
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Item Description
Write Policy Write policy of the virtual drive. The write policy of
virtual drives varies with the firmware version of the
LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Item Description
I/O Policy Data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: uses the current cache policy.
The default value is Unchanged.
Item Description
NOTICE
Initialization will damage data on hard disks. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-25.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Log in to the Create Virtual Drive screen.
1. Press Ctrl+P or Ctrl+N to switch to the VD Mgmt tab.
2. Use ↑ and ↓ to select SAS3108 (Bus 0x01, Dev 0x00), and press F2.
3. On the screen displayed, select Create Virtual Drive and press Enter.
The screen shown in Figure 8-27 is displayed.
This step is optional for RAID 0, 1, 5, and 6, but mandatory for RAID 10, 50, and 60.
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Item Description
Write Policy Write policy of the virtual drive. The write policy of
virtual drives varies with the firmware version of the
LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Item Description
I/O Policy Data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: uses the current cache policy.
The default value is Unchanged.
Item Description
NOTICE
Initialization will damage data on hard disks. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-29.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Log in to the Create Virtual Drive screen.
1. Press Ctrl+P or Ctrl+N to switch to the VD Mgmt tab.
2. Use ↑ and ↓ to select SAS3108 (Bus 0x01, Dev 0x00), and press F2.
3. On the screen displayed, select Create Virtual Drive and press Enter.
The screen shown in Figure 8-31 is displayed.
– If multiple virtual drives are not required, go to Step 8 after setting the virtual
drive name. The RAID capacity is set to the maximum value by default.
– If the drive group needs to be divided into multiple virtual drives, manually set the
virtual drive capacity. For details, see Step 6 to Step 7. Each drive group supports a
maximum of 16 virtual drives.
2. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
3. Select OK and press Enter.
The VD Mgmt screen is displayed.
Item Description
Write Policy Write policy of the virtual drive. The write policy of
virtual drives varies with the firmware version of the
LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Item Description
I/O Policy Data I/O of special virtual drives. This policy does
not affect cache prefetch.
– Direct:
Disk cache Policy Cache policy of physical drives (valid only for drives
with cache).
– Enable: Data is cached on drives to improve
write performance. However, if no protection
mechanism is available when the system is
powered off unexpectedly, data in the cache will
be lost.
– Disable: Data is not cached on drives during a
write process. Data will not be lost when the
system is powered off unexpectedly.
– Unchanged: uses the current cache policy.
The default value is Unchanged.
Item Description
NOTICE
Initialization will damage data on hard disks. If the original data on the drives
needs to be retained, do not select Initialize on the screen shown in Figure
8-33.
----End
Procedure
Step 1 Access the Configuration Utility screen. Press Ctrl+N to switch to the Ctrl Mgmt
tab.
Step 2 Select boot devices and press Enter.
Virtual and independent drives that can be configured as boot devices are listed
here, as shown in Figure 8-35.
----End
Related Operations
If a server is configured with drive controllers of different chips, set the drive boot device in the BIOS.
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
Procedure
Step 1 Set the EFI mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
Step 2 Log in to the management screen of the LSI SAS3108.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Parameter Description
Main Menu Specifies the main menu of the RAID controller card.
All operations on the RAID controller card are
available.
View Server Profile Displays and manages RAID controller card properties.
Parameter Description
----End
Scenarios
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Parameter Description
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Parameter Description
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Parameter Description
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Parameter Description
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
Parameter Description
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 Access the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 8-39.
Table 8-17 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Parameter Description
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Parameter Description
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Parameter Description
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Parameter Description
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
Parameter Description
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 Access the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Choose Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 8-41.
Table 8-19 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Parameter Description
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Parameter Description
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
Parameter Description
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Parameter Description
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
Parameter Description
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 Access the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 8-43.
Table 8-21 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
Step 2 Access the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 8-45.
Table 8-23 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
NOTE
Configure multiple spans for RAID 10. Figure 8-47 shows the configuration screen.
At least two spans must be created for a RAID 10 array. A maximum of eight spans can be
created.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 Access the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 8-48.
Table 8-25 describes the parameters on the screen.
Parameter Description
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
NOTE
At least two spans must be created for a RAID 50 array. A maximum of eight spans can be
created.
----End
NOTICE
● Data will be cleared from the drives added to a RAID array. Before creating an
array, check that the drives to be added have no data or that the data does not
need to be retained.
● The LSI SAS3108 supports SAS/SATA HDDs and SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● 8.2.1 RAID 0, 1, 5, 6, 10, 50, and 60 lists the number of drives required by each
RAID level.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 Access the Create Virtual Drive screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Configuration Management and press Enter.
3. Select Create Virtual Drive and press Enter.
The RAID array configuration screen is displayed, as shown in Figure 8-51.
Table 8-27 describes the parameters on the screen.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware version
of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
– Direct:
Drive Cache Cache policy of physical drives (valid only for drives
with cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
NOTE
At least two spans must be created for a RAID 60 array. A maximum of eight spans can be
created.
----End
● HDDs and SSDs cannot be used as hot spare drives for each other.
● HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are
SAS drives, SATA drives can be used as dedicated hot spare drives. If the member
drives are SATA drives, SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot
be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 8.3.1 Logging In to
the Configuration Utility.
Step 2 Configure a global hot spare drive.
1. Press Ctrl+N to switch to the PD Mgmt tab.
2. Select the drive to be configured.
3. Press F2 and choose Make Global HS from the shortcut menu.
4. Press Enter.
The hot spare drive status changes to Hotspare.
----End
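NOTE
As an alternative to the Configuration Utility, a global hot spare drive can also be
configured with the StorCLI tool (see 8.11.1 Downloading and Installing StorCLI). The
following is only a sketch; the controller ID (/c0) and the enclosure:slot address (252:3)
are example values that must be replaced with the actual IDs of your server.
# Configure the drive in enclosure 252, slot 3 as a global hot spare
storcli64 /c0/e252/s3 add hotsparedrive
# Verify the new drive state (GHS for a global hot spare)
storcli64 /c0/e252/s3 show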
Procedure
NOTE
If SAS drives are grouped as a RAID array, a SATA drive can be used as a dedicated hot
spare drive. If SATA drives are grouped as a RAID array, the SAS drive cannot be used as the
dedicated hot spare drive.
Step 1 Access the Configuration Utility main screen. For details, see 8.3.1 Logging In to
the Configuration Utility.
Step 2 Configure a dedicated hot spare drive.
1. On the VD Mgmt tab, select a RAID.
2. Press F2 and select Manage Ded. HS from the shortcut menu that is
displayed. See Figure 8-57.
3. Press Enter.
The screen for configuring the dedicated hot spare drive is displayed.
4. Select a hot spare drive and press Enter.
[X] is displayed on the left of the selected hot spare drive, as shown in Figure
8-58.
----End
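NOTE
A dedicated hot spare drive can also be configured with the StorCLI tool described in
8.11. This is a sketch only; the controller ID, drive address, and drive group number
(dgs=0) are example values.
# Configure the drive in enclosure 252, slot 5 as a dedicated hot spare for drive group 0
storcli64 /c0/e252/s5 add hotsparedrive dgs=0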
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 8.3.1 Logging In to
the Configuration Utility.
Step 2 Delete a hot spare drive.
1. On the PD Mgmt tab, choose the hot spare drive to be deleted.
2. Press F2 and choose Remove Hot Spare drive from the shortcut menu, as
shown in Figure 8-60.
3. Press Enter.
The drive status changes to UG as shown in Figure 8-61.
----End
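NOTE
The same operation can be performed with the StorCLI tool (example values; adjust the
controller ID and drive address to your configuration).
# Remove the hot spare attribute so that the drive returns to the UG state
storcli64 /c0/e252/s5 delete hotsparedrive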
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.3.1 Logging In to the Configuration Utility.
Step 2 Expand the available space of a virtual drive.
1. On the VD Mgmt tab, select the virtual drive to be expanded.
2. Press F2 and choose Expand VD Size from the shortcut menu.
----End
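NOTE
If the drive group has free space, the virtual drive can also be expanded with the StorCLI
tool. This is a sketch only; the controller ID, virtual drive ID, and size are example values,
and data should be backed up first.
# Expand virtual drive 0 on controller 0 by 100 GB of free drive group space
storcli64 /c0/v0 expand size=100GB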
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 8.3.1 Logging In to
the Configuration Utility.
----End
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 8.3.1 Logging In to
the Configuration Utility.
Step 2 Press Ctrl+P to switch to the Properties tab.
Step 3 View RAID controller card properties.
Figure 8-66 and Figure 8-67 show RAID controller card properties. Table 8-29
describes key parameters.
----End
NOTICE
● If you replace a RAID controller card after a RAID array has been configured
on the server, the RAID configuration will be identified as a foreign
configuration (Foreign Config). If you clear the foreign configuration, the RAID
configuration will be lost. Exercise caution when performing this operation.
● If the number of faulty or missing drives exceeds the maximum number
allowed by the RAID array, the RAID array cannot be imported.
● To avoid configuration import failure, replace the original RAID controller card
with a new card of the same type.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.3.1 Logging In
to the Configuration Utility.
Step 2 View foreign configurations.
Press Ctrl+P to switch to the Foreign View tab.
NOTE
If foreign configurations exist, the Foreign View tab is displayed on the Configuration
Utility, as shown in Figure 8-68.
----End
Scenarios
If a CacheCade RAID key is configured for the LSI SAS3108, you can create a
CacheCade virtual drive on the Configuration Utility. The CacheCade virtual drive
serves as level-2 cache for the existing RAID.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.3.1 Logging In
to the Configuration Utility.
On the PD Mgmt screen, check that at least one idle SSD is available.
3. Set the name, RAID level, and read/write policy of the CacheCade virtual drive
based on Table 8-30.
4. Under Select SSD, select the SSDs to be added to the CacheCade virtual drive,
and press Enter.
If [X] is displayed before an SSD, the SSD is selected.
----End
Scenarios
You can enable SSD caching for a common virtual drive in any of the following
ways:
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.3.1 Logging In
to the Configuration Utility.
The list under Virtual Drives shows the virtual drives for which you can enable
SSD caching.
----End
NOTICE
● If the CacheCade has been associated with a virtual drive, remove the
association between the CacheCade and the virtual drive before you delete the
virtual drive. If the CacheCade is not associated with any virtual drive, perform
Step 2.
● To prevent data loss, do not perform offline operations on the drives in the
CacheCade virtual drive.
Procedure
Step 1 Remove the association between the CacheCade and the virtual drive.
1. On the VD Mgmt screen, select SAS3108 and press F2. The screen shown in
Figure 8-75 is displayed.
3. Select the virtual drive and press Enter to clear X, as shown in Figure 8-77.
Figure 8-77 Removing the association between the CacheCade and virtual
drives
NOTE
If multiple virtual drives are associated with the CacheCade, remove the association
between the CacheCade and each virtual drive in the same way.
4. Select OK and press Enter.
After the association is removed, the VD Mgmt screen is displayed.
----End
Step 2 Select Drive Security and press Enter. In the dialog box displayed, select Enable
Security and press Enter, as shown in Figure 8-80.
----End
Step 3 In Identifier, enter the new password name, for example, key2.
Step 4 Press the Enter key to set Use the Existing Security Key to [ ].
NOTE
Step 7 Press the Enter key to set Pause For Password to [X].
NOTE
Step 11 Enter the original password, select OK, and press Enter.
Information shown in Figure 8-88 is displayed.
----End
NOTE
● HDDs and SSDs cannot be used as hot spare drives for each other.
● HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are
SAS drives, SATA drives can be used as dedicated hot spare drives. If the member
drives are SATA drives, SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot
be configured as a hot spare drive.
● The type of hot spare drives must be the same as that of the member drives in the RAID
array, and the capacity of hot spare drives must be greater than or equal to the
maximum capacity of the member drives in the RAID array.
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Procedure
NOTE
If SAS drives are grouped as a RAID array, a SATA drive can be used as a dedicated hot
spare drive. If SATA drives are grouped as a RAID array, the SAS drive cannot be used as the
dedicated hot spare drive.
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
Step 2 Access the Drive Management screen.
1. Select Main Menu and press Enter.
2. Select Drive Management, and press Enter.
3. Select a drive and press Enter. The drive detail screen is displayed, as shown
in Figure 8-95.
----End
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 Access the Drive Management screen.
1. Select Main Menu and press Enter.
2. Select Drive Management, and press Enter.
3. Select a drive and press Enter. The drive detail screen is displayed, as shown
in Figure 8-96.
----End
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 8.4.1 Logging In to the Management Screen.
Step 2 Access the Virtual Drive Management screen.
1. Select Main Menu and press Enter.
2. Select Virtual Drive Management and press Enter.
3. Select a virtual drive and press Enter.
----End
NOTICE
● Only RAID 0, RAID 1, RAID 5, and RAID 6 support capacity expansion through
drive addition.
● RAID 10, 50, and 60 do not support capacity expansion through drive addition.
● If a RAID array contains two or more virtual drives, its capacity cannot be
expanded through drive addition.
● If a faulty drive exists and the RAID array still contains redundant data (for
example, a faulty drive exists during RAID 1 array expansion), the expansion
continues. When the expansion is complete, replace the faulty drive and
rebuild the RAID array.
Perform capacity expansion through drive addition with caution.
Prerequisites
Conditions
The following conditions must be met before you add a drive for RAID array
capacity expansion on a server:
● The server has drives that have not been added to a RAID array.
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● You have logged in to the Configuration Utility.
● You have logged in to the Configuration Utility. For details, see 8.4.1 Logging
In to the Management Screen.
Data
Data preparation is not required for this operation.
Procedure
Step 1 Access the Virtual Drive Management screen.
1. Select Main Menu and press Enter.
2. Select Virtual Drive Management and press Enter.
The Virtual Drive Management screen is displayed.
The RAID level must be the same as the original RAID level.
6. Select Choose the Operation and press Enter.
The drive selection screen is displayed.
NOTE
– The types and specifications of the drive to be added must be the same as those of
the member drives in the RAID array. The capacity of the drive must be greater
than or equal to the capacity of the smallest drive in the RAID array.
– During capacity expansion, you need to add two drives to RAID 1 each time, and
only one drive to RAID 0, RAID 5, or RAID 6 each time.
– A RAID controller card cannot be used to expand the capacity of two or more RAID
arrays at the same time.
7. Select the drive to be added, select Apply Changes, and press Enter.
A confirmation dialog box is displayed.
8. Select Confirm and press Enter.
9. Select Yes and press Enter.
The system displays a message indicating that the operation is successful.
10. Select OK and press Enter.
The screen shown in Figure 8-102 is displayed.
----End
NOTE
The capacity expansion will be interrupted by a server restart, but it continues after the
server is restarted.
● If a faulty drive exists and the RAID array still contains redundant data (for
example, a faulty drive exists during RAID 1 level migration), the migration
continues. When the migration is complete, replace the faulty drive and
rebuild the RAID array.
Prerequisites
Conditions
The following conditions must be met before you migrate a RAID level:
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● The current number of drives meets the requirements of the target RAID level.
● You have logged in to the Configuration Utility. For details, see 8.4.1 Logging
In to the Management Screen.
The LSI SAS3108 supports the following RAID level migration modes:
Table 8-5 lists the minimum numbers of drives to be added for RAID level
migration.
NOTE
● If a RAID array contains two or more VDs, it does not support RAID level migration.
● The RAID controller card does not allow you to reconfigure two RAID arrays (that is,
reconfigure virtual drives, including adding drives or migrating RAID levels) at the same
time. Perform operations on the next RAID array after the current process is complete.
● Only drives of the same type and specifications as the member drives of the RAID array
can be added.
● To avoid data loss, back up data in the current RAID array before RAID level migration.
● Only RAID arrays in the Optimal state can be migrated.
● When migrating a RAID level, ensure that the RAID capacity after the migration is
greater than or equal to that before the migration. For example, to migrate RAID 0 to
RAID 1, you need to add drives. Otherwise, the migration fails.
Procedure
Step 1 Access the Virtual Drive Management screen.
1. On the main screen, select Main Menu and press Enter.
2. Select Virtual Drive Management and press Enter.
The virtual drive option screen is displayed, as shown in Figure 8-104.
Table 8-5 lists the minimum numbers of drives to be added for various RAID levels.
3. Select Apply Changes and press Enter.
A confirmation dialog box is displayed.
4. Select Confirm and press Enter.
5. Select Yes and press Enter.
The system displays a message indicating that the operation is successful.
6. Press Enter.
The screen shown in Figure 8-107 is displayed.
Step 4 Migrate the RAID level.
1. Select Start Operation and press Enter.
The system displays a message indicating that the operation is successful.
2. Select OK and press Enter.
Step 5 Check the migration result.
1. Go to the Virtual Drive Management screen and check the result. The screen
shown in Figure 8-109 indicates that the migration is going on.
NOTE
– If the server is restarted during migration, the task continues after the server is
restarted.
– After migration is complete, the background initialization of the RAID array
automatically starts. The background initialization is a self-check of the RAID array
and does not result in configuration data loss.
2. Select a virtual drive and press Enter. The screen shown in Figure 8-110 is
displayed. The migration is successful when the RAID level changes from
RAID 1 to RAID 5, the RAID array capacity changes, and the value of Status
changes to Optimal.
----End
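NOTE
RAID level migration can also be started with the StorCLI tool described in 8.11. The
following is a sketch only; the controller ID, virtual drive ID, target RAID level, and drive
address are example values, and the preconditions listed above still apply.
# Migrate virtual drive 0 from RAID 1 to RAID 5 by adding the drive in enclosure 252, slot 4
storcli64 /c0/v0 start migrate type=raid5 option=add drives=252:4
# Query the migration progress
storcli64 /c0/v0 show migrate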
Procedure
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
----End
Screen Introduction
On the Main Menu screen, select a drive under the Drive Management node and
press Enter to display the drive operation menu, as shown in Figure 8-112. Table
8-32 describes the operations that are available.
Operation Description
Mark drive as Missing Deletes the offline drive from the RAID array.
Rebuilding RAID
Step 1 On the menu, choose Rebuild and press Enter.
----End
NOTE
To prevent data inconsistency of a RAID array, do not directly place an offline drive online.
Choose Rebuild and add the offline drive to the RAID array.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
Parameter Description
PCI Slot Number PCI slot number of the RAID controller card.
Alarm Control This parameter does not take effect if the LSI SAS3108 is
not configured with a buzzer.
----End
NOTICE
● If you replace a RAID controller card after a RAID array has been configured
on the server, the RAID configuration will be identified as a foreign
configuration (Foreign Config). If you clear the foreign configuration, the RAID
configuration will be lost. Exercise caution when performing this operation.
● If the number of faulty or missing drives exceeds the maximum number
allowed by the RAID array, the RAID array cannot be imported.
● To avoid configuration import failure, replace the original RAID controller card
with a new card of the same type.
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 8.4.1 Logging In
to the Management Screen.
Step 2 On the main screen, select Main Menu and press Enter.
Step 3 Select Configuration Management and press Enter.
Step 4 Select Manage Foreign Configuration and press Enter.
The foreign configuration management screen is displayed.
----End
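NOTE
Foreign configurations can also be viewed, imported, or cleared with the StorCLI tool (a
sketch with an example controller ID; see 8.11 for details).
# List the foreign configurations on controller 0
storcli64 /c0/fall show
# Import all foreign configurations
storcli64 /c0/fall import
# Clear all foreign configurations (the RAID configuration they describe is lost)
storcli64 /c0/fall delete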
Step 10 Select OK to finish the configuration and return to the previous screen.
----End
Scenarios
If a CacheCade RAID key is configured for the LSI SAS3108, you can create a
CacheCade virtual drive on the Configuration Utility. The CacheCade virtual drive
serves as level-2 cache for the existing RAID.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
Logical Sector Size Sector size of the CacheCade virtual drive. This parameter
cannot be manually modified. It automatically changes based on the selected SSD.
2. Set the name, RAID level, and read/write policy of the CacheCade virtual
drive. See Table 8-35.
3. Under SELECT SSD DRIVES, select the SSDs to be added to the CacheCade
virtual drive, and press Enter.
If [X] is displayed before an SSD, the SSD is selected.
----End
Scenarios
You can enable SSD caching for a common virtual drive in any of the following
ways:
● If a CacheCade RAID key is configured, the Configuration Utility asks you
whether to enable SSD Caching during the creation of a common virtual
drive.
● If a CacheCade RAID key is configured, you can modify the common virtual
drive properties to enable SSD caching.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 8.4.1 Logging In to the
Management Screen.
Step 2 Enable SSD caching by modifying the common virtual drive properties.
1. Select Main Menu and press Enter.
2. Select Virtual Drive Management.
3. Select the desired virtual drive and press Enter.
The virtual drive properties screen is displayed, as shown in Figure 8-120.
Table 8-36 describes the parameters on the screen.
NOTICE
● If the CacheCade has been associated with a virtual drive, remove the
association between the CacheCade and the virtual drive before you delete the
virtual drive. If the CacheCade is not associated with any virtual drive, perform
Step 2.
● To prevent data loss, do not perform offline operations on the drives in the
CacheCade virtual drive.
Procedure
Step 1 Remove the association between the CacheCade and the virtual drive.
1. On the Virtual Drive Management screen, select the target virtual drive and
press Enter.
2. Select Operation and press Enter.
The screen shown in Figure 8-121 is displayed.
NOTE
If multiple virtual drives are associated with the CacheCade, remove the association
between the CacheCade and each virtual drive in the same way.
----End
Procedure
Step 1 Choose Main Menu > Controller Management > Advanced Controller
Management. The Advanced Controller Management screen is displayed, as
shown in Figure 8-123.
Step 3 Press the Enter key to set Local Key Management (LKM) to [X].
NOTE
Step 5 In Security Key Identifier, enter the password name, for example, key1.
Step 6 In Security Key, enter the password.
Step 7 In Confirm, enter the password again for confirmation.
NOTE
Step 8 Press the Enter key to set Pause for Password at Boot Time to [ ].
NOTE
Step 9 Select I Recorded the Security Settings for Future Reference and press Enter.
Step 10 Select Enable Drive Security and press Enter.
A confirm dialog box is displayed.
Step 11 Select Confirm and press Enter.
Step 12 Select Yes and press Enter.
Information is displayed indicating that the operation is successful.
Step 13 Select OK and press Enter.
----End
Step 3 Press the Enter key to select Change Current Security Settings.
NOTE
The screen shown in Figure 8-130 is displayed. Table 8-39 describes the
parameters.
Parameter Description
Suggest Security Key Use the encryption key password created by the system.
Step 5 Press the Enter key to deselect Use the Existing Security Key Identifier.
NOTE
Step 6 Select Enter a New Security Key Identifier and press Enter.
The input box is displayed, as shown in Figure 8-131.
Step 7 Enter the name of the new password, for example, key2, as shown in Figure
8-131, and press Enter.
Step 8 Select Enter Existing Security Key and press Enter. In the dialog box displayed,
enter the current password and press Enter.
Step 9 Select Enter a New Security Key and press Enter. In the dialog box displayed,
enter the new password and press Enter.
Step 10 Select Confirm and press Enter. In the dialog box displayed, enter the new
password again and press Enter.
NOTE
Step 11 Press the Enter key to set Pause for Password at Boot Time to [X].
NOTE
Step 12 Select Password and press Enter. In the dialog box displayed, enter the new RAID
controller card password, and press Enter.
Step 13 Select Confirm and press Enter. In the dialog box displayed, enter the new RAID
controller card password again, and press Enter.
NOTE
Step 14 Press the Enter key to set I Recorded the Security Settings for Future Reference
to [X].
Step 15 Select Change Security Key and press Enter.
A confirm dialog box is displayed.
Step 16 Select Confirm and press Enter.
Step 17 Select Yes and press Enter.
Information is displayed indicating that the operation is successful.
Step 18 Select OK and press Enter.
----End
Step 3 Press the Enter key to select the JBOD disk to be encrypted.
Step 4 Select OK and press Enter.
A confirm dialog box is displayed.
Step 5 Select Confirm and press Enter.
Step 6 Select Yes and press Enter.
Information is displayed indicating that the operation is successful.
Step 7 Select OK and press Enter.
----End
Step 3 If Secured is Yes, as shown in Figure 8-136, the RAID array is encrypted.
----End
Step 1 Choose Main Menu > Configuration Management > Create Virtual Drive. The
Create Virtual Drive screen is displayed, as shown in Figure 8-138.
Step 2 Set Select Drives From to Free Capacity, as shown in Figure 8-139.
Step 4 Select the RAID array and press Enter, as shown in Figure 8-141.
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as follows:
● No Read Ahead: disables the Read Ahead function.
● Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read Ahead
for HDDs and No Read Ahead for SSDs.
Write Policy Options for writing data on a virtual drive. The write policy
of virtual drives varies with the firmware version of the LSI
SAS3108 RAID controller card.
● If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the following
write policies are supported:
– Write Back: When the cache receives all data, the
RAID controller card signals the host that the data
transmission is complete.
– Write Through: When the drive subsystem receives
all data, the RAID controller card signals the host
that the data transmission is complete.
– Write Back with BBU: The controller card
automatically switches to the Write Through mode
when the controller card has no Battery Backup Unit
(BBU), the BBU is being charged or discharged, the BBU is
faulty, or the Pinned/Preserved cache reaches 50% of
the physical cache. It is recommended that you set
the write policy to this mode.
● If the firmware version of the LSI SAS3108 RAID
controller card is 4.650.00-6121 or later, the following
write policies are supported:
– Write Back: The controller card automatically
switches to the Write Through mode when the
controller card has no Battery Backup Unit (BBU), the
BBU is being charged or discharged, the BBU is faulty, or
the Pinned/Preserved cache reaches 50% of the
physical cache. It is recommended that you set the
write policy to this mode.
– Write Through: When the drive subsystem receives
all data, the RAID controller card signals the host
that the data transmission is complete.
– Always Write Back: When the cache receives all
data, the RAID controller card signals the host that
the data transmission is complete.
Default value: Write Back
NOTE
The Always Write Back mode is not recommended, because the DDR write data of
the RAID controller card will be lost if the server is powered off while no
supercapacitor is installed or the supercapacitor is being charged.
I/O Policy I/O policy of the virtual drive. The policy does not affect
the Read Ahead function. The options are as follows:
● Direct:
– In a read scenario, data is directly read from drives.
(If Read Policy is set to Read Ahead, data is read
from the RAID cache.)
– In a write scenario, data is written into the RAID
cache. (If Write Policy is set to Write Through, data
is directly written into drives.)
● Cached: Data is read from or written to the cache. Use
this option only when configuring CacheCade 1.1.
Default value: Direct
Drive Cache Cache policy of physical drives (valid only for drives with
cache).
● Unchanged: The current drive cache policy remains
unchanged.
● Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off unexpectedly,
data in the cache will be lost.
● Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Emulation Type Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
● Default: The logical drive sector is 512 B/512 B.
● None: The logical drive sector is 512 B/512 B.
● Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
● Default: The logical drive sector is 512 B/4 KB.
● None: The logical drive sector is 512 B/512 B.
● Force: The logical drive sector is 512 B/4 KB.
----End
NOTE
A RAID array supports a maximum of 16 virtual drives. To create another virtual drive,
repeat Step 1 to Step 11.
8.7 Troubleshooting
This section describes solutions to drive faults, RAID controller card faults, and
battery or capacitor faults. For other situations, see the Huawei Server
Maintenance Guide.
Symptoms
A drive is faulty if any of the following occurs:
● The drive fault indicator is on.
● After the server is powered on, the drive indicator is off.
● A drive fault alarm was generated.
Solution
Step 1 Determine the slot number of the faulty drive.
NOTE
If a drive in JBOD mode is in UBAD status, the Fault indicator on the drive will be lit and
the iBMC will generate an alarm.
● Locate the faulty drive based on the fault indicator, which is steady orange.
For details, see the drive numbering section in the user guide of the server
you use.
● Locate the faulty drive based on the iMana/iBMC drive alarm information. For
details, see iMana/iBMC Alarm Handling.
● Locate the faulty drive using the RAID controller card GUI. For details, see
8.8.3 PD Mgmt or 8.9.2.4 Drive Management.
● Locate the faulty drive using the RAID controller card CLI tool. For details, see
8.11.2.29 Querying RAID Controller Card, RAID Array, or Physical Drive
Information.
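The following StorCLI queries are a sketch of the CLI method mentioned above (the
controller ID /c0 is an example value):
# Display a summary of the controller, RAID arrays, and physical drives
storcli64 /c0 show
# Display the detailed state of all physical drives behind controller 0
storcli64 /c0/eall/sall show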
Step 2 Check for and delete the Preserved Cache data from the server.
NOTICE
● If the RAID array containing the Preserved Cache data is invalid (the number of
faulty drives in the RAID array exceeds the maximum number of faulty drives
supported by the RAID array), the RAID array will be deleted when
the Preserved Cache data is deleted.
● If the drive fault is caused by manual removal and installation of the drive in
the RAID array, remove the drive and then delete the Preserved Cache data. By
doing this, the RAID array will not be deleted.
● To clear the Preserved Cache data on the GUI, perform the following steps:
On the VD Mgmt screen, move the cursor to SAS3108 (Bus 0x01, Dev 0x00)
and press F2.
– If Manage Preserved Cache is available, as shown in Figure 8-144, the
Preserved Cache data exists. Delete the Preserved Cache data by referring
to 8.5.6 Discarding Preserved Cache.
NOTICE
Remove the faulty drive and install a new drive. Data can be restored to the new
drive in the following ways, depending on the RAID configuration of the faulty drive:
● If the RAID array has a hot spare drive, copyback will be performed after the
hot spare drive is rebuilt. After data is copied to the new drive, the hot spare
drive restores to the hot backup state.
● If the RAID array supports redundancy and has no hot spare drive, the
newly installed drive automatically rebuilds data. If more than one faulty
drive exists in a RAID array, replace the faulty drives one by one based on the
drive fault time. Replace the next drive only after the current drive data is
rebuilt.
● If the faulty drive is a pass-through drive, replace it.
● If the faulty drive belongs to a RAID array without redundancy (RAID 0),
create RAID 0 again.
– For details about how to create a RAID 0 array in Legacy mode, see 8.3.2
Creating RAID 0.
– For details about how to create a RAID 0 array in UEFI mode, see 8.4.2
Creating RAID 0.
– For details about how to create a RAID 0 array by running commands,
see 8.11.2.8 Creating and Deleting a RAID Array.
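The following is a sketch of the command-line method referenced above (the
controller ID and drive address are example values; see 8.11.2.8 for the
authoritative syntax):
# Re-create a RAID 0 virtual drive from the drive in enclosure 252, slot 2
storcli64 /c0 add vd type=raid0 drives=252:2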
----End
Symptom
After the emergency spare function is enabled for a RAID array that supports
redundancy and has no hot spare drive specified, an available drive of the same
type automatically replaces a member drive in the fail or prefail state and data
is rebuilt to avoid data loss.
If the emergency spare function is not configured for the RAID controller card
and the status of a member drive is displayed as fail or prefail, perform the
following operations.
Solution
Step 1 Use the StorCLI tool. For details, see 8.11.1 Downloading and Installing StorCLI.
Step 2 Run the following command to set the status of the member drives in the RAID
array to offline. For details, see 8.11.2.27 Setting Drive Status.
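The command below is only an example (the controller ID and enclosure:slot
address are placeholders for the actual values of the member drive):
# Set the member drive in enclosure 252, slot 2 to the offline state
storcli64 /c0/e252/s2 set offline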
NOTE
----End
Step 3 Import or clear the foreign configuration as required. For details, see 8.6.8
Importing or Clearing a Foreign Configuration.
● After the foreign configuration is imported, the drive changes to Online state.
● After the foreign configuration is cleared, the drive changes to UG state.
----End
Step 1 Select the drive in Unconfigured Bad state and press Enter.
The drive property screen is displayed.
Step 2 Select Operation and press Enter.
The screen shown in Figure 8-148 is displayed.
Step 5 Import or clear the foreign configuration as required. For details, see 8.6.8
Importing or Clearing a Foreign Configuration.
● After the foreign configuration is imported, the drive changes to Online state.
● After the foreign configuration is cleared, the drive changes to Unconfigured
Good state.
----End
Symptoms
A RAID controller card is faulty if any of the following occurs:
Solution
Step 1 Replace the controller card. Then check whether the alarm is cleared.
For details about how to replace a RAID controller card, see the user guide
delivered with your server.
● If yes, go to Step 2.
● If no, go to Step 3.
Step 2 Import the original RAID information and check whether data can be written to
the drives controlled by the RAID controller.
For details about how to import the original RAID information, see 8.6.8
Importing or Clearing a Foreign Configuration.
● If yes, no further action is required.
● If no, go to Step 3.
Step 3 Contact Huawei technical support.
----End
Solution
Step 1 Go to the Configuration Utility of the controller card and check whether the
supercapacitor state is Optimal:
You can obtain capacitor status information from Battery Status on the
Properties screen.
● If yes, no further action is required.
● If no, go to Step 2.
Step 2 Power off the server, open the chassis cover, and check whether the controller card
is connected to a supercapacitor.
● If yes, go to Step 4.
● If no, go to Step 3.
Step 3 Install the supercapacitor based on the system configuration, and check whether
the supercapacitor state is Optimal.
● If yes, no further action is required.
● If no, go to Step 6.
Step 4 Remove and install the supercapacitor and check whether the supercapacitor state
is Optimal.
● If yes, no further action is required.
● If no, go to Step 5.
Step 5 Replace the supercapacitor, and check whether the supercapacitor state is
Optimal and whether the fault is rectified.
● If yes, no further action is required.
● If no, go to Step 6.
Step 6 Contact Huawei technical support.
----End
Solution
Step 1 Select Some drivers are not healthy and press Enter.
The health status screen is displayed, as shown in Figure 8-151.
----End
NOTE
For description of messages displayed during startup of the RAID controller card and
handling suggestions displayed on the management screen, see A.3 Common Boot Error
Messages for RAID Controller Cards.
8.7.4.1 Error Message: "Some Configured Disks Have Been Removed from
Your System/All of the Disks from Your Previous Configuration Are Gone"
Symptom
When you select Repair the whole platform to open the Critical Message
screen, the message "Some configured disks have been removed from your
system" or "All of the disks from your previous configuration are gone" is
displayed, as shown in Figure 8-152.
Solution
The message indicates that the RAID controller card cannot detect some drives;
the fault is generally not caused by the RAID controller card itself. You are
advised to perform the following operations based on actual needs:
● Power off the server and check whether the cables are properly connected and
whether any drive has been removed. If no abnormalities are found, power on
the server and check whether the alarm is cleared. If the alarm is cleared, the
fault is rectified. If the alarm persists, contact technical support.
● Ignore the message and perform the following steps to go to the RAID
controller card management screen:
a. On the screen shown in Figure 8-152, select Enter Your Input Here and
press Enter.
An input box is displayed.
b. Type c, select Yes or Ok, and press Enter.
Symptom
When you open the Critical Message screen, the message "There are offline or
missing virtual drives with preserved cache" is displayed, as shown in Figure
8-153.
Solution
Step 1 On the screen shown in Figure 8-153, select Enter Your Input Here and press
Enter.
The screen shown in Figure 8-154 is displayed.
Step 6 Select AVAGO MegaRAID<SAS3108> driver Health Protocol Utility and press
Enter.
The Dashboard View screen is displayed, as shown in Figure 8-157.
Step 12 The Device Manager screen is displayed. Check whether the message "The
platform is healthy" is displayed on the screen.
● If yes, no further action is required.
● If no, contact Huawei technical support.
----End
Symptom
On the Driver Healthy Protocol Utility screen, the message "The following VDs
have missing disks" is displayed.
Solution
Step 1 Repair the RAID controller card.
----End
Symptom
The message "Memory/battery problems were detected. The adapter has recovered,
but cached data was lost" is displayed on the Driver Healthy Protocol Utility screen.
Solution
Step 1 Select Enter Your Input Here and press Enter.
An input box is displayed.
Step 2 Enter any content, select Yes, and press Enter.
The message "Critical Message handling completed. Please exit" is displayed.
Step 3 Restart the server and check whether the message "The platform is healthy" is
displayed on the Device Manager screen.
● If the message is displayed, no further action is required.
● If the message is not displayed, contact Huawei technical support.
----End
Solution
Set Personality Mode to RAID-Mode as follows:
Step 1 Log in to the RAID controller card setup utility. For details, see 8.3.1 Logging In to
the Configuration Utility.
Step 2 Press Ctrl+P to switch to the Ctrl Mgmt screen, as shown in Figure 8-159.
Related Operations
For details about how to set the JBOD mode, see Configuring JBOD Mode.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility of the LSI SAS3108 controller card.
During server startup, press Ctrl+R when the message "Press <Ctrl><R> to Run
MegaRAID Configuration Utility" is displayed.
----End
8.8.2 VD Mgmt
The VD Mgmt screen displays controller card information, such as drive groups
and unconfigured drives, and enables controller card and virtual drive (VD)
configurations.
Screen Description
Figure 8-163 shows the VD Mgmt screen. Table 8-42 and Table 8-43 describe
the parameters in the left and right areas of the screen.
Parameter Description
Virtual Drives Basic information of VDs, including the drive ID, name,
and capacity.
NOTE
Virtual drives in the same drive group must have the same
RAID level.
Available size Available space of a drive group and the space occupied
by each VD.
Hot spare drives Basic information of hot spare drives, including the IDs,
types, and status of hot spare drives.
NOTE
Select an item marked with + and press → to expand the item. Select an item marked with
- and press ← to fold the item.
Procedure
Step 1 Select the managed object and press F2.
----End
8.8.2.1 SAS3108
Add or delete RAID configurations, import or clear foreign configurations, enable
or disable data protection, and configure advanced functions.
Screen Description
On the VD Mgmt screen, select SAS3108 and press F2. The menu shown in
Figure 8-164 is displayed. Table 8-44 describes the operation options on the
menu.
Operation Description
----End
On the menu shown in Figure 8-164, choose Foreign Config and press Enter.
----End
Screen Introduction
On the VD Mgmt screen, select the drive under the Drives node and press F2. The
menu shown in Figure 8-166 is displayed. Table 8-45 describes the operation
options on the menu.
Add New VD Creates a RAID array if the current drive group has free
space.
Manage Ded. HS Manages the dedicated hot spare drive of the current drive
group.
Virtual drives in the same drive group must have the same RAID level.
Parameter Description
Write Policy Options for writing data on a virtual drive. The write
policy of virtual drives varies with the firmware
version of the LSI SAS3108 RAID controller card.
– If the firmware version of the LSI SAS3108 RAID
controller card is 4.270.00-4382 or earlier, the
following write policies are supported:
I/O Policy Options for data I/O of special virtual drives. This
policy does not affect cache prefetch. The options
are as follows:
– Direct:
----End
Screen Description
On the VD Mgmt screen, select the virtual drive under Virtual Drive and press F2.
The menu shown in Figure 8-170 is displayed. Table 8-47 describes the operation
options on the menu.
Operation Description
Expand VD Size Expands the capacity of the current virtual drive if the
corresponding drive group has free space.
The basic information about the virtual drive is displayed in the right pane, as
shown in Figure 8-170. Table 8-48 describes the parameters.
NOTICE
Initialization will cause data loss. Exercise caution when performing this operation.
Step 1 On the menu shown in Figure 8-170, choose Initialization and press Enter.
----End
Checking Consistency
Step 1 On the menu shown in Figure 8-170, choose Consistency Check and press Enter.
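NOTE
A consistency check can also be started and monitored with the StorCLI tool (a sketch
with example controller and virtual drive IDs):
# Start a consistency check on virtual drive 0
storcli64 /c0/v0 start cc
# Query the consistency check progress
storcli64 /c0/v0 show cc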
----End
General Virtual drive basic information, including the virtual drive name,
capacity, strip size, and status.
You can change the virtual drive name.
I/O Policy Options for data I/O of special virtual drives. This policy
does not affect cache prefetch. The options are as follows:
● Direct:
– In a read scenario, data is directly read from drives.
(If Read Policy is set to Read Ahead, data is read from the
RAID cache.)
– In a write scenario, data is written into the RAID
cache. (If Write Policy is set to Write Through, data
is directly written into drives.)
● Cached: Data is read from or written to the cache. Use
this option only when configuring CacheCade 1.1.
Reason for difference in Write Policy -
----End
8.8.2.4 Drives
Manage member drives of a drive group, for example, rebuild data, copy back
data, locate drives, and make drives offline.
Screen Introduction
On the VD Mgmt screen, select the drive under the Drives node and press F2. The
menu shown in Figure 8-174 is displayed. Table 8-51 describes the operation
options on the menu.
Operation Description
Rebuilding RAID
Step 1 On the menu, choose Rebuild and press Enter.
The Rebuild menu is displayed.
Step 2 Select an operation type and press Enter.
A confirmation dialog box is displayed.
Step 3 Select YES and press Enter.
----End
Step 3 Select the source drive for copyback and press Enter.
NOTE
----End
Locating a Drive
Step 1 On the menu, choose Locate and press Enter.
----End
8.8.3 PD Mgmt
The PD Mgmt screen displays basic information about existing physical drives and
allows you to manage physical drives.
Screen Introduction
The PD Mgmt screen is shown in Figure 8-176, Figure 8-177, and Figure 8-178.
The drive properties are displayed in the right pane. For details about the
properties, see Table 8-52.
Figure 8-179 shows the drive management menu on the PD Mgmt screen. Table
8-53 describes the menu options.
Rebuilding RAID
NOTE
To prevent data inconsistency of a RAID array, do not directly place an offline drive online.
Choose Rebuild and add the offline drive to the RAID array.
----End
Step 3 Select the source drive for copyback and press Enter.
NOTE
----End
Locating a Drive
Step 1 On the menu, choose Locate and press Enter.
The Locate menu is displayed.
Step 2 Select the operation and press Enter.
----End
The PAGE-2 screen is displayed, as shown in Figure 8-182. You can view the erase
progress of the drive.
----End
Pre-removing a Drive
On the menu, choose Prepare for Removal and press Enter.
Screen Description
Figure 8-183 and Figure 8-184 show the Ctrl Mgmt screen. Table 8-54 describes
the parameters on the screen.
Alarm Control Onboard buzzer status.
● Enable
● Disable
● Silence
Default value: Enable
This parameter does not take effect if the LSI SAS3108 is not configured with a buzzer.
Boot Device System boot option. You can set an existing RAID as
the boot device.
Default value: None.
Cache flush Interval Interval (in seconds) for cache data updates.
Default value: 4
Emergency Spare Emergency spare drive type when a drive fault occurs.
● None
● UG
● GHS
● UG & GHS
Enable Emergency for SMARTer Indicates whether to configure a hot spare drive for a
drive where a SMART error is detected.
Default value: Disabled
----End
Step 3 To change a drive in JBOD mode to the Ugood state, select the drive and press F2.
Then, choose Make unconfigured good and press Enter.
----End
8.8.5 Properties
This screen displays LSI SAS3108 basic information, including product information
and controller status.
Screen Overview
Figure 8-187 and Figure 8-188 show the Properties screen. Table 8-55 describes
the parameters on the screen.
Screen Overview
Figure 8-189 shows the Foreign View screen.
8.8.7 Exit
The Exit screen allows you to exit the Configuration Utility.
Screen Overview
Press Esc on the VD Mgmt, PD Mgmt, Ctrl Mgmt, Properties, or Foreign View
screen. The Exit screen is displayed.
Procedure
Step 1 Set the EFI mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
Step 2 Log in to the management screen of the LSI SAS3108.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select the LSI SAS3108 and press Enter.
The main screen is displayed, as shown in Figure 8-191. Table 8-56 describes the
parameters on the screen.
Main Menu Specifies the main menu of the RAID controller card.
All operations on the RAID controller card are
available.
View Server Profile Displays and manages RAID controller card properties.
----End
Screen Description
Figure 8-192 shows the Main Menu screen. Table 8-57 describes the parameters
on the screen.
Screen Description
Figure 8-193 shows the Configuration Management screen. Table 8-58
describes the parameters on the screen.
Create Profile Based Virtual Drive Creates a RAID array quickly. The RAID array
parameters are automatically configured.
----End
Virtual Drive Size Capacity of the virtual drive. The default value is the
current largest capacity supported.
Read Policy Read policy of the virtual drive. The options are as
follows:
– No Read Ahead: disables the Read Ahead function.
– Read Ahead: enables the Read Ahead function. The
controller pre-reads sequential data or the data
predicted to be used and saves it in the cache.
Default value: Read Ahead
NOTE
To achieve optimal disk performance, set the policy to Read
Ahead for HDDs and No Read Ahead for SSDs.
Parameter Description
Write Policy: Options for writing data on a virtual drive. The write policy of virtual drives varies with the firmware version of the LSI SAS3108 RAID controller card.
– If the firmware version is 4.270.00-4382 or earlier, the Write Back, Write Through, and Write Back with BBU policies are supported.
– If the firmware version is 4.650.00-6121 or later, the Write Back, Write Through, and Always Write Back policies are supported.
For details about each policy, see the Default Write Cache Policy description later in this chapter.
Parameter Description
I/O Policy: I/O policy of the virtual drive. The policy does not affect the Read Ahead function. The options are as follows:
– Direct: Reads are not buffered in the controller cache; data is transferred to the cache and the host concurrently.
– Cached: All reads are buffered in the controller cache.
Default value: Direct
Drive Cache: Cache policy of physical drives (valid only for drives with a cache).
– Unchanged: The current drive cache policy remains
unchanged.
– Enable: Data is cached on drives to improve write
performance. However, if no protection mechanism is
available when the system is powered off
unexpectedly, data in the cache will be lost.
– Disable: Data is not cached on drives during a write
process. Data will not be lost when the system is
powered off unexpectedly.
Default value: Unchanged
Parameter Description
Emulation Type: Sets the logical drive sector size reported to the OS.
If the member drive is 512 B/512 B:
– Default: The logical drive sector is 512 B/512 B.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
If the member drive is 512 B/4 KB:
– Default: The logical drive sector is 512 B/4 KB.
– None: The logical drive sector is 512 B/512 B.
– Force: The logical drive sector is 512 B/4 KB.
Parameter Description
----End
The RAID array configuration screen is displayed, as shown in Figure 8-196. Table
8-61 describes the parameters on the screen.
Parameter Description
Create Dedicated Hot Spare: Whether to create dedicated hot spare drives for the RAID array.
----End
Figure 8-197 shows the displayed page. Table 8-62 describes the parameters.
----End
----End
----End
Screen Description
Figure 8-200 shows the Controller Management screen. Table 8-63 describes
the parameters on the screen.
Parameter Description
Parameter Description
----End
Alarm Control This parameter does not take effect if the LSI SAS3108 is not
configured with a buzzer.
Parameter Description
----End
To change a drive in JBOD mode to the Ugood state, select the drive and press
Enter. Then, choose Make Unconfigured Good from Select operation and press
Go, as shown in Figure 8-205.
----End
Screen Description
Figure 8-206 shows the Virtual Drive Management screen that lists the virtual
drives.
Select a virtual drive and press Enter. The drive detail screen is displayed, as
shown in Figure 8-207. Table 8-66 describes the parameters on the screen.
Parameter Description
----End
Step 2 Select the member drive to be viewed and set its status to Enabled.
Step 3 Select View Drive Properties and press Enter.
The drive properties screen is displayed, as shown in Figure 8-210. Table 8-68
describes the parameters on the screen.
Parameter Description
----End
The screen shown in Figure 8-211 is displayed. Table 8-69 describes the
parameters on the screen.
Default Write Cache Policy: Default write caching policy. The write policy of virtual drives varies with the firmware version of the LSI SAS3108 RAID controller card.
● If the firmware version of the LSI SAS3108 RAID controller
card is 4.270.00-4382 or earlier, the following write policies
are supported:
– Write Back: When the cache receives all data, the RAID
controller card signals the host that the data transmission
is complete.
– Write Through: When the drive subsystem receives all
data, the RAID controller card signals the host that the
data transmission is complete.
– Write Back with BBU: The controller card automatically
switches to the Write Through mode when the controller
card has no Battery Backup Unit (BBU), the BBU is on
charge or discharge, the BBU is faulty, or the Pinned/
Preserved cache reaches 50% of the physical cache. It is
recommended that you set the write policy to this mode.
● If the firmware version of the LSI SAS3108 RAID controller
card is 4.650.00-6121 or later, the following write policies are
supported:
– Write Back: The controller card automatically switches to
the Write Through mode when the controller card has no
Battery Backup Unit (BBU), the BBU is on charge or
discharge, the BBU is faulty, or the Pinned/Preserved cache
reaches 50% of the physical cache. It is recommended that
you set the write policy to this mode.
– Write Through: When the drive subsystem receives all
data, the RAID controller card signals the host that the
data transmission is complete.
– Always Write Back: When the cache receives all data, the
RAID controller card signals the host that the data
transmission is complete.
NOTE
The Always Write Back mode is not recommended because DDR write
data of the RAID controller card will be lost if the server is powered off
and the supercapacitor is not installed or being charged.
----End
Screen Description
Figure 8-212 shows the Drive Management screen, which lists the drives.
NOTE
Select a drive and press Enter. The drive detail screen is displayed, as shown in
Figure 8-213. Table 8-70 describes the parameters on the screen.
Parameter Description
----End
----End
This operation will delete data on the drive. Exercise caution when performing this
operation.
----End
Step 2 On the screen shown in Figure 8-212, select the drive to be set and press Enter.
----End
Parameter Description
Parameter Description
FDE Capable: Whether the drive supports the Full Disk Encryption (FDE) technology.
----End
Screen Introduction
Figure 8-217 shows the Hardware Components screen. Table 8-72 describes the
parameters on the screen.
Step 6 Under TEMPERATURE SENSOR, select Select Temperature Sensor and press
Enter to select a temperature sensor.
Temperature Sensor Status shows the status of the selected sensor.
Step 7 Under FAN, select Select Fan and press Enter to select a fan sensor.
Fan Status shows the status of the selected sensor.
Step 8 Under POWER SUPPLY, select Select Power Supply and press Enter to select a
power sensor.
Power Supply Status shows the status of the selected sensor.
----End
Screen Description
The View Server Profile screen is displayed, as shown in Figure 8-220. Table 8-73
describes the parameters on the screen.
Parameter Description
UEFI Spec Version: Specifies the UEFI specification version supported by the server.
Virtual Drive Management: Manages virtual drives. For details, see 8.9.2.3 Virtual Drive Management.
Screen Introduction
Figure 8-221 shows the View Foreign Configuration screen.
----End
----End
8.9.5 Configure
This screen allows you to clear the current configurations.
Screen Introduction
Figure 8-222 shows the Configure screen.
----End
----End
Screen Introduction
Figure 8-223 shows the Manage MegaRAID Advanced Software Options screen.
Table 8-74 describes the parameters on the screen.
Parameter Description
Parameter Description
8.9.8 Exit
Exiting the Configuration Utility
Step 1 On the LSI SAS3108 main screen, press ESC. The Advanced screen shown in
Figure 8-224 is displayed.
----End
MegaRAID Storage Manager provides a GUI and does not support CLIs.
Table 8-75 lists the MegaRAID Storage Manager and user guide download links.
NOTICE
Download the MegaRAID Storage Manager software for the OS you are using
(Linux or Windows).
1. Click the software user guide download link listed in Table 8-75.
2. Click User Guide.
3. Click 12Gb/s MegaRAID SAS Software User Guide.
----End
Installing StorCLI
The StorCLI installation method varies depending on the OS type. The following
uses Windows, Linux, and VMware as examples to describe the StorCLI installation
procedure. For the installation procedures for other OSs, see the Readme file in
the software package.
The StorCLI tool supported by LSI SAS3108 is storcli64.
● Installing StorCLI in Windows
a. Upload the tool package applicable to Windows to a directory (such
as C:\tmp) on the server.
b. In the Run box, enter cmd and press Enter to open the Command
Prompt.
c. Run the cd Directory where the tool package is stored command, such
as cd C:\tmp.
For Windows, StorCLI does not require installation. You can directly run
RAID controller card management commands.
● Installing StorCLI in Linux
a. Use a file transfer tool (such as PuTTY) to upload the package applicable
to Linux to a directory (such as the /tmp directory) on the server OS.
b. On the Linux CLI, run the rpm -ivh /tmp/StorCLIxxx.rpm command to
install the StorCLI tool.
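After the RPM is installed, you can verify the tool by querying the controller list. The following commands are a sketch only: on most distributions the RPM installs the binary to the /opt/MegaRAID/storcli directory, but the actual installation path may differ depending on the package version.
[root@localhost ~]# cd /opt/MegaRAID/storcli
[root@localhost storcli]# ./storcli64 show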
NOTE
Function
Query the ID of an LSI SAS3108 RAID controller card.
Format
storcli64 show
Example
# Query the LSI SAS3108 RAID controller card ID.
[root@localhost ~]# ./storcli64 show
CLI Version = 007.0409.0000.0000 Nov 06, 2017
Operating system = Linux3.10.0-693.el7.x86_64
Status Code = 0
Status = Success
Description = None
Number of Controllers = 1
Host Name = localhost.localdomain
Operating System = Linux3.10.0-693.el7.x86_64
StoreLib IT Version = 07.0500.0200.0300
StoreLib IR3 Version = 15.02-0
System Overview :
===============
--------------------------------------------------------------------
Ctl Model Ports PDs DGs DNOpt VDs VNOpt BBU sPR DS EHS ASOs Hlth
--------------------------------------------------------------------
0 SAS3108 8 3 0 0 0 0 Msng Off - Y 3 Opt
--------------------------------------------------------------------
NOTE
In the command output, the number in the Ctl column is the ID of the RAID controller card.
In this example, the RAID controller card ID is 0.
Function
Query and set the interval between the spinup of drives and the number of drives
that spin up simultaneously upon power-on.
Format
storcli64 /ccontroller_id show spinupdelay
storcli64 /ccontroller_id show spinupdrivecount
storcli64 /ccontroller_id set spinupdelay=time
storcli64 /ccontroller_id set spinupdrivecount=count
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
Retain the default settings.
Example
# Query the Spinup parameters of a RAID controller card.
domino:~# ./storcli64 /c0 show spinupdelay
Controller = 0
Status = Success
Description = None
Controller Properties:
=====================
--------------------------
Ctrl_Prop Value
--------------------------
Spin Up Delay 2 second(s)
--------------------------
domino:~# ./storcli64 /c0 show spinupdrivecount
Controller = 0
Status = Success
Description = None
Controller Properties:
=====================
--------------------------
Ctrl_Prop Value
--------------------------
Spin Up Drive Count 4
--------------------------
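The corresponding set commands follow the format shown above. The values below are illustrative only; as noted in Usage Guidelines, it is recommended that you retain the default settings.
domino:~# ./storcli64 /c0 set spinupdelay=2
domino:~# ./storcli64 /c0 set spinupdrivecount=4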
Function
Set the PowerSave parameters for idle drives and hot spare drives.
Format
storcli64 /ccontroller_id set ds= state type= disktype spindowntime= time
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Enable the power saving mode for an idle drive.
domino:~# ./storcli64 /c0 set ds=on type=1 spindowntime=30
Controller = 0
Status = Success
Description = None
Controller Properties:
=====================
--------------------------
Ctrl_Prop Value
--------------------------
SpnDwnUncDrv Enable
SpnDwnTm 30 minutes
--------------------------
8.11.2.4 Setting the Initialization Function for a Physical Drive and Viewing
the Initialization Progress
Function
Set the initialization function for physical drives and view the initialization
progress.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id action initialization
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Initialize the drive in slot 3 and view the initialization progress.
domino:~# ./storcli64 /c0/e252/s3 start initialization
Controller = 0
Status = Success
Description = Start Drive Initialization Succeeded.
domino:~# ./storcli64 /c0/e252/s3 show initialization
Controller = 0
Status = Success
Description = Show Drive Initialization Status Succeeded.
------------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
------------------------------------------------------
/c0/e252/s3 0 In progress 0 Seconds
------------------------------------------------------
8.11.2.5 Setting the Data Erasing Mode for a Drive and Viewing the Erasing
Progress
Function
Set the data erasing mode for a drive and view the erasing progress.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id show erase
storcli64 /ccontroller_id/eenclosure_id/sslot_id stop erase
storcli64 /ccontroller_id/eenclosure_id/sslot_id start erase mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Erase the data in simple mode from the drive in slot 3 and view the erasing
progress.
domino:~# ./storcli64 /c0/e252/s3 start erase simple
Controller = 0
Status = Success
Description = Start Drive Erase Succeeded.
domino:~# ./storcli64 /c0/e252/s3 show erase
Controller = 0
Status = Success
Description = Show Drive Erase Status Succeeded.
------------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
------------------------------------------------------
/c0/e252/s3 0 In progress 0 Seconds
------------------------------------------------------
domino:~# ./storcli64 /c0/e252/s3 stop erase
Controller = 0
Status = Success
Description = Stop Drive Erase Succeeded.
Function
Set the background initialization rate, consistency check rate, drive patrol rate,
RAID rebuilding rate, and RAID capacity expansion and migration rate.
Format
storcli64 /ccontroller_id set action=value
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Set the drive patrol rate to 30.
domino:~# ./storcli64 /c0 set prrate=30
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
-----------------------
Ctrl_Prop Value
-----------------------
Patrol Read Rate 30%
-----------------------
Function
Enable the Stop On Error function so that the controller BIOS stops startup when
it detects an error.
Format
storcli64 /ccontroller_id set bios mode=action
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Enable the Stop On Error function.
domino:~# ./storcli64 /c0 set bios mode=soe
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
----------------
Ctrl_Prop Value
----------------
BIOS Mode SOE
----------------
Function
Create and delete a RAID array.
Syntax
Syntax Description
NOTE
Parameters
Parameter Description Value
NOTE
● For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller Card,
RAID Array, or Physical Drive Information.
● Use a comma (,) to separate multiple drives to be added to a RAID array. The format of
a single drive is enclosure_id:slot_id. The format of drives in consecutive slots is
enclosure_id:startid-endid.
Usage Guidelines
None
Example
# Create a RAID 0 array.
domino:~# ./storcli64 /c0 add vd r0 size=100GB drives=252:0-3
Controller = 0
Status = Success
Description = Add VD Succeeded
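The deletion command is not reproduced in the Syntax table above. As a hedged sketch, a virtual drive is typically deleted by specifying its VD ID with the del verb (VD 0 is used here for illustration); confirm the exact syntax against the StorCLI reference for your firmware version before use.
domino:~# ./storcli64 /c0/v0 del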
8.11.2.9 Setting the Cache Read and Write Policies of a RAID Array
Function
Set the cache read and write properties for a RAID array.
Format
storcli64 /ccontroller_id/vraid_id set wrcache=mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the cache read/write mode to wt.
domino:~# ./storcli64 /c0/v0 set wrcache=wt
Controller = 0
Status = Success
Description = None
Details Status :
==============
---------------------------------------
VD Property Value Status ErrCd ErrMsg
---------------------------------------
0 wrCache WT Success 0 -
---------------------------------------
Function
Set an I/O policy for a RAID group.
Syntax
storcli64 /ccontroller_id/vraid_id set iopolicy=mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the I/O policy of a RAID array to direct.
[root@linux ~]# storcli64 /c0/v1 set iopolicy=direct
CLI Version = 007.1416.0000.0000 July 24, 2020
Operating system = Linux 4.14.0-115.el7a.0.1.aarch64
Controller = 0
Status = Success
Description = None
Detailed Status :
===============
----------------------------------------
Function
Set an access policy for a RAID array.
Format
storcli64 /ccontroller_id/vraid_id set accesspolicy=mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the RAID access policy to rw.
domino:~# ./storcli64 /c0/v0 set accesspolicy=rw
Controller = 0
Status = Success
Description = None
Details Status :
==============
---------------------------------------
VD Property Value Status ErrCd ErrMsg
---------------------------------------
0 AccPolicy RW Success 0 -
---------------------------------------
Function
Set RAID foreground initialization.
Format
storcli64 /ccontroller_id/vraid_id start mode
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Quickly initialize a RAID array.
domino:~# ./storcli64 /c0/v0 start init
Controller = 0
Status = Success
Description = Start INIT Operation Success
Function
Pause, resume, and stop RAID background initialization and view the initialization
progress.
Format
storcli64 /ccontroller_id/vraid_id action bgi
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# View the background initialization progress.
domino:~# ./storcli64 /c0/v0 show bgi
Controller = 0
Status = Success
Description = None
VD Operation Status :
===================
------------------------------------------------------
Function
Set a virtual drive or physical drive as a boot drive.
Format
storcli64 /ccontroller_id/vvd_id set bootdrive=on
storcli64 /ccontroller_id/eenclosure_id/sslot_id set bootdrive=on
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set VD 0 to boot drive.
domino:~# ./storcli64 /c0/v0 set bootdrive=on
Controller = 0
Status = Success
Description = None
Detailed Status :
===============
----------------------------------------
VD Property Value Status ErrCd ErrMsg
----------------------------------------
0 Boot Drive On Success 0 -
----------------------------------------
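For the second format above (setting a physical drive as the boot drive), the invocation is analogous; the enclosure and slot IDs below are illustrative.
domino:~# ./storcli64 /c0/e252/s3 set bootdrive=on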
Function
Enable the emergency hot spare function and allow the emergency hot spare
function to be used when a SMART error occurs.
Format
storcli64 /ccontroller_id set eghs eug=state
storcli64 /ccontroller_id set eghs smarter=state
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Enable the emergency hot spare function and allow the emergency hot spare
function to be used when a SMART error occurs.
domino:~# ./storcli64 /c0 set eghs eug=on
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
------------------
Ctrl_Prop Value
------------------
EmergencyUG ON
------------------
domino:~# ./storcli64 /c0 set eghs smarter=on
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
-----------------------
Ctrl_Prop Value
-----------------------
EmergencySmarter ON
-----------------------
Function
Set the hot spare drive status to global or dedicated.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id add hotsparedrive [dgs=vd_id]
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Set the drive in slot 3 to a global hot spare drive.
domino:~# ./storcli64 /c0/e252/s3 add hotsparedrive
Controller = 0
Status = Success
Description = Add Hot Spare Succeeded.
# Set the drive in slot 3 to the dedicated hot spare drive for VD 0.
domino:~# ./storcli64 /c0/e252/s3 add hotsparedrive dgs=0
Controller = 0
Status = Success
Description = Add Hot Spare Succeeded.
Function
Start, pause, resume, and stop RAID rebuild, and query the progress.
Syntax
storcli64 /ccontroller_id/eenclosure_id/sslot_id action rebuild
Parameter Description
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Examples
# Manually start rebuild.
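The start command is omitted from the original example. Based on the syntax above and the drive shown in the progress output below, it would be issued as follows, and the progress is then queried with show rebuild:
domino:~# ./storcli64 /c0/e70/s7 start rebuild
# View the rebuild progress.
domino:~# ./storcli64 /c0/e70/s7 show rebuild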
-----------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
-----------------------------------------------------
/c0/e70/s7 5 In progress 26 Minutes
-----------------------------------------------------
Function
Start, pause, resume, and stop copyback, and query the progress.
Syntax
Syntax Description
Parameter Description
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Examples
# Manually start copyback.
[root@localhost ~]# ./storcli64 /c0/e70/s7 start copyback target=70:3
Controller = 0
Status = Success
Description = Start Drive Copyback Succeeded.
-----------------------------------------------------
Drive-ID Progress% Status Estimated Time Left
-----------------------------------------------------
/c0/e70/s3 17 In progress 16 Minutes
-----------------------------------------------------
Function
Set a SMART scan interval.
Format
storcli64 /ccontroller_id set smartpollinterval=value
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Set the SMART scan interval to 60 seconds.
domino:~# ./storcli64 /c0 set smartpollinterval=60
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
-------------------------------
Ctrl_Prop Value
-------------------------------
SmartPollInterval 60 second(s)
-------------------------------
Function
Adjust the available space of the virtual drive to expand its capacity if it does not
use all the capacity of member drives.
Format
storcli64 /ccontroller_id/vvd_id expand size=capacity [expandarray]
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Query the VD's capacity that can be used for expansion.
[root@localhost ~]# storcli64 /c0/v1 show expansion
CLI Version = 007.1416.0000.0000 July 24, 2020
Operating system = Linux 4.19.36-vhulk1907.1.0.h691.eulerosv2r8.aarch64
Controller = 0
Status = Success
Description = None
EXPANSION INFORMATION :
=====================
--------------------------------------------
VD Size OCE NoArrExp WithArrExp Status
--------------------------------------------
1 10.000 GB Y 3.627 TB 5.447 TB -
--------------------------------------------
NOTE
In this example, if the capacity to be added is less than 3.627 TB, the expandarray parameter is not required; if it is greater than 3.627 TB but less than 5.447 TB, the expandarray parameter is required.
NOTE
● You can also add drives to the RAID array. For details, see 8.11.2.21 Adding Drives to a
RAID Array or Migrating RAID Level After Expansion.
● The RAID controller card adjusts the capacity to be added based on the drive type.
Therefore, the expanded capacity may be varied.
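To perform the expansion itself, run the expand command from the format above. The size value below is illustrative only; the exact capacity format follows the size parameter described in the Parameters table, and expandarray should be appended only when required per the preceding NOTE.
[root@localhost ~]# storcli64 /c0/v1 expand size=10GB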
Function
Add drives to a RAID array to expand the RAID capacity.
Format
storcli64 /ccontroller_id/vvd_id start migrate type=rlevel option=add
drives=enclosure_id:slot_id
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Add the drive in slot 2 to the RAID 0 array for capacity expansion.
domino:~# ./storcli64 /c0/v0 start migrate type=r0 option=add drives=252:2
Controller = 0
Status = Success
Description = Start MIGRATE Operation Success.
domino:~# ./storcli64 /c0/v0 show migrate
Controller = 0
Status = Success
Description = None
VD Operation Status :
===================
-------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-------------------------------------------------------
0 Migrate 1 In progress 13 Minutes
-------------------------------------------------------
# Add the drive to a single-disk RAID 0 array and change the RAID level to RAID 1.
domino:~# ./storcli64 /c0/v0 start migrate type=r1 option=add drives=252:3
Controller = 0
Status = Success
Description = Start MIGRATE Operation Success.
domino:~# ./storcli64 /c0/v0 show migrate
Controller = 0
Status = Success
Description = None
VD Operation Status :
===================
-------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-------------------------------------------------------
0 Migrate 1 In progress 14 Minutes
-------------------------------------------------------
NOTE
You can also expand the RAID capacity by adding the available capacity for its member
drives. For details, see 8.11.2.20 Increasing Member Drive Available Space to Expand
RAID.
Function
Query and clear PreservedCache data.
Format
storcli64 /ccontroller_id show preservedcache
storcli64 /ccontroller_id/vvd_id delete preservedcache force
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Query PreservedCache data.
domino:~# ./storcli64 /c0 show preservedcache
Controller = 0
Status = Success
Description = No Virtual Drive has Preserved Cache Data.
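# Clear PreservedCache data. Based on the delete format above, with VD 0 used for illustration:
domino:~# ./storcli64 /c0/v0 delete preservedcache force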
Function
Set consistency check parameters.
Format
storcli64 /ccontroller_id[/vvd_id] show cc
storcli64 /ccontroller_id set cc=mode delay=delaytime starttime=time excludevd=vd_id
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
If the show command does not contain /vvd_id, the consistency check parameters
are queried.
If the show command contains /vvd_id, the consistency check progress is queried.
Example
# Set automatic consistency check parameters.
domino:~# ./storcli64 /c0 set cc=conc delay=1 starttime=2016/07/14 22:00:00 excludevd=0
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
------------------------------------
Ctrl_Prop Value
------------------------------------
CC Mode CONC
CC delay 1
CC Starttime 2016/07/14 22:00:00
CC ExcludeVD(0) Success
------------------------------------
VD Operation Status :
===================
-----------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-----------------------------------------------------------
0 CC - Not in progress -
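# Query the consistency check progress of a virtual drive. Based on the show format above, with VD 0 used for illustration:
domino:~# ./storcli64 /c0/v0 show cc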
Function
Query and set patrolread parameters.
Format
storcli64 /ccontroller_id set patrolread starttime=time
maxconcurrentpd=number
storcli64 /ccontroller_id set patrolread delay=delaytime
storcli64 /ccontroller_id set patrolread={on mode=<auto|manual>}|{off}
storcli64 /ccontroller_id show patrolread
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Set the patrolread start time to 2016/07/15 23:00:00 and the number of drives
to be checked concurrently to 2.
domino:~# ./storcli64 /c0 set patrolread starttime=2016/07/15 23:00:00 maxconcurrentpd=2
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
---------------------------------------
Ctrl_Prop Value
---------------------------------------
PR Starttime 2016/07/15 23:00:00
PR MaxConcurrentPd 2
---------------------------------------
Controller Properties :
=====================
---------------------------------------
Ctrl_Prop Value
---------------------------------------
Patrol Read Mode auto
---------------------------------------
Function
Query and set CacheFlush parameters.
Format
storcli64 /ccontroller_id show cacheflushint
storcli64 /ccontroller_id set cacheflushint=time
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Set the CacheFlush interval to 10.
domino:~# ./storcli64 /c0 set cacheflushint=10
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
---------------------------
Ctrl_Prop Value
---------------------------
Cache Flush Interval 10 sec
---------------------------
Function
Enable the transparent transmission function of a RAID controller card and specify
the pass-through drive. After the configuration, the OS can directly manage the
drive.
Format
storcli64 /ccontroller_id set jbod=state
storcli64 /ccontroller_id/eenclosure_id/sslot_id set jbod
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Enable the JBOD function of a RAID controller card and set the drive in slot 7 as
the JBOD drive.
domino:~# ./storcli64 /c0 set jbod=on
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
----------------
Ctrl_Prop Value
----------------
JBOD ON
----------------
domino:~# ./storcli64 /c0/e252/s7 set JBOD
Controller = 0
Status = Success
Description = Set Drive JBOD Succeeded.
Function
Set drive status.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id set state
Parameters
Parameter Description Value
– Enables a drive in the JBOD state to change to the ugood state. A drive in the ugood state can be used to create a RAID array or serve as a hot spare drive.
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Change the state of the drive in slot 1 from Unconfigured Bad to
Unconfigured Good.
domino:~# ./storcli64 /c0/e0/s1 set good force
Controller = 0
Status = Success
Description = Set Drive Good Succeeded.
Function
Turn on and off the UID indicator of a specified drive.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id action locate
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Turn on the UID indicator of the drive in slot 7.
domino:~# ./storcli64 /c0/e252/s7 start locate
Controller = 0
Status = Success
Description = Start Drive Locate Succeeded.
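# Turn off the UID indicator of the drive in slot 7. Based on the format above (with stop as the action):
domino:~# ./storcli64 /c0/e252/s7 stop locate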
Function
Query detailed information about RAID controller cards, physical drives, and
virtual drives.
Format
storcli64 /ccontroller_id show
storcli64 /ccontroller_id/eenclosure_id/sslot_id show all
storcli64 /ccontroller_id/vvd_id show all
Parameters
Parameter Description Value
Usage Guidelines
Table 8-76 describes the fields in the command output.
Example
# Query detailed information about RAID controller card 0.
[root@localhost ~]# ./storcli64 /c0 show
Generating detailed summary of the adapter, it may take a while to complete.
TOPOLOGY :
========
-----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type State BT Size PDC PI SED DS3 FSpace TR
-----------------------------------------------------------------------------
0 - - - - RAID1 Optl N 744.125 GB dflt N N none N N
0 0 - - - RAID1 Optl N 744.125 GB dflt N N none N N
0 0 0 252:1 18 DRIVE Onln N 744.125 GB dflt N N none - N
0 0 1 252:0 48 DRIVE Onln N 744.125 GB dflt N N none - N
-----------------------------------------------------------------------------
Virtual Drives = 1
VD LIST :
=======
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID1 Optl RW Yes RWBD - ON 744.125 GB
---------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
Physical Drives = 6
PD LIST :
=======
---------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
---------------------------------------------------------------------------------
252:0 48 Onln 0 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
252:1 18 Onln 0 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
252:2 19 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BA800G4 U -
252:3 20 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BA800G4 U -
252:4 49 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
252:5 47 UGood - 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
---------------------------------------------------------------------------------
Cachevault_Info :
===============
------------------------------------
Model State Temp Mode MfgDate
------------------------------------
CVPM02 Optimal 28C - 2016/11/04
------------------------------------
Drive /c0/e252/s0 :
=================
---------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
---------------------------------------------------------------------------------
252:0 48 Onln 0 744.125 GB SATA SSD N N 512B INTEL SSDSC2BB800G4 U -
---------------------------------------------------------------------------------
Port Information :
================
-----------------------------------------
Port Status Linkspeed SAS address
-----------------------------------------
0 Active 6.0Gb/s 0x4433221100000000
-----------------------------------------
Inquiry Data =
40 00 ff 3f 37 c8 10 00 00 00 00 00 3f 00 00 00
00 00 00 00 48 50 4c 57 31 35 36 37 31 30 52 59
30 38 52 30 4e 47 20 20 00 00 00 00 00 00 32 44
31 30 33 30 30 37 4e 49 45 54 20 4c 53 53 53 44
32 43 42 42 30 38 47 30 20 34 20 20 20 20 20 20
20 20 20 20 20 20 20 20 20 20 20 20 20 20 01 80
00 40 00 2f 00 40 00 00 00 00 07 00 ff 3f 10 00
3f 00 10 fc fb 00 01 bf ff ff ff 0f 00 00 07 00
/c0/v0 :
======
---------------------------------------------------------
DG/VD TYPE State Access Consist Cache sCC Size Name
---------------------------------------------------------
1/0 RAID1 Optl RW Yes RWTD - 1.089 TB
---------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|B=Blocked|Consist=Consistent|
R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
PDs for VD 0 :
============
-----------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
-----------------------------------------------------------------------
25:22 14 Onln 1 1.089 TB SAS HDD N N 512B ST1200MM0007 U
25:23 26 Onln 1 1.089 TB SAS HDD N N 512B ST1200MM0007 U
-----------------------------------------------------------------------
VD0 Properties :
==============
Strip Size = 256 KB
Number of Blocks = 2341795840
VD has Emulated PD = No
Span Depth = 1
Number of Drives Per Span = 2
Write Cache(initial setting) = WriteThrough
Disk Cache Policy = Disk's Default
Encryption = None
Data Protection = Disabled
Active Operations = None
Exposed to OS = Yes
Creation Date = 04-01-2018
Creation Time = 12:38:35 PM
Emulation type = None
Function
Create a CacheCade virtual drive on an SSD.
Format
storcli64 /ccontroller_id add vd cc raidlevel drives=enclosure_id:disk
storcli64 /ccontroller_id/vvd_id del cachecade
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Configure a single-drive CacheCade RAID 0.
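The example command is missing from the original text. Following the format above, a sketch of the invocation is shown below; the RAID level token and the enclosure and slot IDs are illustrative and should be adapted to your configuration.
domino:~# ./storcli64 /c0 add vd cc r0 drives=252:4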
Function
Restore Frn-Bad drives in a RAID array to Online.
Format
storcli64 /ccontroller_id/eenclosure_id/sslot_id set good
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
The drives in slots 1 and 5 are in the UBad F state, as shown in Figure 8-225.
Perform the following steps to restore the state to online:
3. If the RAID array needs to be rebuilt, run the following command for slot 1 as
an example:
domino:~# ./storcli64 /c0/e0/s1 start rebuild
Controller = 0
Status = Success
Description = Start Drive Rebuild Succeeded.
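Steps 1 and 2 are not shown in this example. As a hedged sketch, they would typically restore each affected drive with the set good command from the Format above and, if the RAID controller card then reports a foreign configuration, import it (the import command is described in the foreign configuration section later in this chapter); slot 1 is used here for illustration.
domino:~# ./storcli64 /c0/e0/s1 set good
domino:~# ./storcli64 /c0/fall import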
Function
Query supercapacitor information, such as the supercapacitor name and the cache
capacity of the TFM Flash card.
Format
storcli64 /ccontroller_id/cv show all
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Example
# Query supercapacitor information.
[root@localhost ~]# ./storcli64 /c0/cv show all
CLI Version = 007.0409.0000.0000 Nov 06, 2017
Operating system = Linux3.10.0-514.el7.x86_64
Controller = 0
Status = Success
Description = None
Cachevault_Info :
===============
--------------------
Property Value
--------------------
Type CVPM02
Temperature 28 C
State Optimal
--------------------
Firmware_Status :
===============
---------------------------------------
Property Value
---------------------------------------
Replacement required No
No space to cache offload No
Module microcode update required No
---------------------------------------
GasGaugeStatus :
==============
------------------------------
Property Value
------------------------------
Pack Energy 294 J
Capacitance 108 %
Remaining Reserve Space 0
------------------------------
Design_Info :
===========
------------------------------------
Property Value
------------------------------------
Date of Manufacture 04/11/2016
Serial Number 22417
Manufacture Name LSI
Design Capacity 288 J
Device Name CVPM02
tmmFru N/A
CacheVault Flash Size 8.0 GB
tmmBatversionNo 0x05
tmmSerialNo 0xee7d
tmm Date of Manufacture 09/12/2016
tmmPcbAssmNo 022544412A
tmmPCBversionNo 0x03
tmmBatPackAssmNo 49571-13A
scapBatversionNo 0x00
scapSerialNo 0x5791
Properties :
==========
--------------------------------------------------------------
Property Value
--------------------------------------------------------------
Auto Learn Period 27d (2412000 seconds)
Next Learn time 2018/08/03 17:48:38 (586633718 seconds)
Learn Delay Interval 0 hour(s)
Auto-Learn Mode Transparent
--------------------------------------------------------------
NOTE
● In the command output, Device Name CVPM02 indicates that the supercapacitor name
is CVPM02, and CacheVault Flash Size 8.0GB indicates that the cache capacity of the
TFM Flash card is 8.0 GB.
● If State is FAILED, replace the supercapacitor.
Function
Upgrade the drive firmware.
Format
./storcli64 /ccontroller_id /eenclosure_id/sslot_id download src=FW_name.bin
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Example
# Upgrade the drive firmware.
[root@localhost ~]# ./storcli64 /c0/e64/s5 download src=5200_D1MU004_Releasefullconcatenatedbinary.bin
Starting microcode update .....please wait...
Flashing PD image ..... please wait...
CLI Version = 007.0504.0000.0000 Nov 22,2017
Operation system = Linux 3.10.0-514.el7.x86_64
Controller = 0
Status = Success
Description = Firmware Download succeeded.
Function
Enable and disable the encryption function, set a password, modify the encryption
settings, and encrypt a JBOD drive or RAID array.
Format
storcli64 /ccontroller_id set securitykey=key
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Examples
# Enable the encryption function.
domino:~# ./storcli64 /c0 set securitykey=Huawei12#$
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
---------------------
Ctrl Method Result
---------------------
0 set Key Success
---------------------
[root@localhost ~]#
# Set a password.
domino:~# ./storcli64 /c0 set securitykey=Huawei12#$ passphrase=@Qwerty123
Controller = 0
Status = Success
Description = None
Controller Properties :
=====================
---------------------
Ctrl Method Result
---------------------
0 set Key Success
---------------------
[root@localhost ~]#
Function
Rebuild the RAID array manually.
Syntax
storcli64 /ccontroller_id/eenclosure_id/sslot_id insert dg=DG array=Arr row=Row
storcli64 /ccontroller_id/eenclosure_id/sslot_id start rebuild
Parameters
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
To rebuild a RAID array, perform the following steps:
1. Run the ./storcli64 /c0 show command to query the DG, Arr, and Row
information of the faulty drive.
Example
# Add the drive to the RAID array.
[root@localhost ~]# storcli64 /c0/e252/s1 insert dg=0 array=0 row=0
CLI Version = 007.0504.0000.0000 Nov 22, 2017
Operating system = Linux 3.10.0-693.el7.x86_64
Controller = 0
Status = Success
Description = Insert Drive Succeeded.
Function
The status of a hot spare drive changes to UBad after it is reinstalled.
If auto-restore of hot spare drives is enabled, the hot spare drive automatically returns to the hot spare state. If auto-restore of hot spare drives is disabled, you need to manually restore the hot spare drive.
Format
storcli64 /ccontroller_id set restorehotspare=on
storcli64 /ccontroller_id set restorehotspare=off
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None
Example
# Enable auto-restore of hot spare drives.
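Based on the format above, the command would be:
domino:~# ./storcli64 /c0 set restorehotspare=on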
Function
Set the cache status of member drives in a RAID array.
Format
storcli64 /ccontroller_id/vvd_id set pdcache=action
Parameter description
Parameter Description Value
For details about how to query the IDs, see 8.11.2.29 Querying RAID Controller
Card, RAID Array, or Physical Drive Information.
Usage Guidelines
None
Example
# Enable the drive cache of RAID array 0.
./storcli64 /c0/v0 set pdcache=on
Controller = 0
Status = Success
Description = None
Detailed Status :
===============
---------------------------------------
VD Property Value Status ErrCd ErrMsg
---------------------------------------
0 PdCac On Success 0 -
---------------------------------------
Function
View, import, and delete foreign configurations of a RAID controller card.
Format
storcli64 /ccontroller_id/fall import preview
storcli64 /ccontroller_id/fall import
storcli64 /ccontroller_id/fall delete
Parameters
Parameter Description Value
For details about how to query the RAID controller card ID, see 8.11.2.1 Querying
the RAID Controller Card ID.
Usage Guidelines
None.
Example
# View foreign configurations of a RAID controller card.
[root@localhost ~]# ./storcli64 /c0/fall import preview
CLI Version = 007.0504.0000.0000 Nov 22, 2017
Operating system = Linux 3.10.0-957.el7.x86_64
Controller = 0
Status = Success
Description = Operation on foreign configuration Succeeded
FOREIGN PREVIEW :
===============
9 SoftRAID
9.1 Overview
SoftRAID is a simplified RAID solution that supports RAID 0, 1, 5, and 10. It uses
the Intel Platform Controller Hub (PCH) to output SATA signals and uses the LSI
MegaRAID software solution.
Item Description
Number of supported RAID arrays: 8
Item Description
● SoftRAID requires licenses. If your server does not have licenses, contact Huawei
technical support to purchase licenses and obtain the installation guide.
● A special key is required for SoftRAID to support RAID 5. To configure RAID 5, contact
Huawei technical support to purchase the key and obtain the installation guide.
● Replacing the mainboard will invalidate the configured SoftRAID.
9.2 Functions
RAID 0: 1 to 6 member drives; 0 faulty drives allowed
RAID 1: 2 member drives; 1 faulty drive allowed
RAID 5: 3 to 6 member drives; 1 faulty drive allowed
NOTE
● The failed drives cannot be adjacent. For details, see A.2 RAID Levels.
● Each span of a RAID 10 array allows for only one failed drive.
When using the SoftRAID, you need to install a RAID key to support RAID 5.
9.2.2 Initialization
● Fast initialization: In this foreground process, the firmware writes zeros to the
first 100 MB of the VD, which changes to Optimal status once initialized.
● Slow initialization: In this foreground process, the firmware writes zeros to the
entire VD, during which the VD is in initialization status.
● Background initialization:
– For RAID 1 and RAID 10: When data is inconsistent between primary and
secondary member drives, data will be copied from the primary drive to
the secondary drive during background initialization to overwrite the
original data on the secondary drive.
– For RAID 5: Reads and checks parity of data on all member drives. If the
result is inconsistent with the parity drive, the newly generated data will
overwrite the original data on the parity drive.
NOTE
Drive Initialization
Drive initialization is a process of formatting a drive by repeatedly writing zeros to
it.
Consistency check applies only to initialized RAID arrays. For details, see 9.4.5
Configuring Drive Properties.
● When a hot spare drive is available, if a member drive of a RAID array with
redundancy is faulty, the RAID controller card automatically adds the hot
spare drive to the RAID array and uses the data on other member drives or
redundant data to rebuild data (same as the data on the original member
drive) on the new member drive. During the rebuild process, the RAID array is
in the degraded or partially degraded state, and the new member drive (the
original backup or idle drive) is in the rebuild state.
● If the hot spare or emergency spare function is not enabled and a member
drive of a RAID array with redundancy is faulty, the RAID array enters the
degraded or partially degraded state. After the faulty drive is removed and a
new drive is inserted, the RAID controller card automatically adds the new
drive to the RAID group and performs rebuild.
The LSI SoftRAID supports a maximum of six hot spare drives. If one or two
member drives of the same type as hot spare drives fail, hot spare drives take over
data processing to prevent data loss or fault deterioration.
For details about how to configure hot spare drives, see 9.4.1 Configuring a Hot
Spare Drive.
NOTE
The HDDs and SSDs cannot be used as the hot spare drives of each other.
SoftRAID does not support import of foreign configurations. Therefore, the newly
installed drives cannot contain foreign RAID configurations.
Scenarios
The LSI SoftRAID Configuration Utility runs independently from the operating
system and allows you to configure and manage LSI SoftRAID in an easy way.
Prerequisites
Conditions
● You have logged in to the server through the Remote Virtual Console and can
manage the server on a real-time basis.
● On the BIOS, the controller card working mode is set to RAID.
– On the Grantley platform, two SoftRAID controllers are available.
Configure either of them as required.
Data
Data preparation is not required for this operation.
Procedure
Step 1 Access the server using the Remote Virtual Console.
Step 2 Restart the server.
On the remote console shown in Figure 9-2, click Forced System Reset.
NOTE
● If "SAS/SATA RAID key is Detected" shown in Figure 9-3 is displayed during server
startup, a RAID key has been imported, and LSI SoftRAID supports RAID 0, 1, 10 and 5.
● If a RAID key is not configured, LSI SoftRAID supports only RAID 0, 1, and 10.
----End
Scenarios
For details about the number of drives supported by different RAID levels, see
Table 9-2.
The LSI SoftRAID BIOS allows you to create a RAID array using any of the
following methods:
● Easy Configuration: This is a fast creation mode. In this mode, the existing
RAID configuration data on drives is not deleted during RAID creation, but the
RAID storage capacity cannot be configured.
● New Configuration: This is a common creation mode. In this mode, the
existing RAID configuration data on drives is deleted during RAID creation,
and the RAID storage capacity can be configured.
● View/Add Configuration: This creation mode does not affect the existing
RAID configuration data, and the RAID storage capacity can be configured.
The procedures and screens of the three modes are similar. This section describes
only the New Configuration mode as an example.
NOTICE
● The LSI SoftRAID supports SATA HDDs and SATA SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or that the data on
drives is not required.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 9.6.1 Logging In to the Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Configure > New
Configuration.
Step 4 Use ↑ and ↓ to select drives to be added to the RAID array and press the space bar
to change the drive status from READY to ONLINE, as shown in Figure 9-7.
NOTE
You can select an idle drive and press F4 to configure it as a hot spare drive.
Step 6 Press the space bar to select the drive array to be configured, as shown in Figure
9-9.
Step 8 Use ↑ and ↓ to select a property and press Enter to set the value.
Table 9-4 describes the RAID array properties.
RAID Select RAID and press Enter to display the available RAID
levels.
For details about the number of drives supported by different
RAID levels, see Table 9-2.
Units Select Units and press Enter to set the storage capacity unit.
The unit can be MB, GB, or TB.
Size Select Size and press Enter to set the RAID storage capacity.
NOTE
The size of a RAID 10 array is automatically generated and cannot be
configured.
DWC (Disk Write Cache): Indicates whether drive write caching is enabled for member drives. It is disabled by default. Select DWC and press Enter to set this parameter.
Step 10 Configure the other RAID parameters. For details, see Table 9-4.
The Virtual Drive(s) Configured pane displays the RAID configuration. Determine
whether to save the configuration based on the displayed information.
2. Press the space bar to select the virtual drive to be initialized and press F10.
A confirmation dialog box is displayed.
3. Select Yes and press Enter to start initialization.
When the initialization is complete, "100% Completed" is displayed, as shown
in Figure 9-13.
----End
NOTICE
● The LSI SoftRAID supports SATA HDDs and SATA SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or that the data on
drives is not required.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 9.6.1 Logging In to the Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Configure > New
Configuration.
A confirmation dialog box is displayed, as shown in Figure 9-14.
Step 4 Use ↑ and ↓ to select drives to be added to the RAID array and press the space bar
to change the drive status from READY to ONLINE, as shown in Figure 9-16.
NOTE
You can select an idle drive and press F4 to configure it as a hot spare drive.
Step 6 Press the space bar to select the drive array to be configured, as shown in Figure
9-18.
Step 8 Use ↑ and ↓ to select a property and press Enter to set the value.
Table 9-5 describes the RAID array properties.
RAID Select RAID and press Enter to display the available RAID
levels.
For details about the number of drives supported by different
RAID levels, see Table 9-2.
Units Select Units and press Enter to set the storage capacity unit.
The unit can be MB, GB, or TB.
Size Select Size and press Enter to set the RAID storage capacity.
NOTE
The size of a RAID 10 array is automatically generated and cannot be
configured.
Step 10 Configure the other RAID parameters. For details, see Table 9-5.
The Virtual Drive(s) Configured pane displays the RAID configuration. Determine
whether to save the configuration based on the displayed information.
2. Press the space bar to select the virtual drive to be initialized and press F10.
A confirmation dialog box is displayed.
3. Select Yes and press Enter to start initialization.
When the initialization is complete, "100% Completed" is displayed, as shown
in Figure 9-22.
----End
Scenarios
For details about the number of drives supported by different RAID levels, see
Table 9-2.
The LSI SoftRAID BIOS allows you to create a RAID array using any of the
following methods:
● Easy Configuration: This is a fast creation mode. In this mode, the existing
RAID configuration data on drives is not deleted during RAID creation, but the
RAID storage capacity cannot be configured.
● New Configuration: This is a common creation mode. In this mode, the
existing RAID configuration data on drives is deleted during RAID creation,
and the RAID storage capacity can be configured.
● View/Add Configuration: This creation mode does not affect the existing
RAID configuration data, and the RAID storage capacity can be configured.
The procedures and screens of the three modes are similar. This section describes
only the New Configuration mode as an example.
NOTICE
● The LSI SoftRAID supports SATA HDDs and SATA SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or that the data on
drives is not required.
● When using the SoftRAID, you need to install a RAID key to support RAID 5.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 9.6.1 Logging In to the Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Configure > New
Configuration.
Step 4 Use ↑ and ↓ to select drives to be added to the RAID array and press the space bar
to change the drive status from READY to ONLINE, as shown in Figure 9-25.
NOTE
You can select an idle drive and press F4 to configure it as a hot spare drive.
Step 6 Press the space bar to select the drive array to be configured, as shown in Figure
9-27.
Step 8 Use ↑ and ↓ to select a property and press Enter to set the value.
Table 9-6 describes the RAID array properties.
RAID Select RAID and press Enter to display the available RAID
levels.
For details about the number of drives supported by different
RAID levels, see Table 9-2.
Units Select Units and press Enter to set the storage capacity unit.
The unit can be MB, GB, or TB.
Size Select Size and press Enter to set the RAID storage capacity.
NOTE
The size of a RAID 10 array is automatically generated and cannot be
configured.
Step 10 Configure the other RAID parameters. For details, see Table 9-6.
The Virtual Drive(s) Configured pane displays the RAID configuration. Determine
whether to save the configuration based on the displayed information.
2. Press the space bar to select the virtual drive to be initialized and press F10.
A confirmation dialog box is displayed.
3. Select Yes and press Enter to start initialization.
When the initialization is complete, "100% Completed" is displayed, as shown
in Figure 9-31.
----End
NOTICE
● The LSI SoftRAID supports SATA HDDs and SATA SSDs. Drives in one RAID array
must be of the same type, but can have different capacities or be provided by
different vendors.
● Data on a drive will be deleted after the drive is added to a RAID array. Before
creating a RAID array, check that there is no data on drives or that the data on
drives is not required.
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 9.6.1 Logging In to the Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Configure > New
Configuration.
A confirmation dialog box is displayed, as shown in Figure 9-32.
Step 4 Use ↑ and ↓ to select drives to be added to the RAID array and press the space bar
to change the drive status from READY to ONLINE, as shown in Figure 9-34.
NOTE
You can select an idle drive and press F4 to configure it as a hot spare drive.
Step 6 Press the space bar to select the drive array to be configured, as shown in Figure
9-36.
Step 8 Use ↑ and ↓ to select a property and press Enter to set the value.
Table 9-7 describes the RAID array properties.
RAID: Select RAID and press Enter to display the available RAID levels. For details about the number of drives supported by different RAID levels, see Table 9-2.
Units: Select Units and press Enter to set the storage capacity unit. The unit can be MB, GB, or TB.
Size: Select Size and press Enter to set the RAID storage capacity.
NOTE
The size of a RAID 10 array is automatically generated and cannot be configured.
Step 10 Configure the other RAID parameters. For details, see Table 9-7.
The Virtual Drive(s) Configured pane displays the RAID configuration. Determine
whether to save the configuration based on the displayed information.
2. Press the space bar to select the virtual drive to be initialized and press F10.
A confirmation dialog box is displayed.
3. Select Yes and press Enter to start initialization.
When the initialization is complete, "100% Completed" is displayed, as shown
in Figure 9-40.
----End
Prerequisites
Conditions
● You have logged in to the Configuration Utility. For details, see 9.6.1 Logging
In to the Configuration Utility.
● A RAID has been created.
Data
Data preparation is not required for this operation.
Procedure
Step 1 On the main screen, select Configure > Select Boot Drive and press Enter.
The boot options are displayed, as shown in Figure 9-41.
Step 3 Press Esc to exit the boot options and return to the Configuration Utility main
screen.
----End
Scenarios
You can configure a hot spare drive on the LSI SoftRAID Configuration Utility in
either of the following ways:
Prerequisites
Conditions
NOTICE
● An idle drive that is not added to a RAID array can be configured as a hot spare
drive.
● A hot spare drive must be a SATA HDD or SSD, and its capacity cannot be less
than that of the smallest member drive in the RAID array.
Data
Procedure
Step 1 Log in to the Configuration Utility. For details, see 9.6.1 Logging In to the
Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Objects > Physical Drive.
Step 3 Use ↑ and ↓ to select a drive in READY state and press Enter.
The drive setting menu is displayed, as shown in Figure 9-43. Table 9-8 describes
the options.
Parameter Description
----End
Additional Information
Related Tasks
To cancel the global hot spare drive, see 9.4.2 Deleting a Hot Spare Drive.
Related Concepts
None
Procedure
Step 1 Log in to the Configuration Utility. For details, see 9.6.1 Logging In to the
Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Objects > Physical Drive.
The drive list is displayed. See Figure 9-45.
Step 3 Use ↑ and ↓ to select a drive in HOTSP state and press Enter.
The drive setting menu is displayed, as shown in Figure 9-46. Table 9-9 describes
the options.
----End
Procedure
Step 1 Log in to the Configuration Utility. For details, see 9.6.1 Logging In to the
Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Objects > Adapter > Adapter-0
and press Enter.
The controller property screen is displayed, as shown in Figure 9-48.
Step 3 Use ↑ and ↓ to select the parameters to be set and press Enter.
Property Description
Chk Const Rate: I/O duty cycle during consistency check. The default value is 30%.
----End
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 9.6.1 Logging In
to the Configuration Utility.
Step 2 On the main screen, choose Objects > Virtual Drive and press Enter.
Step 3 Select the virtual drive to be queried or configured and press Enter.
The virtual drive operation menu is displayed, as shown in Figure 9-50.
Step 5 Use ↑ and ↓ to select the parameters to be set and press Enter.
Table 9-11 describes the virtual drive properties.
Stripe Size: Size of the data strip on each member drive of the virtual drive. Default value: 64 KB.
Step 7 Press Esc to exit the virtual drive selection screen and return to the Configuration
Utility main screen.
----End
Scenarios
You can view drive properties and perform related operations.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 9.6.1 Logging In to the
Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Objects > Physical Drive and
press Enter.
Parameter Description
Step 5 Press Esc to exit the drive list and return to the Configuration Utility main screen.
Step 6 Exit the Configuration Utility.
1. Press Esc to exit the Configuration Utility.
2. Restart the server.
----End
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 9.6.1 Logging In
to the Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Rebuild and press Enter.
The drive list is displayed. See Figure 9-54.
Step 3 Use ↑ and ↓ to locate the drive to be synchronized, and press Space to select it.
The selected drive turns red. See Figure 9-55.
NOTE
Step 6 Press Esc to exit the synchronization menu and return to the Configuration Utility
main screen.
----End
Scenarios
The consistency check operation checks for data inconsistencies between mirrored drives or between primary data blocks and redundant data.
The LSI SoftRAID management software provides two data consistency check
methods:
The procedures and displayed information of the two methods are similar. This
topic describes the first method.
Procedure
Step 1 Log in to the Configuration Utility. For details, see 9.6.1 Logging In to the
Configuration Utility.
Step 2 On the Configuration Utility main screen, choose Check Consistency and press
Enter.
The virtual drive list is displayed, as shown in Figure 9-56.
Step 3 Press the space bar to select the virtual drive to be checked and press F10.
The consistency check confirmation dialog box is displayed.
NOTE
Step 4 Select Yes and press Enter to start the consistency check.
After the consistency check is complete, the message "100% Completed" is
displayed.
Step 5 Press Esc to exit the consistency check screen and return to the Configuration
Utility main screen.
Step 6 Press Esc to exit the Configuration Utility.
Step 7 Restart the server.
----End
Procedure
Step 1 Log in to the Configuration Utility. For details, see 9.6.1 Logging In to the
Configuration Utility.
Step 2 Clear all RAID information.
1. On the main screen, select Configure > Clear Configuration and press Enter.
A confirmation dialog box is displayed.
2. Select Yes and press Enter to clear all RAID configurations.
3. Press Esc to exit the configuration menu and return to the Configuration
Utility main screen.
Clearing a RAID Array
4. On the main screen, choose Objects > Virtual Drive and press Enter.
The virtual drive list is displayed, as shown in Figure 9-57.
----End
9.5 Troubleshooting
This section describes solutions to drive faults. For other situations, see the
Huawei Server Maintenance Guide.
Solution
NOTE
● If the original RAID array has a hot spare drive, the newly installed drive
becomes the hot spare drive.
● If a redundant RAID array does not have a hot spare drive, the data of the faulty drive is automatically rebuilt on the newly installed drive.
----End
Scenarios
The LSI SoftRAID Configuration Utility runs independently of the operating system and provides a convenient way to configure and manage the LSI SoftRAID.
Prerequisites
Conditions
● You have logged in to the server through the Remote Virtual Console and can manage the server in real time.
● On the BIOS, the controller card working mode is set to RAID.
– On the Grantley platform, two SoftRAID controllers are available.
Configure either of them as required.
Data
Procedure
Step 1 Access the server using the Remote Virtual Console.
On the remote console shown in Figure 9-58, click Forced System Reset.
NOTE
● If "SAS/SATA RAID key is Detected" shown in Figure 9-59 is displayed during server
startup, a RAID key has been imported, and LSI SoftRAID supports RAID 0, 1, 10 and 5.
● If a RAID key is not configured, LSI SoftRAID supports only RAID 0, 1, and 10.
----End
9.6.2 Configure
The Configure menu allows you to configure RAID and boot devices.
Screen Introduction
Figure 9-61 shows the Configure menu. Table 9-14 describes the operation items
on the Configure menu.
View/Add Configuration: This creation mode does not affect the existing RAID configuration data, and the RAID storage capacity can be configured.
NOTE
ONLINE indicates that the drive has been configured with RAID, and READY indicates that
the drive is idle.
NOTE
In Figure 9-63, A-0 is the array formed by the remaining RAID space, and A-1 is the array
formed by the newly selected drives.
Step 4 Press the space bar to select the array to be configured, for example, A-1, and
press F10.
The RAID property configuration screen is displayed.
Step 5 Use ↑ and ↓ to select a property and press Enter to set the value.
Table 9-15 describes the parameters in Figure 9-64.
----End
Step 3 Use ↑ and ↓ to select drives to be added to the RAID array and press the space bar
to change the drive status from READY to ONLINE, as shown in Figure 9-67.
NOTE
You can select an idle drive and press F4 to configure it as a hot spare drive.
Step 5 Press the space bar to select the drive array to be configured, as shown in Figure
9-69.
Step 7 Use ↑ and ↓ to select a property and press Enter to set the value.
Table 9-16 describes the RAID array properties.
RAID: Select RAID and press Enter to display the available RAID levels. For details about the number of drives supported by different RAID levels, see Table 9-2.
Units: Select Units and press Enter to set the storage capacity unit. The unit can be MB, GB, or TB.
Size: Select Size and press Enter to set the RAID storage capacity.
NOTE
The size of a RAID 10 array is automatically generated and cannot be configured.
----End
----End
Screen Introduction
After you select Set Boot Drive, the boot device list is displayed, as shown in
Figure 9-72.
----End
9.6.3 Initialize
Initialize is used to initialize a RAID array.
Screen Description
Select Initialize. The RAID array list is displayed, as shown in Figure 9-73.
----End
9.6.4 Objects
Objects allows you to view or modify properties of controllers, virtual drives, and
physical drives.
Screen Description
Figure 9-75 shows the menu.
9.6.4.1 Adapter
Adapter allows you to view or modify controller properties.
Screen Introduction
After you select Adapter, the screen shown in Figure 9-76 is displayed. Table 9-17
describes the controller properties.
Chk Const Rate: I/O duty cycle during consistency check. The default value is 30%.
----End
Screen Description
Figure 9-77 shows the menu displayed after Virtual Drive is selected. Table 9-18
describes the available operations.
Operation Description
Step 6 Press Esc to exit the virtual drive selection screen and return to the Configuration
Utility main screen.
----End
NOTE
Step 4 Select Yes and press Enter to start the consistency check.
After the consistency check is complete, the message "100% Completed" is
displayed.
Step 5 Press Esc to exit the consistency check screen.
Step 6 Press Esc to exit the virtual drive selection screen and return to the Configuration
Utility main screen.
----End
Stripe Size: Size of the data strip on each member drive of the virtual drive. Default value: 64 KB.
Step 4 Use ↑ and ↓ to select the parameters to be set and press Enter.
Step 6 Press Esc to exit the virtual drive selection screen and return to the Configuration
Utility main screen.
----End
Screen Introduction
Figure 9-79 shows the menu displayed after Physical Drive is selected. Table
9-20 describes the available operations.
Parameter Description
Step 2 Use ↑ and ↓ to select a drive in READY state and press Enter.
Step 3 Select Make Hot Spare and press Enter.
A confirmation dialog box is displayed.
Step 4 Select Yes and press Enter.
The drive status changes to HOTSP.
Step 6 Press Esc to exit the Objects configuration menu and return to the Configuration
Utility main screen.
----End
Step 6 Press Esc to exit the Objects configuration menu and return to the Configuration
Utility main screen.
----End
Step 3 On the menu, choose Change Drv State and press Enter.
NOTE
You can use Change Drv State to change the status of a drive from ONLINE to FAIL or from HOTSP to READY.
Step 6 Press Esc to exit the Objects configuration menu and return to the Configuration
Utility main screen.
----End
Step 4 Press any key to exit the Physical Drive Information screen.
Step 5 Press Esc to exit the drive list.
Step 6 Press Esc to exit the Objects configuration menu and return to the Configuration
Utility main screen.
----End
9.6.5 Rebuild
Rebuild is used to rebuild RAID data when a drive is replaced with a new drive or
with the hot spare drive.
Rebuilding RAID
Step 1 On the Configuration Utility main screen, choose Rebuild and press Enter.
The drive list is displayed. See Figure 9-83.
Step 2 Use ↑ and ↓ to locate the drive to be synchronized, and press Space to select it.
The selected drive turns red. See Figure 9-84.
NOTE
Step 5 Press Esc to exit the synchronization menu and return to the Configuration Utility
main screen.
----End
Checking Consistency
Step 1 On the Configuration Utility main screen, choose Check Consistency and press
Enter.
The virtual drive list is displayed, as shown in Figure 9-85.
Step 2 Press the space bar to select the virtual drive to be checked and press F10.
The consistency check confirmation dialog box is displayed.
NOTE
Step 3 Select Yes and press Enter to start the consistency check.
After the consistency check is complete, the message "100% Completed" is
displayed.
Step 4 Press Esc to exit the consistency check screen and return to the Configuration
Utility main screen.
----End
----End
Function
Query information about the RAID array of the LSI SoftRAID.
Format
MegaCli64 -ldinfo -lall -acontroller_id
MegaCli64 -ldinfo -lVD_id -acontroller_id
Parameters
Parameter Description Value
Example
# Query information about all RAID arrays.
[root@localhost ~]# MegaCli64 -ldinfo -lall -a1
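To query a single RAID array, specify its virtual drive ID with the -l option, as in the second format above. The VD ID and controller ID used below (both 1) are examples only; replace them with the actual values.
# Query information about the RAID array whose VD ID is 1.
[root@localhost ~]# MegaCli64 -ldinfo -l1 -a1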
Function
Query information about all physical drives.
Format
MegaCli64 -pdlist -aall
MegaCli64 -pdInfo -PhysDrv[:slot_id] -acontroller_id
Parameters
Parameter Description Value
Example
# Query information about all physical drives.
[root@localhost ~]# MegaCli64 -pdlist -aall
Adapter #0
Adapter #1
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x0
Connected Port Number: 0
Inquiry Data: S3F4NY0HB00373 SAMSUNG MZ7KM480HMHQ-00005 GXM5104Q
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive Temperature :32C (89.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
DXM9203Q
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive Temperature :33C (91.40 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
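To query a single physical drive, use the -pdInfo format with the drive slot ID. The slot ID (0) and controller ID (1) used below are examples only; replace them with the actual values.
# Query information about the physical drive in slot 0.
[root@localhost ~]# MegaCli64 -pdInfo -PhysDrv[:0] -a1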
Function
Create a RAID 0, RAID 1, or RAID 5 array.
NOTE
For details about how to create a RAID 10 array, see 9.7.2.4 Creating a RAID 10 Array.
Format
MegaCli64 -cfgldadd -rlevel[:slot_id,:slot_id] -acontroller_id
Parameters
Parameter Description Value
Example
# Create a RAID 1 array.
[root@localhost ~]# MegaCli64 -cfgldadd -r1[:2,:3] -a1
Adapter 1: Created VD 1
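A RAID 5 array is created in the same way by specifying -r5 and listing at least three drives. The slot IDs (2, 3, and 4) and controller ID (1) used below are examples only; replace them with the actual values.
# Create a RAID 5 array.
[root@localhost ~]# MegaCli64 -cfgldadd -r5[:2,:3,:4] -a1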
Function
Create a RAID 10 array.
Format
MegaCli64 -cfgspanadd -rlevel -array0[:slot_id,:slot_id] -array1[:slot_id,:slot_id]
-acontroller_id
Parameters
Parameter Description Value
Example
# Create a RAID 10 array.
[root@localhost ~]# MegaCli64 -cfgspanadd -r10 -array0[:2,:3] -array1[:4,:5] -a1
Adapter 1: Created VD 1
Function
Delete a RAID array.
Format
MegaCli64 -cfglddel -lVD_id -acontroller_id
Parameters
Parameter Description Value
Example
# Delete a RAID array.
[root@localhost ~]# MegaCli64 -cfglddel -l1 -a1
Function
Pinpoint the position of a drive by setting its indicator status.
Format
MegaCli64 -PDLocate -Start -PhysDrv[:slot_id] -acontroller_id
MegaCli64 -PDLocate -Stop -PhysDrv[:slot_id] -acontroller_id
Parameters
Parameter Description Value
Example
# Turn on the indicator of the drive in slot 0.
[root@localhost ~]# MegaCli64 -PDLocate -Start -PhysDrv[:0] -a1
Adapter: 1: Device at EnclId-N/A SlotId-0 -- PD Locate Start Command was successfully sent to Firmware
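The corresponding command for turning off the indicator (shown here with the same slot and controller IDs) produces the second output line:
# Turn off the indicator of the drive in slot 0.
[root@localhost ~]# MegaCli64 -PDLocate -Stop -PhysDrv[:0] -a1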
Adapter: 1: Device at EnclId-N/A SlotId-0 -- PD Locate Stop Command was successfully sent to Firmware
Function
Configure or delete a global hot spare drive.
Format
MegaCli64 -pdhsp -set -physdrv[:slot_id] -acontroller_id
MegaCli64 -pdhsp -rmv -physdrv[:slot_id] -acontroller_id
Parameters
Parameter Description Value
Example
# Set the drive in slot 2 as a global hot spare drive.
[root@localhost ~]# MegaCli64 -pdhsp -set -physdrv[:2] -a1
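To remove the global hot spare setting, use the -rmv option with the same parameters. The slot ID (2) and controller ID (1) used below are examples only; replace them with the actual values.
# Remove the global hot spare setting of the drive in slot 2.
[root@localhost ~]# MegaCli64 -pdhsp -rmv -physdrv[:2] -a1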
10 PM8060
10.1 Overview
A PM8060 RAID controller card provides two 12 Gbit/s SAS wide ports and
supports PCIe 3.0 ports.
In addition to improving system performance, the controller card provides fault-tolerant data storage across multiple drive partitions and supports simultaneous read/write operations on multiple drives, which speeds up access to data on the drives.
A PM8060 RAID controller card supports boot and configuration in legacy and
UEFI modes.
The built-in cache improves performance as follows:
● Data is written directly to the cache. The RAID controller card flushes data to the drives after a certain amount of data has accumulated in the cache, so data is written in batches. In addition, because the cache is much faster than a drive, the overall data write speed improves.
● Data is directly read from the cache, reducing the response time from 6 ms to
less than 1 ms.
Figure 10-1 shows a PM8060, which is installed in a PCIe slot of a server.
Indicators
Table 10-1 describes indicators on the PM8060.
10.2 Functions
NOTE
Arrays are also called RAID arrays. Each array can contain one or more LDs.
Table 10-2 lists the RAID levels and number of drives supported by the PM8060.
RAID Level    Number of Drives    Failed Drives Allowed
Volume        1                   0
RAID 0        2 to 128            0
RAID 1        2                   1
RAID 5        3 to 32             1
RAID 6        4 to 32             2
NOTE
● An array consists of multiple LDs. For example, a RAID 50 array consists of two RAID 5
LDs and the number of LDs is 2.
● Failed drives cannot be adjacent.
● Each LD of a RAID 10 or RAID 50 array allows for only one failed drive.
● Each LD of a RAID 60 array allows for a maximum of two failed drives.
● The PM8060 supports non-standard RAID 1E, which allows for only one failed drive and
therefore is not recommended.
NOTE
● The HDDs and SSDs cannot be used as the hot spare drives of each other.
● The HDDs include SAS HDDs and SATA HDDs. If the member drives of a RAID array are
SAS drives, the SATA drives can be used as dedicated hot spare drives. If the member
drives are SATA drives, the SAS drives cannot be used as dedicated hot spare drives.
● An idle drive can be configured as a hot spare drive, but a RAID member drive cannot.
● A hot spare drive must be a SATA or SAS drive, and it must have at least the capacity of
the RAID member drive with the largest capacity.
● All RAID levels except RAID 0 support hot spare drives.
● You cannot directly change a global hot spare drive to a dedicated hot spare drive or
vice versa. You need to set the drive to idle state, and then set it as a global or
dedicated hot spare drive as required.
NOTE
After removing a drive, wait at least 30 seconds before reinstalling it. Otherwise, the drive cannot be identified.
feature, you can install the OS only on the virtual drives configured under the
RAID controller card.
NOTE
If a drive in passthrough mode is faulty (not in the healthy status), the Fault indicator on
the drive will be lit and the iBMC will generate an alarm.
10.2.7 Initialization
Drives mounted to the PM8060 support the following initialization modes:
Drives in the Ready state can be used as member drives of an array or initialized to pass-
through drives in the Raw state.
● Secure Erase Drives: erases data from a specified drive and formats the drive
completely, which takes a long time.
● Secure ATA Erase: erases data from a specified SATA drive and formats the
drive completely, which takes a long time.
● Uninitialize Drives: initializes specified drives to Raw.
● Adding drives: Adds new drives to an existing array for capacity expansion.
● Increasing available array space: Increases the percentage of available space
of an array for capacity expansion if the capacity of all member drives in the
array is not fully utilized.
Before using this feature, download the ARCCONF Command Line Utility released
by PMC. For details, see the documents downloaded from the PMC official
website.
● Always Read Ahead: The PM8060 caches the data that follows the data
being read for faster access. This policy reduces drive seeks and shortens read
time from over 6 ms to less than 1 ms.
● Write Back: After the cache receives host data, the PM8060 signals the host
that the data transmission is complete.
Data is written directly to the cache. The RAID controller card flushes data to the drives after a certain amount of data has accumulated in the cache, so data is written in batches. In addition, because the cache is much faster than a drive, the overall data write speed improves.
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility screen of the PM8060.
1. During the server startup, press Ctrl+A when the message "Press <Ctrl><A>
for PMC RAID Configuration Utility" is displayed.
The screen shown in Figure 10-4 is displayed. For details about the
parameters, see Table 10-3.
Disk Utilities: Views the current drive list and performs operations on specific drives, such as turning on indicators, formatting drives, and verifying data.
----End
Additional Information
Related Tasks
When the server is restarted and before the RAID configuration screen is
displayed, you can view the firmware version, PCIe slot information, capacitor
information, drives connected to the controller, and information about created
RAID arrays. See Figure 10-5.
Related Concepts
None
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-8.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-6.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area.
NOTE
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 2 to Step 4, as shown in Figure 10-6.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-6, select Manage Array and press Enter.
The array list is displayed.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-12.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-14.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-12.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-15.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
2. Select RAID 0.
3. Configure the other array parameters according to Table 10-5.
4. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the creation is complete, the array management screen is displayed.
Step 5 (Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-12.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-12, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-5.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-18.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-20.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-18.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-21.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
2. Select RAID 1.
3. Configure the other array parameters according to Table 10-6.
4. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the creation is complete, the array management screen is displayed.
Step 5 (Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-18.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-18, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-6.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-24.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-26.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-24.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-27.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
2. Select RAID 5.
3. Configure the other array parameters according to Table 10-7.
4. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the creation is complete, the array management screen is displayed.
Step 5 (Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-24.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-24, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-7.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-30.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-32.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-30.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-33.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
2. Select RAID 6.
3. Configure the other array parameters according to Table 10-8.
4. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the creation is complete, the array management screen is displayed.
Step 5 (Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-30.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-30, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-8.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-36.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-38.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-36.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-39.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-36.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-36, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-9.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-42.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-44.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-42.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-45.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-42.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-42, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-10.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.3.2 Logging In to the Configuration Utility.
Step 2 Initialize member drives.
Before an array is created, a drive that contains partition information or is fully used by another array is grayed out and cannot be selected. Initialize such a drive if you want to use it as a member drive of the new array.
1. Select Logical Device Configuration and press Enter.
The array configuration main screen is displayed, as shown in Figure 10-48.
3. In the Select drives for initialization area, select the drives to be initialized
by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-50.
To cancel the selected drives, you can press Del.
4. Press Enter.
A confirmation dialog box is displayed.
NOTICE
5. Enter Y.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-48.
Step 3 Select the member drives.
1. Select Create Array and press Enter.
The drive list is displayed.
2. In the Select drives to create Array area, select the drives to be added to the
array by pressing the space or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 10-51.
Parameter Description
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Write Caching Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
NOTE
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
1. After the array is created, repeat Step 3 to Step 4, as shown in Figure 10-48.
You can create multiple logical drives based on the actual situation.
Step 6 Check the configuration result.
1. On the screen shown in Figure 10-48, select Manage Array and press Enter.
The array list is displayed, as shown in Table 10-11.
NOTICE
A single drive working as a boot device has a higher priority than an array.
Prerequisites
Conditions
Data
Procedure
Setting Boot Devices
Step 1 On the configuration tool main screen, select Logical Device Configuration and
press Enter.
Step 3 Select an array that you want to configure as a boot device, and press Ctrl+B to
change the boot order.
NOTE
When an array is in the rebuild or initialization state, its boot order cannot be changed.
Step 4 Press Esc when prompted to exit the configuration screen and restart the server.
----End
Related Operations
If a server is configured with drive controllers of different chips, set the drive boot
device on the BIOS.
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
Prerequisites
Conditions
None
Data
Data preparation is not required for this operation.
Documents
For details about how to log in to the screen by using the remote virtual console,
see the user guides of specific servers.
Procedure
Step 1 Set the EFI mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Device scanning starts, and the controller list is displayed, as shown in Figure
10-56.
Operation Description
----End
Scenarios
Create SIMPLE_VOL (single-drive RAID 0).
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-58.
2. Select a drive that you want to add to the array and press Enter, as shown in
Figure 10-62.
NOTE
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
– This operation can be performed only when the array size is smaller than the
maximum value.
– A maximum of 64 logical drives can be created for each array.
5. Repeat Step 2 to Step 4, as shown in Figure 10-58.
You can create multiple logical drives based on the actual situation.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-65.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-65.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-69.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-73.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-73.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-77.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-81.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-81.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-85.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-89.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-89.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-93.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-97.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-97.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-101.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-105.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-105.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-109.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
The PM8060 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Step 1 Back up data on drives and access the Configuration Utility main screen. For
details, see 10.4.1 Logging In to the Management Screen.
Step 2 Initialize member drives.
Before being added to a RAID, a drive must be initialized. The procedure is as
follows:
1. In Figure 10-57, select Logical Device Configuration and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-113.
NOTICE
5. Press Enter.
The Initialization begins and lasts for about 10 seconds.
6. Press any key to return to the screen shown in Figure 10-113.
2. Select a drive that you want to add to the array and press Enter.
Select member drives, as shown in Figure 10-117.
NOTE
The [X] symbol after a drive indicates that the drive is selected.
Stripe Size Indicates the stripe size (64 KB, 128 KB, 256 KB, 512
KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of
this parameter cannot be changed.
Parameter Description
Write Cache Indicates the write cache policy, which can be set as
follows:
– Enable Always: RAID write cache is still enabled in
case of no capacitor protection, and data loss may
be caused if a power failure occurs.
NOTE
Exercise caution when selecting Enable Always.
– Enable With Backup Unit: RAID write cache is
disabled in case of no capacitor protection or in case
that capacitor protection is not ready. This setting is
recommended.
– Disable: RAID write cache is always disabled.
Create RAID via Specifies the actions performed after creating an array.
– Quick Init and Skip Init can be performed for RAID
0, 1, and volumes. Quick Init allows you to quickly
initialize the array data, and Skip Init allows you to
recover the array and to set the RAID properties
among different drives without clearing the data.
– Besides Quick Init and Skip Init, Build, Verify, and
Clear can also be performed for arrays with a
redundancy function and allow you to initialize and
clear the array data.
● This operation can be performed only when the array size is smaller than the maximum
value.
● A maximum of 64 logical drives can be created for each array.
----End
● For details about the settings on the Grantley platform, see "Setting the Boot
Device" in the Huawei Server Grantley Platform BIOS Parameter
Reference.
● For details about the settings on the Brickland platform, see "Setting the Boot
Device" in the Huawei Server Brickland Platform BIOS Parameter
Reference.
Scenarios
After drives of a server are configured with array properties, hot spare drives are
configured to increase security and reduce impact on services caused by drive
faults.
When the PM8060 is in Legacy or Dual mode, hot spare drives cannot be
configured on the management screen.
Prerequisites
Conditions
Data
Tools
The hot spare drive must be configured by using the official ARCCONF released by
the PMC and installed on the operating system.
Procedure
Step 1 Open the arcconf tool in the operating system.
● channel id: indicates the channel ID of the drive to be configured. The default
value is 0.
● device id: indicates the slot ID of the drive to be configured. (When the drive
backplane is a pass-through one, the device id is the same as the slot ID of
the drive. When the drive backplane is an expander one, the device id equals
the slot ID of the drive plus eight.)
● LM, LN: indicates the designated RAID.
When logicaldrive is specified, the hot spare drive of a designated RAID is
configured. When logicaldrive is not specified, the global hot spare drive is
configured.
Figure 10-121 shows the process of configuring a hot spare drive.
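For reference, the hot spare drive can be configured with a command of the form described in 10.10.2.11 Setting a Hot Spare Drive. The controller ID, channel ID, and slot ID below are sample values; replace them with the actual values of your server (for an expander backplane, the device ID is the drive slot ID plus eight).
# Set the drive in channel 0, slot 3 as a global hot spare drive.
domino:~# ./arcconf setstate 1 device 0 3 hsp
# Set the same drive as a dedicated hot spare drive of logical drive 0.
domino:~# ./arcconf setstate 1 device 0 3 hsp logicaldrive 0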
----End
For details, see 10.10.2.18 Querying the Status of a Drive. In the command
output, if the value of State for a drive is Global Hot-Spare, the drive is a global
hot spare drive. If the value is Dedicated Hot-Spare, it is a dedicated hot spare
drive. Figure 10-122 shows a global hot spare drive and Figure 10-123 shows a
dedicated hot spare drive.
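For example, the following command (see 10.10.2.18 Querying the Status of a Drive) lists the physical drives of controller 1 so that you can check the State field of each drive; the controller ID 1 is a sample value:
domino:~# ./arcconf getconfig 1 pd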
Prerequisites
Conditions
● You have downloaded and installed the ARCCONF tool on the operating
system.
Data
Data preparation is not required for this operation.
Tools
Procedure
Scenarios
If the OS is installed on a raw drive, set the drive as the boot drive. Otherwise, the
OS cannot be started.
NOTICE
A drive working as the boot drive has a higher priority than an array.
Prerequisites
Conditions
Data
Data on the target drive will be lost during the restoration. Therefore, back up the
data before the restoration.
Procedure
Restoring the Target Drive
This operation will delete the array partition information and OS residual
partitions from the drive so that the OS can directly use the drive.
NOTE
This operation will cause data loss on the drive. Back up the data before this operation.
Step 1 On the array configuration and management main screen, select Logical Device
Configuration and press Enter.
The array configuration screen is displayed. See Figure 10-124.
2. In the Select drive to set as boot device area, press the space bar to select
the drive whose information has been restored in Figure 10-125. The drive is
displayed in the Selected Drives area, as shown in Figure 10-127.
3. Press Enter.
A confirmation dialog box is displayed.
4. Press Y.
The drive is set as the boot drive.
----End
Additional Information
Related Tasks
After the configuration is complete, go to the drive list again. In the Selected
Drives area, check that the drive has been set as the boot drive.
Related Concepts
None
Scenarios
When the PM8060 is in Legacy or Dual mode, array capacity cannot be expanded
on the management screen.
The capacity of a RAID array can be expanded using either of the following
methods:
● Adding drives: Adds new drives to an existing array for capacity expansion.
● Increasing available array space: Increases the percentage of available space
of an array for capacity expansion if the capacity of all member drives in the
array is not fully utilized.
Prerequisites
Conditions
The following conditions must be met before you add a drive for array capacity
expansion:
● The server has drives that have not been added to an array.
● You have downloaded and installed the arcconf tool on the operating system.
The following conditions must be met before you increase the available array
space for capacity expansion:
Data
Tools
The RAID capacity cannot be expanded on the BIOS screen. To expand the RAID
capacity, you must install the Adaptec ARCCONF Command Line Utility
(ARCCONF for short) released by Microsemi on the operating system.
Procedure
● Add drives.
a. Open ARCCONF in the operating system.
b. Run the following command:
Format: arcconf modify controller_ID from RAID_ID to stripesize
stripsizevalue capacity RAID_level channel id1 device id1 ... channel idN
device idN
Parameters in the command are described as follows:
● controller ID: indicates the controller ID.
● RAID_ID: indicates the ID of the array whose capacity is to be expanded.
● stripsizevalue: indicates the strip size.
● capacity: indicates the capacity after the expansion.
● RAID_level: indicates the RAID level after the expansion.
● channel id: indicates the channel ID of the drive to be added. The default
value is 0.
● device id: indicates the slot ID of the drive to be added.
Example:
./arcconf modify 1 from 1 to stripesize 64 10240 5 0 1 0 2 0 3
Prerequisites
Conditions
● You have created an array.
● You have performed the operations in 10.3.2 Logging In to the
Configuration Utility.
Data
The data in the array to be deleted has been backed up.
Procedure
Step 1 On the configuration tool main screen, select Logical Device Configuration and
press Enter.
The array configuration main screen is displayed. See Figure 10-128.
Step 3 Select the array to be deleted and press Del to delete the array.
----End
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 10.3.2 Logging In to
the Configuration Utility.
Step 2 On the configuration tool main screen, select Controller Settings and press Enter.
The controller operation menu is displayed. See Figure 10-130.
Parameter Description
----End
Scenarios
After drives of a server are configured with array properties, hot spare drives are
configured to increase security and reduce impact on services caused by drive
faults.
Prerequisites
Conditions
Data
Procedure
Step 1 Go to the page for managing hot spare drives.
1. On the Controller Operations screen, select Logical Device Configuration
and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-132.
----End
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 10.4.1 Logging In to
the Management Screen.
Step 2 Go to the page for managing hot spare drives.
1. On the Controller Operations screen, select Logical Device Configuration
and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
10-135.
----End
Prerequisites
Conditions
● The server has idle drives.
● You have performed the operations in 10.4.1 Logging In to the Management
Screen.
Data
Procedure
Step 1 On the Controller Operations screen, select Logical Device Configuration and
press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure 10-138.
Step 3 Select the array to be operated and press Enter. The screen shown in Figure
10-140 is displayed. Table 10-22 describes the operations on the screen.
Parameter Description
NOTE
----End
Scenarios
If the number of member drives in a RAID array is insufficient, you can delete a
hot spare drive to enable it to function as a common drive.
Procedure
Step 1 Access the Configuration Utility main screen. For details, see 10.4.1 Logging In to
the Management Screen.
Step 2 On the Controller Operations screen, select Logical Device Configuration and
press Enter.
Step 4 Select the array to be operated and press Enter. The screen shown in Figure
10-144 is displayed. Table 10-23 describes the operations on the screen.
----End
Prerequisites
Conditions
● You have created an array.
● You have performed the operations in 10.4.1 Logging In to the Management
Screen.
Data
The data in the array to be deleted has been backed up.
Procedure
Step 1 Go to the array property management screen.
1. On the Controller Operations screen, select Logical Device Configuration
and press Enter.
----End
10.7 Troubleshooting
This section describes solutions to drive faults, RAID controller card faults, and
battery or capacitor faults. For other situations, see the Huawei Server
Maintenance Guide.
If a drive in pass-through mode is faulty (not in the healthy status), the Fault indicator
on the drive will be lit and the iBMC will generate an alarm.
Solution
NOTE
----End
Solution
Step 1 Replace the RAID controller card. Then, check whether the alarm is cleared.
For details about how to replace a RAID controller card, see the user guide
delivered with your server.
● If yes, go to Step 2.
● If no, go to Step 3.
Step 2 Restart the server. After the foreign configuration is automatically imported, check
whether the RAID configuration is the same as that before the replacement.
● If yes, no further action is required.
● If no, go to Step 3.
Step 3 Contact Huawei technical support.
----End
Solution
Step 1 Restart the server and observe the supercapacitor status displayed during the
server startup.
----End
Procedure
Step 1 Set the Legacy mode. For details, see A.1.3 Setting the Legacy Mode.
Step 2 Log in to the Configuration Utility screen of the PM8060.
1. During the server startup, press Ctrl+A when the message "Press <Ctrl><A>
for PMC RAID Configuration Utility" is displayed.
The screen shown in Figure 10-149 is displayed. For details about the
parameters, see Table 10-24.
Disk Utilities Views the current drive list and performs operations
on specific drives, such as turning on indicators,
formatting drives, and verifying data.
----End
Additional Information
Related Tasks
When the server is restarted and before the RAID configuration screen is
displayed, you can view the firmware version, PCIe slot information, capacitor
information, drives connected to the controller, and information about created
RAID arrays. See Figure 10-150.
Related Concepts
None
Screen Introduction
On the Configuration Utility screen shown in Figure 10-149, select Logical Device
Configuration to display the array configuration main menu, as shown in Figure
10-151. Table 10-25 describes the parameters.
Screen Description
Select Main Menu > Manage Arrays, as shown in Figure 10-151. The array list is
displayed, as shown in Figure 10-152.
----End
Property Description
----End
Screen Introduction
Select Main Menu > Create Array, as shown in Figure 10-151. The array list is
displayed, as shown in Figure 10-154.
Stripe Size Stripe size (64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of this
parameter cannot be changed.
Parameter Description
If all the drives to be added to the array are SSDs, a dialog box is displayed asking you
whether to disable cache settings.
The system creates an array and performs the operations defined in Create RAID
via.
----End
Screen Introduction
Select Main Menu > Initialize Drives, as shown in Figure 10-151. The disk list is
displayed, as shown in Figure 10-156.
Procedure
Step 1 In the Select drives for initialization area, select the drives to be initialized by
pressing the space bar or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in Figure
10-157.
To cancel the selected drives, you can press Del.
NOTICE
Initialization clears all the data on the RAID member drives. Exercise caution
when performing this operation.
Step 3 Enter Y.
The initialization begins and lasts for about 10 seconds.
Step 4 Press any key to return to the Main Menu screen.
----End
NOTICE
This operation will delete all data from the drive. Before performing the operation,
check whether the data needs to be backed up.
Screen Introduction
Select Main Menu > Secure Erase, as shown in Figure 10-151. The drive list is
displayed, as shown in Figure 10-158.
Procedure
Step 1 In the Select drives for secure erase area, select the desired drives by pressing
the space bar or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in Figure
10-159.
NOTE
Step 3 Enter Y.
The system starts to securely erase the drive. During this process, no other
operations can be performed on the drive.
----End
When a specified drive is to be used as a pass-through drive, you can use this
function to uninitialize the drive.
NOTE
This operation will delete all data from the drive. Before performing the operation, check
whether the data needs to be backed up.
Screen Introduction
Select Main Menu > Uninitialize Drives, as shown in Figure 10-151. The drive list
is displayed, as shown in Figure 10-160.
Procedure
Step 1 In the Select drives for uninitialization area, select the drives to be
uninitialized by pressing the space bar or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in Figure
10-161.
To cancel the selected drives, you can press Del.
NOTICE
Uninitialization clears all the data on the selected drives. Exercise caution
when performing this operation.
Step 3 Enter Y.
The uninitialization begins and lasts for about 10 seconds.
Step 4 Press any key to return to the Main Menu screen.
----End
Screen Introduction
Select Main Menu > Select Boot Device, as shown in Figure 10-151. The drive
list is displayed, as shown in Figure 10-162.
Procedure
Step 1 In the Select drives to set as boot device area, select the desired drives by
pressing the space bar or Insert button.
The selected drives are displayed in the Selected Drives area, as shown in Figure
10-163.
To cancel the selected drives, you can press Del.
----End
Screen Introduction
Select Controller Settings in Figure 10-149. The controller operation menu is
displayed, as shown in Figure 10-164. Table 10-28 describes the parameters.
Screen Introduction
On the Options screen shown in Figure 10-164, select Controller Configuration
to display the controller property screen, as shown in Figure 10-165. The
parameters are described in Table 10-29.
Parameter Description
Drives Write Cache Indicates the drive cache write policy. The default value
is Drive Specific. The value options are as follows:
● Drive Specific: retains the defaults.
● Enable All: enables the cache write function of all
drives.
● Disable All: disables the cache write function of all
drives.
Runtime BIOS Indicates the controller Option ROM status. The value
options are as follows:
● Enabled: enables the controller Option ROM. The
default value is Enabled.
● Disabled: disables the controller Option ROM.
Parameter Description
Device based BBS Support Indicates the boot device reported to the BIOS. The
default value is Disabled. The value options are as follows:
● Enabled: reports drive or RAID device information to
the BIOS.
● Disabled: reports only RAID device information to the
BIOS.
Alarm Control Indicates the onboard buzzer status. The value options
are as follows:
● Enabled: enables the onboard buzzer. The default
value is Enabled.
● Disabled: disables the onboard buzzer.
Default Background Task Priority Indicates the default background task
priority. The value options are as follows:
● High
● Medium
● Low
Parameter Description
Controller Mode Indicates the controller working mode. The default value
is RAID: expose RAW. The value options are as follows:
● RAID: expose RAW: If drives are not configured with
RAID properties, the controller card reports them to
the OS as raw drives. Therefore, the drives can be
operated on the OS directly.
● RAID: hide RAW: The controller card reports only the
drives with the RAID properties.
● HBA: No RAID level is supported, and all drives are
reported as raw drives.
● Auto Volume: If drives are not configured with the
RAID properties but contain OS partitions, the
controller card reports them to the OS as raw drives.
If the drives are not configured with the RAID
properties and do not contain the OS partitions, the
controller card reports them after volumes are created
in the drives by using the controller card.
----End
Screen Introduction
On the Options screen shown in Figure 10-164, select Advanced Configuration
to display the advanced controller property screen, as shown in Figure 10-166.
Table 10-30 describes the parameters.
Parameter Description
Power Management You can set the power supply parameters of the
controller.
Stay Awake Start Time from which the drive stays in the active state. That
is, the drive will not spin down from the specified time.
Stay Awake End Time after which the drive will no longer stay in the
active state. That is, the drive will not spin down before
the specified time.
----End
Screen Introduction
On the Options screen shown in Figure 10-164, select Backup Unit Status to
display the supercapacitor property screen, as shown in Figure 10-167. Table
10-31 describes the parameters.
Parameter Description
Screen Introduction
On the Configuration Utility screen shown in Figure 10-149, select Disk Utilities
to display the drive list, as shown in Figure 10-168.
Select a drive and press Enter to display the operation menu, as shown in Table
10-32.
Format Disk Formats the drive, which takes a long time and clears
all data in the drive. Formatting cannot be stopped
once started.
NOTE
This operation cannot be stopped once started, takes a long
time, and will delete all data from the drive. Therefore, this
operation is not recommended.
Verify Disk Media Checks the drive for bad blocks to prevent bad blocks from
causing errors during use. Users can stop the operation by pressing Esc.
Locating a Drive
Step 1 Move the cursor to a specified drive and press Enter.
Step 2 On the displayed menu, select Identify Drive.
NOTE
----End
10.8.5 Exit
Click Exit to exit the configuration screen.
Figure 10-169 shows the screen.
● Click Yes to exit the configuration screen and restart the server.
● Click No to return to the configuration screen.
Scenarios
This section describes how to log in to the configuration management screen of
the PM8060.
Prerequisites
Conditions
None
Data
Documents
For details about how to log in to the screen by using the remote virtual console,
see the user guides of specific servers.
Procedure
Step 1 Set the EFI mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
Step 2 Log in to the PM8060 management screen.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select PMC maxView Storage Manager and press Enter.
The PMC maxView Storage Manager screen is displayed, as shown in Figure
10-55.
Operation Description
----End
Screen Introduction
On the Configuration Utility screen shown in Figure 10-57, select Logical Device
Configuration to display the Logical Device Configuration screen, as shown in
Figure 10-173. The parameters are described in Table 10-34.
Parameter Description
Initialize a drive.
NOTE
This operation will delete all data from the drive. Before performing the operation, check
whether the data needs to be backed up.
Step 1 Based on the actual situation, select Initialize Drives, Secure Erase Drives, Secure
ATA Erase, or Uninitialize Drives and press Enter.
The drive list is displayed.
Step 2 Select a drive and press Enter.
The drive status changes to [X].
Step 3 Select SUBMIT and press Enter.
----End
Screen Introduction
Select Logical Device Configuration > Manage Arrays, as shown in Figure
10-173. The array list is displayed, as shown in Figure 10-174.
Select the array to be operated and press Enter. The screen shown in Figure
10-175 is displayed. Table 10-35 describes the operations on the screen.
----End
Deleting an Array
Step 1 Select Delete Array and press Enter.
----End
NOTE
Creating an Array
Step 1 Select Logical Device Configuration > Create Array, as shown in Figure 10-173.
The drive list is displayed, as shown in Figure 10-178.
Step 2 Select drives to be added to the RAID array and press Enter.
NOTE
Stripe Size Stripe size (64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB).
When Array Type is RAID 1 or Volume, the value of this
parameter cannot be changed.
Parameter Description
----End
----End
----End
Screen Introduction
On the Configuration Utility screen shown in Figure 10-172, select Controller
Settings to display the Controller Settings screen, as shown in Figure 10-182.
Table 10-38 describes the parameters.
Screen Introduction
On Controller Settings shown in Figure 10-182, select Controller Configuration
to display the Controller Configuration screen, as shown in Figure 10-183. Table
10-39 describes the parameters.
Parameter Description
Parameter Description
Parameter Description
----End
Screen Introduction
On Controller Settings shown in Figure 10-182, select Advanced Configuration
to display the Advanced Configuration screen, as shown in Figure 10-184. Table
10-40 describes the parameters.
----End
Screen Introduction
On Controller Settings shown in Figure 10-182, select Battery Backup
Information to display the Battery Backup Information screen, as shown in
Figure 10-185. Table 10-41 describes the parameters.
Screen Introduction
On Controller Settings shown in Figure 10-182, select Disk Utilities to display
the Disk Utilities screen, as shown in Figure 10-186.
Select the drive to be operated and press Enter. The drive operation screen is
displayed, as shown in Figure 10-187. The parameters are described in Table
10-42.
Parameter Description
Format Disk Formats the drive, which takes a long time and clears all
data in the drive. Formatting cannot be stopped once
started.
Verify Disk Media Checks the drive for bad blocks to prevent bad blocks from
causing errors during use. You can stop the operation by pressing Esc.
Write Cache Indicates the drive cache write policy. The value options are
as follows:
● Write-Back (Enable): After the cache receives all data,
the controller card sends the host a message indicating
that data transmission is complete.
● Write-Through (Disable): After the drive receives all
data, the controller card sends the host a message
indicating that data transmission is complete.
Locating a Drive
Step 1 On the displayed menu, select Identify Drive.
NOTE
----End
Formatting a Drive
NOTICE
This operation cannot be stopped once started, takes a long time, and will delete
all data from the drive. Therefore, this operation is not recommended.
----End
----End
10.9.5 Administration
On the Administration screen, you can upgrade RAID controller card firmware
and view firmware parameters.
Screen Introduction
On the Configuration Utility screen shown in Figure 10-172, select
Administration to display the Administration screen, as shown in Figure 10-188.
Table 10-43 describes the parameters.
----End
10.9.6 Exit
Press ESC consecutively on any screen to exit Device Manager. The screen shown
in Figure 10-189 is displayed. Select options based on the site requirements.
Step 3 Decompress the downloaded file to obtain the tool packages for different
operating systems (OSs).
----End
Installing ARCCONF
The ARCCONF installation method varies depending on the OS type. The following
uses Windows and Linux as examples to describe the ARCCONF installation
procedure. For the installation procedures for other OSs, see the ARCCONF user
guide, which can be found at Microsemi's website.
● Installing ARCCONF in Windows
a. Upload the tool package applicable to Windows to the server OS.
b. Go to the command-line interface (CLI).
c. Run a command to go to the directory where the ARCCONF tool package
resides.
For Windows, ARCCONF does not require installation. You can directly run
RAID controller card management commands.
● Installing ARCCONF in Linux
a. Use a file transfer tool (for example, PuTTY) to upload the ARCCONF
package applicable to Linux to the server OS.
b. Run the rpm -ivh Arcconf-xxx.rpm command to install ARCCONF.
When the installation is complete, you can run RAID controller card
management commands.
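The following is a minimal sketch of the Linux installation and verification steps described above. The package file name follows the placeholder used in this section and depends on the ARCCONF version you downloaded; the controller ID 1 in the verification command is a sample value.
# Install the ARCCONF RPM package.
rpm -ivh Arcconf-xxx.rpm
# Verify that ARCCONF can detect the RAID controller card.
arcconf list 1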
Function
Query RAID controller card and drive information.
Format
arcconf list controller_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Query the information about the hard disk of controller 1.
[root@localhost ~]# arcconf list 1
Controllers found: 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
Controller ID : Status, Slot, Mode, Name, SerialNumber, WWN
----------------------------------------------------------------------
Controller 1: : Optimal, Slot 1, RAID (Expose RAW), PM8060-RAID, 7A45F30016A,
50000D1E00189F30
----------------------------------------------------------------------
Array Information
----------------------------------------------------------------------
Array ID : Status (Interface, TotalSize MB, FreeSpace MB)
----------------------------------------------------------------------
Array 0 : Ok (SATA SSD, 305152 MB, 0 MB)
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical ID : Status (RAID, Interface, Size MB) Name
----------------------------------------------------------------------
Logical 0 : Optimal (0, Data, 305190 MB) LogicalDrv 0
----------------------------------------------------------------------
maxCache information
----------------------------------------------------------------------
----------------------------------------------------------------------
Physical Device information
----------------------------------------------------------------------
Physical ID : State (Interface, BlockSize, SizeMB, Vendor, Model, Type) WWN, [Location]
----------------------------------------------------------------------
Physical 0,0 : Online (SATA, 512 Bytes, 152627MB, ATA , INTEL SSDSA2BW16, Solid State
Drive) 30000D1E00189F30, [Enclosure Direct Attached, Slot 0(Connector 0:CN0)]
Physical 0,1 : Online (SATA, 512 Bytes, 1526185MB, ATA , SSDSC2BB016T7H , Solid State
Drive) 30000D1E00189F32, [Enclosure Direct Attached, Slot 1(Connector 0:CN0)]
Physical 0,4 : Ready (SAS, 512 Bytes, 1716957MB, TOSHIBA , AL14SEB18EQ , Hard Drive)
500003969812C842, [Enclosure Direct Attached, Slot 4(Connector 1:CN1)]
Physical 0,5 : Ready (SAS, 512 Bytes, 858483MB, HGST , HUC101890CSS200 , Hard Drive)
5000CCA0361C09C1, [Enclosure Direct Attached, Slot 5(Connector 1:CN1)]
Physical 0,6 : Ready (SATA, 512 Bytes, 457862MB, ATA , MTFDDAK480TDC-1A, Solid State
Drive) 30000D1E00189F39, [Enclosure Direct Attached, Slot 6(Connector 1:CN1)]
Physical 0,7 : Ready (SATA, 512 Bytes, 915715MB, ATA , MTFDDAK960TDC-1A, Solid State
Drive) 30000D1E00189F3B, [Enclosure Direct Attached, Slot 7(Connector 1:CN1)]
Physical 2,2 : Ready (SES2, Not Applicable, Not Applicable, MSCC , Virtual SGPIO, Enclosure
Services Device) 50000D1E00189F30, [Not Applicable]
Function
Set the number of drives that spin up simultaneously upon power-on.
Format
arcconf setpower controller_id spinup internal external
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set drive spinup parameters.
domino:~# ./arcconf setpower 1 spinup 4 0
Controllers found: 1
Command completed successfully.
Function
Set power saving parameters for the virtual drive.
Format
arcconf setpower controller_id ld ld_id slowdown timer1 poweroff timer2
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set power saving parameters.
domino:~# ./arcconf setpower 1 ld 0 slowdown 3 poweroff 5
Controllers found: 1
Command completed successfully.
Function
Set the initialization function for physical and virtual drives.
Format
arcconf task start controller_id logicaldrive ld_id option [noprompt]
arcconf task stop controller_id logicaldrive ld_id
arcconf task start controller_id device channel_id slot_id option [noprompt]
arcconf task stop controller_id device channel_id slot_id
Parameters
Parameter Description Value
Usage Guidelines
If the noprompt parameter is contained, the command is executed forcibly.
NOTICE
Examples
# Initialize the drive in slot 3.
domino:~# ./arcconf task start 1 device 0 3 clear
Controllers found: 1
Clear of a Hard drive is a long process.
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Clearing Channel 0, Device 3.
Command completed successfully.
Function
Set the task priority and make the priority of the current task take effect.
Format
arcconf setpriority controller_id priority [current]
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the priority of the background task to high and make the setting take effect.
domino:~# ./arcconf setpriority 1 high current
Controllers found: 1
Command completed successfully.
Function
Query and set the performance mode of a RAID controller card.
Format
arcconf setperform controller_id mode
arcconf getperform controller_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the performance mode of the RAID controller card to Dynamic.
domino:~# ./arcconf setperform 1 1
Controllers found: 1
Command completed successfully.
Function
Set the working mode for a RAID controller card.
Format
arcconf setcontrollermode controller_id mode
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the working mode to RAID: expose RAW.
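The command line for this example is not shown above. Based on the format in this section, it takes the following form, where 1 is a sample controller ID and <mode> is a placeholder for the numeric value that corresponds to RAID: expose RAW in your ARCCONF version (confirm the mapping in the ARCCONF help before running the command):
domino:~# ./arcconf setcontrollermode 1 <mode>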
Function
Set a SATA password to prevent SATA drive data from being securely erased.
Format
arcconf atapassword controller_id set new_password channel_id slot_id
arcconf atapassword controller_id clear current_password channel_id slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the password of the SATA drive in slot 0 to huawei.
domino:~# ./arcconf atapassword 1 set huawei 0 0
Controllers found: 1
Setting the ATA security password on the SATA harddrive
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Command completed successfully.
Function
Create and delete a RAID array.
Format
arcconf create controller_id logicaldrive stripesize stripesize name ld_name
priority ld_priority method mode capacity raid_level channel_id1 slot_id1
channel_id2 slot_id2...channel_idN slot_idN [noprompt]
arcconf delete controller_id logicaldrive ld_id noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Examples
# Create a RAID 5 array.
domino:~# ./arcconf create 1 logicaldrive stripesize 64 name test01 priority high method quick 102400
5 0 0 0 1 0 2
Controllers found: 1
For arrays with all SSD drives, caching is not recommended. Disable all cache settings?(Y/N)
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Do you want to add a logical device to the configuration?
Press y, then ENTER to continue or press ENTER to abort: y
Creating logical device: test01
Command completed successfully.
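The output below belongs to the delete example; the command that produces it is not shown. Based on the format in this section, it would be similar to the following, where the controller ID and logical drive ID are taken from the output:
# Delete a RAID array.
domino:~# ./arcconf delete 1 logicaldrive 1 noprompt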
Controllers found: 1
All data in logical device 1 will be lost:
Deleting: logical device 1 ("LogicalDrv 1")
Command completed successfully.
10.10.2.10 Setting the Cache Read and Write Properties for a RAID Array
Function
Set the cache read and write properties for a RAID array.
Format
arcconf setcache controller_id logicaldrive ld_id mode
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Enable the read cache for a RAID array.
domino:~# ./arcconf setcache 1 logicaldrive 0 ron
Controllers found: 1
Command completed successfully.
Function
Set the hot spare drive status to global or dedicated.
Format
arcconf setstate controller_id device channel_id slot_id hsp [logicaldrive ld_id1
ld_id2]
Parameters
Parameter Description Value
Usage Guidelines
If logicaldrive ld_id1 ld_id2 is not contained, the drive is a global hot spare drive.
If logicaldrive ld_id1 ld_id2 is contained, the drive is a dedicated hot spare drive.
Example
# Set the drive in slot 3 to a global hot spare drive.
domino:~# ./arcconf setstate 1 device 0 3 hsp
Controllers found: 1
This global hot spare will only protect logicals whose member block size is 512 Bytes.
Command completed successfully.
Function
You can adjust the RAID strip size, capacity, and level simultaneously.
Format
arcconf modify controller_id from ld_id to [stripesize size] capacity raid_level
channel_id1 slot_id1 ... channel_idN slot_idN [noprompt]
Parameters
Parameter Description Value
Usage Guidelines
If the noprompt parameter is contained, the command is executed forcibly.
Examples
# Change the strip size to 1024 without adding drives.
domino:~# ./arcconf modify 1 from 0 to stripesize 1024 1525760 0 0 0 0 1
Controllers found: 1
Reconfiguration of a logical device is a long process. Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Reconfiguring logical device: LogicalDrv 0
Command completed successfully.
Function
Set consistency check parameters.
Format
arcconf consistencycheck controller_id period time
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the automatic consistency check period to 10 days.
domino:~# ./arcconf consistencycheck 1 period 10
Controllers found: 1
Setting the period will automatically turn on background consistency check.
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Command completed successfully.
Function
Enable or disable the read/write instruction optimization function.
Format
arcconf setncq controller_id state
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Enable the NCQ function.
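The command line for this example is not shown above. Based on the format in this section, enabling NCQ on controller 1 would look like the following; the enable keyword for the state parameter is an assumption, and disable would turn the function off:
domino:~# ./arcconf setncq 1 enable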
Function
After a drive is configured as a pass-through drive, the OS can directly manage the
drive.
Format
arcconf uninit controller_id channel_id slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the drive in slot 3 as a pass-through drive.
domino:~# ./arcconf uninit 1 0 3
Controllers found: 1
1 device(s) uninitialized.
Command completed successfully.
Function
Turn on and off the UID indicator of a specified drive.
Format
arcconf identify controller_id device physical_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Turn on the UID indicator of the drive in slot 3.
domino:~# ./arcconf identify 1 device 3
Controllers found: 1
The specified device(s) is/are blinking.
Press any key to stop the blinking.
Command completed successfully.
Function
Query detailed information about RAID controller cards, physical drives, and
virtual drives.
Format
arcconf getconfig controller_id <ad | ld ld_id | pd channel_id slot_id | mc | al>
Parameters
Parameter Description Value
ad Queries controller properties. –
Usage Guidelines
None
Example
# Query controller properties.
domino:~# ./arcconf getconfig 1 ad
Controllers found: 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
Controller Status : Optimal
Controller Mode : RAID (Expose RAW)
Channel description : SAS/SATA
Controller Model : PM8060-RAID
Controller Serial Number : 123A456B789
Controller World Wide Name : 5E0247F95EEF7000
Controller Alarm : Enabled
Physical Slot :6
.................
Function
Query the status of a drive.
Format
arcconf getconfig controller_id <pd>
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Query the status of a drive.
domino:~# ./arcconf getconfig 1 pd
Device #2----------------------------------------------------------------------
Device is a Hard drive
State : Global Hot-Spare
Block Size : Bytes
Supported : Yes
Programmed Max Speed : SATA 6.0 Gb/s
Transfer Speed : SATA 12.0 Gb/s
Reported Channel,Device(T:L) : 0,1(11:0)
Reported Location : Enclosure 0, Slot 3(Connector 0, Connector 1 )
Reported ESD(T:L) : 2,0(0:0)
Vendor : ATA
Model : INTEL SSDSC2BX01
Firmware : 0150
Serial Number : BTHC445301D81P6PGN
Reserved Size : 435480 KB
Used Size : 1525760 MB
Unused Size : 64 KB
Total Size : 1526185 MB
Write Cache : Enabled (write-back)
FRU : None
SMART : NO
SMART warnings : 0
Power States : Full RPM
NOTE
11 PM8068
11.1 Overview
A PM8068 RAID controller card provides two 12 Gbit/s SAS wide ports and
supports PCIe 3.0 ports.
In addition to better system performance, the controller card supports fault-
tolerant data storage in multiple drive partitions and read/write operations on
multiple drives at the same time. This makes accessing data on drives faster.
The PM8068 is connected to the mainboard through an Xcede connector and to
the drive backplane through two Mini-SAS HD cables. Figure 11-1 shows the
components of the card.
11.2 Functions
RAID 0: 1 to 128 member drives; 0 faulty drives allowed
RAID 1: 2 member drives; 1 faulty drive allowed
RAID 5: 3 to 32 member drives; 1 faulty drive allowed
NOTE
The failed drives cannot be adjacent. For details, see A.2 RAID Levels.
● Dedicated spare drive: When the system detects that a member drive of the
array is faulty, the hot spare drive is enabled.
● Auto replace drives: When the system detects that a SMART error has
occurred on a member drive of the array, the hot spare drive is enabled.
The striping technology evenly distributes I/O loads to multiple physical drives. It
divides continuous data into multiple blocks and saves them to different drives.
This allows multiple processes to access these data blocks concurrently without
causing any drive conflicts. Striping also optimizes concurrent processing
performance in sequential access to data.
The storage space of each member drive in a RAID array is striped based on the
strip size. The data written to the drives is also divided into blocks based on the
strip size.
The PM8068 supports multiple strip sizes, including 8 KB, 16 KB, 32 KB, 64 KB, 128
KB, 256 KB, 512 KB and 1 MB.
NOTE
If a drive in passthrough mode is faulty (not in the healthy status), the Fault indicator on
the drive will be lit and the iBMC will generate an alarm.
When the server meets the conditions for importing foreign configurations, you
can press Ctrl+H to import the foreign configurations.
All PM8068 configurations described in this document are performed in the
Configuration Utility, which can be accessed only after you restart the server.
To monitor controller card status and obtain configuration information when the
system is running, use the ARCCONF tool.
Procedure
Set the boot type to Legacy on the BIOS. For details, see A.1.3 Setting the
Legacy Mode.
1. During the server startup, press Ctrl+A when the message "Press <Ctrl><A>
for PMC RAID Configuration Utility" is displayed.
The screen shown in Figure 11-4 is displayed. For details about the
parameters, see Table 11-3.
Parameter Description
Disk Utilities Views the current drive list and performs operations
on a specific drive, such as turning on indicators and
formatting the drive.
Additional Information
Related Tasks
During the server restart, you can view the firmware version and PCIe slot
information of the current controller before accessing the RAID configuration
screen, as shown in Figure 11-5.
Related Concepts
None
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.3.2 Logging In to the Configuration Utility.
3. In the Select drives to create Array area, press the space or Insert key to
select drives to be added to the array.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 11-8.
Parameter Description
Stripe Size Specifies the stripe size. The options are 8 KB, 16 KB,
32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1024 KB.
5. Select RAID 0.
6. Configure the other RAID parameters. For details, see Table 11-4.
7. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the array is created, the system displays the maximum capacity of the
array.
8. Press any key to return to the Figure 11-6 screen.
(Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
9. After the array is created, select Manage Arrays and press Enter, as shown in
Figure 11-6.
The array list is displayed.
10. Select the array for which multiple logical drives are to be created, and press
Ctrl+C.
The array configuration screen is displayed.
11. Repeat 6 to 8.
You can create multiple logical drives based on the actual situation. For
example, after two logical drives are created, select Manage Arrays and press
Enter, as shown in Figure 11-6. On the displayed screen, select the array for
which multiple logical drives have been created, and press Enter. The
configuration results are displayed, as shown in Figure 11-10.
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.3.2 Logging In to the Configuration Utility.
Select a member drive.
1. On the screen shown in Figure 11-4, select Logical Device Configuration
and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
11-11.
3. In the Select drives to create Array area, press the space or Insert key to
select drives to be added to the array.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 11-13.
Stripe Size Specifies the stripe size. The options are 8 KB, 16 KB,
32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1024 KB.
5. Select RAID 1.
6. Configure the other RAID parameters. For details, see Table 11-5.
7. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the array is created, the system displays the maximum capacity of the
array.
8. Press any key to return to the Figure 11-11 screen.
(Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
9. After the array is created, select Manage Arrays and press Enter, as shown in
Figure 11-11.
The array list is displayed.
10. Select the array for which multiple logical drives are to be created, and press
Ctrl+C.
The array configuration screen is displayed.
11. Repeat 6 to 8.
You can create multiple logical drives based on the actual situation. For
example, after two logical drives are created, select Manage Arrays and press
Enter, as shown in Figure 11-11. On the displayed screen, select the array for
which multiple logical drives have been created, and press Enter. The
configuration results are displayed, as shown in Figure 11-15.
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.3.2 Logging In to the Configuration Utility.
Select a member drive.
1. On the screen shown in Figure 11-4, select Logical Device Configuration
and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
11-16.
3. In the Select drives to create Array area, press the space or Insert key to
select drives to be added to the array.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 11-18.
Stripe Size Specifies the stripe size. The value can be 8 KB, 16 KB, 32 KB,
64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB.
5. Select RAID 5.
6. Configure the other RAID parameters. For details, see Table 11-6.
7. Select Done and press Enter.
The system creates an array and performs the operations defined in Create
RAID via.
After the array is created, the system displays the maximum capacity of the
array.
8. Press any key to return to the Figure 11-16 screen.
(Optional) Create multiple logical drives.
NOTE
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
9. After the array is created, select Manage Arrays and press Enter, as shown in
Figure 11-16.
The array list is displayed.
10. Select the array for which multiple logical drives are to be created, and press
Ctrl+C.
The array configuration screen is displayed.
11. Repeat 6 to 8.
You can create multiple logical drives based on the actual situation. For
example, after two logical drives are created, select Manage Arrays and press
Enter, as shown in Figure 11-16. On the displayed screen, select the array for
which multiple logical drives have been created, and press Enter. The
configuration results are displayed, as shown in Figure 11-20.
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Workflow
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.3.2 Logging In to the Configuration Utility.
Select a member drive.
1. On the screen shown in Figure 11-4, select Logical Device Configuration
and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
11-21.
3. In the Select drives to create Array area, press the space or Insert key to
select drives to be added to the array.
The selected drives are displayed in the Selected Drives area, as shown in
Figure 11-23.
Stripe Size Specifies the stripe size. The value can be 8 KB, 16 KB, 32 KB,
64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB.
NOTE
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
9. After the array is created, select Manage Arrays and press Enter, as shown in
Figure 11-21.
The array list is displayed.
10. Select the array for which multiple logical drives are to be created, and press
Ctrl+C.
The array configuration screen is displayed.
11. Repeat 6 to 8.
You can create multiple logical drives based on the actual situation. For
example, after two logical drives are created, select Manage Arrays and press
Enter, as shown in Figure 11-21. On the displayed screen, select the array for
which multiple logical drives have been created, and press Enter. The
configuration results are displayed, as shown in Figure 11-25.
Scenarios
NOTICE
The priority of a single drive as the boot device is higher than that of the logical
drive in an array.
Prerequisites
Conditions
● You have logged in to the Configuration Utility. For details, see 11.3.2
Logging In to the Configuration Utility.
● You have created an array.
Data
Data preparation is not required for this operation.
Procedure
Setting Boot Devices
1. On the configuration tool main screen, select Logical Device Configuration
and press Enter.
The Main Menu screen is displayed, as shown in Figure 11-26.
NOTE
When an array is in the rebuild or initialization state, its boot priority cannot be changed.
Procedure
Step 1 Set the EFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
Step 2 Log in to the PM8068 management screen.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Step 3 Select Huawei BC11ESMS and press Enter.
The PM8068 configuration screen is displayed, as shown in Figure 11-27. Table
11-8 describes the parameters displayed on the screen.
----End
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.4.1 Logging In to the Management Screen.
Open the array configuration screen.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-28.
Stripe Size Specifies the stripe size. The value can be 8 KiB, 16 KiB,
32 KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, or 1024 KiB.
Unit Size Specifies the logical drive capacity unit. The value can
be TiB, GiB, or MiB.
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
10. After the array is created, select Manage Array LD and press Enter in Figure
11-28.
The array list is displayed.
11. Select the array for which multiple logical drives are to be created, and press
Enter. The array operation list is displayed. See Figure 11-32.
12. Select Create LD and press Enter. A UI shown in Figure 11-33 is displayed.
13. Repeat 6 to 9.
You can create multiple logical drives as required. For example, after three
logical drives are created, select List Logical Drives and press Enter in Figure
11-32. The configuration result is displayed. See Figure 11-34.
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.4.1 Logging In to the Management Screen.
Open the array configuration screen.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-35.
Stripe Size Specifies the stripe size. The value can be 8 KiB, 16 KiB,
32 KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, or 1024 KiB.
Unit Size Specifies the logical drive capacity unit. The value can
be TiB, GiB, or MiB.
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
10. After the array is created, select Manage Array LD and press Enter in Figure
11-35.
The array list is displayed.
11. Select the array for which multiple logical drives are to be created, and press
Enter. The array operation list is displayed. See Figure 11-39.
12. Select Create LD and press Enter. A UI shown in Figure 11-40 is displayed.
13. Repeat 6 to 9.
You can create multiple logical drives as required. For example, after three
logical drives are created, select List Logical Drives and press Enter in Figure
11-39. The configuration result is displayed. See Figure 11-41.
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.4.1 Logging In to the Management Screen.
Open the array configuration screen.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-42.
Stripe Size Specifies the stripe size. The value can be 8 KiB, 16 KiB,
32 KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, or 1024 KiB.
Unit Size Specifies the logical drive capacity unit. The value can
be TiB, GiB, or MiB.
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
10. After the array is created, select Manage Array LD and press Enter in Figure
11-42.
The array list is displayed.
11. Select the array for which multiple logical drives are to be created, and press
Enter. The array operation list is displayed. See Figure 11-46.
12. Select Create LD and press Enter. A UI shown in Figure 11-47 is displayed.
13. Repeat 6 to 9.
You can create multiple logical drives as required. For example, after three
logical drives are created, select List Logical Drives and press Enter in Figure
11-46. The configuration result is displayed. See Figure 11-48.
The PM8068 supports SAS/SATA HDDs and SSDs. Drives in one RAID array must be of the
same type, but can have different capacities or be provided by different vendors.
Procedure
Back up data on drives and access the Configuration Utility main screen. For
details, see 11.4.1 Logging In to the Management Screen.
Open the array configuration screen.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-49.
Stripe Size Specifies the stripe size. The value can be 8 KiB, 16 KiB,
32 KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, or 1024 KiB.
Unit Size Specifies the logical drive capacity unit. The value can
be TiB, GiB, or MiB.
● This operation can be performed only when the array size is smaller than the
maximum value.
● A maximum of 64 logical drives can be created for each array.
10. After the array is created, select Manage Array LD and press Enter in Figure
11-49.
The array list is displayed.
11. Select the array for which multiple logical drives are to be created, and press
Enter. The array operation list is displayed. See Figure 11-53.
12. Select Create LD and press Enter. A UI shown in Figure 11-54 is displayed.
13. Repeat 6 to 9.
You can create multiple logical drives as required. For example, after three
logical drives are created, select List Logical Drives and press Enter in Figure
11-53. The configuration result is displayed. See Figure 11-55.
Prerequisites
Conditions
● The server has idle drives.
● You have logged in to the Configuration Utility. For details, see 11.3.2
Logging In to the Configuration Utility.
Data
Data preparation is not required for this operation.
Procedure
Step 1 On the screen shown in Figure 11-4, select Logical Device Configuration and
press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure 11-56.
Step 4 In the Select Hotspare drives area, select the target drives and press the space
bar or Insert.
The selected drives are displayed in the Selected Drives area.
Step 5 Press Enter.
A confirmation dialog box is displayed.
Step 6 Enter Y.
The Select Spare Type screen is displayed, as shown in Figure 11-59.
Step 7 Select the type of the hot spare drive to be configured and press Enter.
A confirmation dialog box is displayed.
Step 8 Select DONE and press Enter.
Return to Figure 11-57.
----End
Procedure
Step 1 Log in to the Configuration Utility main screen. For details, see 11.3.2 Logging In
to the Configuration Utility.
Step 2 On the screen shown in Figure 11-4, select Logical Device Configuration and
press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure 11-60.
Step 5 In the Select Hotspare drives area, select the target drives and press the space
bar or Insert.
The selected drives are displayed in the Selected Drives area.
Step 6 Press Enter.
A confirmation dialog box is displayed.
Step 7 Type Y and press Enter.
Return to Figure 11-61.
----End
NOTICE
The priority of a single drive as the boot device is higher than that of the logical
drive in an array.
Prerequisites
Conditions
● The server has idle drives.
● You have logged in to the Configuration Utility. For details, see 11.3.2
Logging In to the Configuration Utility.
Data
Procedure
Step 1 On the screen shown in Figure 11-4, select Logical Device Configuration and
press Enter.
Step 3 In the Select drive to set as boot device area, select the target drives and press
the space bar or Insert.
The selected drives are displayed in the Selected Drives area.
Step 4 Press Enter.
A confirmation dialog box is displayed.
Step 5 Type Y and press Enter.
Return to Figure 11-63.
----End
Prerequisites
Conditions
● Member drives of the array have available space.
● You have logged in to the Configuration Utility. For details, see 11.3.2
Logging In to the Configuration Utility.
Data
Data preparation is not required for this operation.
Procedure
Access the expansion screen.
1. On the screen shown in Figure 11-4, select Logical Device Configuration
and press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure
11-65.
Parameter Description
Stripe Size Specifies the stripe size. The value can be 8 KB, 16 KB, 32 KB,
64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB.
Prerequisites
Conditions
● You have created an array.
● You have logged in to the Configuration Utility. For details, see 11.3.2
Logging In to the Configuration Utility.
Data
The data in the array to be deleted has been backed up.
Procedure
Step 1 On the screen shown in Figure 11-4, select Logical Device Configuration and
press Enter.
The Logical Device Configuration screen is displayed, as shown in Figure 11-68.
Step 6 Repeat the steps to delete all the logical drives in the array.
----End
Prerequisites
Conditions
● The server has idle drives.
● You have logged in to the Configuration Utility. For details, see 11.4.1
Logging In to the Management Screen.
Data
Data preparation is not required for this operation.
Procedure
Open the page for managing hot spare drives.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-71.
Prerequisites
Conditions
● Hot spare drives have been configured.
● You have logged in to the Configuration Utility. For details, see 11.4.1
Logging In to the Management Screen.
Data
Data preparation is not required for this operation.
Procedure
Open the page for managing hot spare drives.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-74.
Procedure
Open the array management screen.
1. On the displayed screen, select Array Configuration and press Enter.
The Array Configuration screen is displayed. See Figure 11-77.
Delete an array.
4. Select Delete Array in Figure 11-78 and press Enter.
Exit the Configuration Utility.
5. Press Esc to go back to the Boot Manager screen.
11.7 Troubleshooting
This section describes solutions to drive faults and RAID controller card faults. For
other situations, see the Huawei Server Maintenance Guide.
Solution
NOTE
----End
Solution
Step 1 Replace the RAID controller card. Then, check whether the alarm is cleared.
For details about how to replace a RAID controller card, see the user guide
delivered with your server.
● If yes, go to Step 2.
● If no, go to Step 3.
Step 2 Restart the server. After the foreign configuration is automatically imported, check
whether the RAID configuration is the same as that before the replacement.
● If yes, no further action is required.
● If no, go to Step 3.
Step 3 Contact technical support.
----End
Procedure
Set the boot type to Legacy on the BIOS. For details, see A.1.3 Setting the
Legacy Mode.
Log in to the PM8068 Configuration Utility.
1. During the server startup, press Ctrl+A when the message "Press <Ctrl><A>
for PMC RAID Configuration Utility" is displayed.
The screen shown in Figure 11-79 is displayed. For details about the
parameters, see Table 11-14.
Disk Utilities Allows you to view the current drive list and perform operations
on a specific drive, such as turning on indicators and
formatting the drive.
Additional Information
Related Tasks
During the server restart, you can view the firmware version and PCIe slot
information of the current controller before accessing the RAID configuration
screen, as shown in Figure 11-80.
Related Concepts
None
Screen Introduction
On the Configuration Utility screen shown in Figure 11-4, select Logical Device
Configuration to display the array configuration main menu, as shown in Figure
11-81. Table 11-15 describes the parameters.
Screen Introduction
Select Manage Arrays in Figure 11-81 to open the array list, as shown in Figure
11-82. Table 11-16 describes the operations that can be performed.
Step 2 Select the logical drive to be configured as the boot device and press Ctrl+B.
The logical drive is moved to the first row of the logical drive list.
----End
----End
Step 2 In the Select Hotspare drives area, select the target drives and press the space
bar or Insert.
The selected drives are displayed in the Selected Drives area.
Step 3 Press Enter.
A confirmation dialog box is displayed.
Step 4 Enter Y.
The Select Spare Type screen is displayed, as shown in Figure 11-86.
● Dedicated spare drive: When the system detects that a member drive of the
array is faulty, the hot spare drive is enabled.
● Auto replace drives: When the system detects that a SMART error occurs on a
member drive of the array, the hot spare drive is enabled.
Step 5 Select the type of the hot spare drive to be configured and press Enter.
A confirmation dialog box is displayed.
Step 6 Select DONE and press Enter.
----End
Deleting an Array
Deleting an array deletes all the logical drives in the array.
----End
Screen Introduction
Select Create Array in Figure 11-81 to open the drive list, as shown in Figure
11-87.
The array property screen is displayed, as shown in Figure 11-88. Table 11-18
describes the parameters.
Parameter Description
Stripe Size Specifies the stripe size. The value can be 8 KB, 16 KB, 32 KB, 64 KB,
128 KB, 256 KB, 512 KB, or 1024 KB.
----End
If both the boot drive and boot logical drive exist, the boot drive has a higher
priority than the boot logical drive.
Screen Introduction
Select Select Boot Device in Figure 11-81. The screen for selecting a drive is
displayed, as shown in Figure 11-89.
----End
Screen Introduction
Select Controller Settings in Figure 11-4. The controller operation menu is
displayed, as shown in Figure 11-90. Table 11-19 describes the parameters.
Set Controller Port Mode Shows and sets the port mode of a controller.
Screen Introduction
Select Controller Configuration in Figure 11-90. The Controller Properties
screen is displayed, as shown in Figure 11-91. Table 11-20 describes the
parameters.
Parameter Description
PD Write Cache State Indicates the drive cache write policy. The value options
are as follows:
● Enable All: enables the cache write function of all drives.
● Disable All: disables the cache write function of all drives.
----End
Screen Introduction
Select Set Controller Port Mode in Figure 11-90. The controller port mode screen
is displayed, as shown in Figure 11-92.
The value option list is displayed. The port modes are as follows:
● RAID: The drives managed by the RAID controller card can be used only after
an array is created.
● MIXD: Both the RAID mode and HBA mode are supported.
● HBA: The drives managed by the RAID controller card cannot be used to
create an array. They can only be used directly.
----End
Screen Introduction
Select Advanced Controller Configuration in Figure 11-90. The screen for setting
advanced controller properties is displayed, as shown in Figure 11-93. Table
11-21 describes the parameters.
Parameter Description
----End
Screen Introduction
Select Clear Controller Configuration in Figure 11-90. The Clear Controller
Configuration screen is displayed, as shown in Figure 11-94. Table 11-22
describes the parameters.
Parameter Description
Delete RIS on All Physical drives Clears invalid RAID information on all drives.
Return to Figure 11-94. A message is displayed, indicating that the RAID information is deleted.
----End
Return to Figure 11-94. A message is displayed, indicating that the RAID information is deleted.
----End
Screen Introduction
On the Configuration Utility main screen shown in Figure 11-4, select Disk
Utilities. The drive list is displayed, as shown in Figure 11-95.
Select a drive and press Enter. Table 11-23 describes the parameters displayed on
the screen.
Parameter Description
Secure Erase Formats the drive. This operation deletes all data on the drive
and takes a long time.
Turning On Indicators
Step 1 Select a drive and press Enter.
You can press any key to turn off the drive indicator.
----End
Formatting a Drive
Step 1 Select a drive and press Enter.
----End
11.8.5 Exit
Click Exit to exit the configuration screen.
● Click Yes to exit the configuration screen and restart the server.
● Click No to return to the configuration screen.
Procedure
Step 1 Set the EFI Boot Type mode. For details, see A.1.4 Setting the EFI/UEFI Mode.
The position of the RAID controller card management interface in EFI/UEFI mode
varies with the BIOS platform.
● Brickland platform: The interface is integrated into the BIOS Setup. For details,
see A.1.1 Logging In to the RAID Controller Card Management Screen in
EFI/UEFI Mode (Brickland Platform).
● Grantley platform: The interface is integrated into the Device Manager. For
details, see A.1.2 Logging In to the RAID Controller Card Management
Screen in EFI/UEFI Mode (Grantley Platform).
Parameter Description
----End
Screen Introduction
Select Controller Information in Figure 11-27. The Controller Information
screen is displayed, as shown in Figure 11-98. Table 11-25 describes the
parameters.
Parameter Description
Screen Introduction
Select Controller Configuration in Figure 11-27. The Controller Configuration
screen is displayed, as shown in Figure 11-99. Table 11-26 describes the
parameters.
Screen Introduction
Select Controller Properties in Figure 11-99. The Controller Properties screen is
displayed, as shown in Figure 11-100.
● RAID: The drives managed by the RAID controller card can be used only after
an array is created.
● MIXD: Both the RAID mode and HBA mode are supported.
● HBA: The drives managed by the RAID controller card cannot be used to
create an array. They can only be used directly.
----End
Screen Introduction
Select Clear Controller Configuration in Figure 11-99. The Clear Controller
Configuration screen is displayed, as shown in Figure 11-101. Table 11-27
describes the parameters.
Parameter Description
----End
----End
Screen Introduction
Select Array Configuration in Figure 11-27. The Array Configuration screen is
displayed, as shown in Figure 11-102. Table 11-28 describes the parameters.
Configuring an Array
Step 1 Select Select Drives and Create Array in Figure 11-102. The drive list is
displayed, as shown in Figure 11-103.
Parameter Description
Stripe Size Specifies the stripe size. The value can be 8 KiB, 16 KiB, 32
KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, or 1024 KiB.
Unit Size Specifies the logical drive capacity unit. The value can be
TiB, GiB, or MiB.
----End
Screen Introduction
Select Manage Array LD in Figure 11-102 to open the array list.
Select the array to be operated and press Enter. The screen shown in Figure
11-106 is displayed. Table 11-30 describes the operations on the screen.
Parameter Description
Deleting an Array
Step 1 Select Delete Array and press Enter.
----End
Screen Introduction
On the screen shown in Figure 11-106, select List Logical Drives to open the
logical drive list.
Select the array to be operated and press Enter. The screen shown in Figure
11-107 is displayed. Table 11-31 describes the operations on the screen.
Parameter Description
The logical drive properties screen is displayed, as shown in Figure 11-108. Table
11-32 describes the parameters.
Parameter Description
----End
----End
Screen Introduction
Select Manage Spare in Figure 11-106 and press Enter.
The hot spare drive list is displayed, as shown in Figure 11-109. Table 11-33
describes the operations.
Parameter Description
Dedicated Spare For Array Configures a dedicated hot spare drive. When the
system detects that a member drive in an array is
faulty, the hot spare drive is enabled.
Auto Replace For Array Configures an automatically adapted hot spare drive.
When the system detects that a SMART error occurs
on a member drive of the array, the hot spare drive is
enabled.
A message is displayed indicating that the hot spare drive is successfully added.
Step 3 Repeat the preceding operations to add multiple hot spare drives.
----End
Step 2 Select the hot spare drive to be deleted and press Enter.
A message is displayed indicating that the hot spare drive is deleted successfully.
----End
Screen Introduction
Select Disk Utilities in Figure 11-27 to open the drive list.
Select the drive to be operated and press Enter. The drive operation screen is
displayed, as shown in Figure 11-110. The parameters are described in Table
11-34.
Parameter Description
Turning On Indicators
Step 1 Select Blink LED and press Enter.
The following options are displayed:
● Continue: turns on the indicator so that the drive can be located.
● Stop: turns off the indicator.
Step 2 Select Continue and press Enter.
----End
Formatting a Drive
Step 1 Select Erase Disk.
A confirmation screen is displayed.
----End
11.9.6 Administration
On the Administration screen, you can upgrade the RAID controller card firmware
and view the firmware version.
Screen Introduction
Select Administration in Figure 11-27. The Administration screen is displayed.
Press Enter. The screen shown in Figure 11-111 is displayed.
----End
11.9.7 Exit
Press Esc consecutively on any screen to exit Device Manager. The screen shown
in Figure 11-112 is displayed. Select options based on the site requirements.
Downloading ARCCONF
Step 1 Go to the controller card page at the Microsemi website.
Step 2 Click Microsemi Adaptec ARCCONF Command Line Utility vxxx on the
download list.
Step 3 Decompress the downloaded file to obtain the tool packages for different
operating systems (OSs).
----End
Installing ARCCONF
The ARCCONF installation method varies depending on the OS type. The following
uses Windows and Linux as examples to describe the ARCCONF installation
procedure. For the installation procedures for other OSs, see the ARCCONF user
guide, which can be found at Microsemi's website.
Function
Set the number of drives that spin up simultaneously upon power-on.
Format
arcconf setpower controller_id spinup internal external
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set drive spinup parameters.
domino:~# ./arcconf setpower 1 spinup 4 0
Controllers found: 1
Command completed successfully.
Function
Set power saving parameters for the virtual drive.
Format
arcconf setpower controller_id ld ld_id slowdown timer1 poweroff timer2
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set power saving parameters.
domino:~# ./arcconf setpower 1 ld 0 slowdown 3 poweroff 5
Controllers found: 1
Command completed successfully.
Function
Start or stop initialization tasks for physical and virtual drives.
Format
arcconf task start controller_id logicaldrive ld_id option [noprompt]
arcconf task start controller_id device channel_id slot_id option [noprompt]
arcconf task stop controller_id logicaldrive ld_id
Parameters
Parameter Description Value
Usage Guidelines
If the noprompt parameter is contained, the command is executed forcibly.
Example
# Initialize the drive in slot 3.
domino:~# ./arcconf task start 1 device 0 3 clear
Controllers found: 1
Clear of a Hard drive is a long process.
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Clearing Channel 0, Device 3.
Command completed successfully.
Function
Set the task priority and make the priority of the current task take effect.
Format
arcconf setpriority controller_id priority [current]
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the priority of the background task to high and make the setting take effect.
domino:~# ./arcconf setpriority 1 high current
Controllers found: 1
Command completed successfully.
Function
Query and set the performance mode of a RAID controller card.
Format
arcconf setperform controller_id mode
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the performance mode of the RAID controller card to Dynamic.
domino:~# ./arcconf setperform 1 1
Controllers found: 1
Command completed successfully.
Function
Set the working mode for a RAID controller card.
Format
arcconf setcontrollermode controller_id mode
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the working mode to RAID: expose RAW.
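The example command line is not included in the source. The invocation below is a minimal sketch that assumes controller ID 1 and that mode value 0 maps to RAID: Expose RAW on this firmware; run arcconf setcontrollermode without arguments to list the exact mode values supported by your tool version.
domino:~# ./arcconf setcontrollermode 1 0
Controllers found: 1
Command completed successfully.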
Function
Set a SATA password to prevent SATA drive data from being securely erased.
Format
arcconf atapassword controller_id set new_password channel_id slot_id
arcconf atapassword controller_id clear current_password channel_id slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the password of the SATA drive in slot 0 to huawei.
domino:~# ./arcconf atapassword 1 set huawei 0 0
Controllers found: 1
Setting the ATA security password on the SATA harddrive
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Command completed successfully.
Function
Create and delete a RAID array.
Format
arcconf create controller_id logicaldrive stripesize stripesize name ld_name
priority ld_priority method mode capacity raid_level channel_id1 slot_id1
channel_id2 slot_id2...channel_idN slot_idN [noprompt]
arcconf delete controller_id logicaldrive ld_id noprompt
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Create a RAID 5 array.
domino:~# ./arcconf create 1 logicaldrive stripesize 64 name test01 priority high method quick 102400 5 0 0 0 1 0 2
Controllers found: 1
For arrays with all SSD drives, caching is not recommended. Disable all cache settings?(Y/N)
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Do you want to add a logical device to the configuration?
Press y, then ENTER to continue or press ENTER to abort: y
Creating logical device: test01
Command completed successfully.
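# Delete a RAID array. The command line below is a sketch (it is not shown in the source); it assumes the logical drive to be deleted has ID 1, matching the output that follows.
domino:~# ./arcconf delete 1 logicaldrive 1 noprompt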
Controllers found: 1
All data in logical device 1 will be lost:
Deleting: logical device 1 ("LogicalDrv 1")
Command completed successfully.
Function
Set the cache read and write policies for a RAID array.
Format
arcconf setcache controller_id logicaldrive ld_id mode
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Enable the read cache for a RAID array.
domino:~# ./arcconf setcache 1 logicaldrive 0 ron
Controllers found: 1
Command completed successfully.
Function
Set the hot spare drive status to global or dedicated.
Format
arcconf setstate controller_id device channel_id slot_id hsp [logicaldrive ld_id1
ld_id2]
Parameters
Parameter Description Value
Usage Guidelines
If logicaldrive ld_id1 ld_id2 is not contained, the drive is a global hot spare drive.
Example
# Set the drive in slot 3 to a global hot spare drive.
domino:~# ./arcconf setstate 1 device 0 3 hsp
Controllers found: 1
This global hot spare will only protect logicals whose member block size is 512 Bytes.
Command completed successfully.
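# Set the drive in slot 3 as a dedicated hot spare drive. This is an illustrative sketch that assumes the hot spare should protect logical drive 0; the logicaldrive parameter is appended as described in the format above.
domino:~# ./arcconf setstate 1 device 0 3 hsp logicaldrive 0
Controllers found: 1
Command completed successfully.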
Function
You can adjust the RAID stripe size, capacity, and level simultaneously.
Format
arcconf modify controller_id from ld_id to [stripesize size] capacity raid_level
channel_id1 slot_id1 ... channel_idN slot_idN [noprompt]
Parameters
Parameter Description Value
Usage Guidelines
If the noprompt parameter is contained, the command is executed forcibly.
Example
# Change the stripe size to 1024 KB without adding drives.
domino:~# ./arcconf modify 1 from 0 to stripesize 1024 1525760 0 0 0 0 1
Controllers found: 1
Reconfiguration of a logical device is a long process. Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Reconfiguring logical device: LogicalDrv 0
Command completed successfully.
Function
Set consistency check parameters.
Format
arcconf consistencycheck controller_id period time
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the automatic consistency check period to 10 days.
domino:~# ./arcconf consistencycheck 1 period 10
Controllers found: 1
Setting the period will automatically turn on background consistency check.
Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y
Command completed successfully.
Function
Enable or disable the read/write instruction optimization function.
Format
arcconf setncq controller_id state
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Enable the NCQ function.
domino:~# ./arcconf setncq 1 enable
Controllers found: 1
WARNING : NCQ setting changes will be reflected only after next power cycle.
Command completed successfully.
Function
After a drive is configured as a pass-through drive, the OS can directly manage the
drive.
Format
arcconf uninit controller_id channel_id slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Set the drive in slot 3 as a pass-through drive.
domino:~# ./arcconf uninit 1 0 3
Controllers found: 1
1 device(s) uninitialized.
Function
Turn on and off the UID indicator of a specified drive.
Format
arcconf identify controller_id device channel_id slot_id
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Turn on the UID indicator of the drive in slot 3.
domino:~# ./arcconf identify 1 device 0 3
Controllers found: 1
The specified device(s) is/are blinking.
Press any key to stop the blinking.
Command completed successfully.
Function
Query detailed information about RAID controller cards, physical drives, and
virtual drives.
Format
arcconf getconfig controller_id <ad | ld ld_id | pd channel_id slot_id | mc | al>
Parameters
Parameter Description Value
ad Queries controller properties. –
Usage Guidelines
None
Example
# Query controller properties.
domino:~# ./arcconf getconfig 1 ad
Controllers found: 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
Controller Status : Optimal
Controller Mode : RAID (Expose RAW)
Channel description : SAS/SATA
Controller Model : PM8068-RAID
Controller Serial Number : 123A456B789
Controller World Wide Name : 5E0247F95EEF7000
Controller Alarm : Enabled
Physical Slot :6
.................
Function
Query the status of a drive.
Format
arcconf getconfig controller_id <pd>
Parameters
Parameter Description Value
Usage Guidelines
None
Example
# Query the drive status.
domino:~# ./arcconf getconfig 1 pd
----------------------------------------------------------------------
Device #2
Device is a Hard drive
State : Global Hot-Spare
Block Size : Bytes
Supported : Yes
Programmed Max Speed : SATA 6.0 Gb/s
Transfer Speed : SATA 12.0 Gb/s
Reported Channel,Device(T:L) : 0,1(11:0)
Reported Location : Enclosure 0, Slot 3(Connector 0, Connector 1 )
Reported ESD(T:L) : 2,0(0:0)
Vendor : ATA
Model : INTEL SSDSC2BX01
Firmware : 0150
Serial Number : BTHC445301D81P6PGN
Reserved Size : 435480 KB
Used Size : 1525760 MB
Unused Size : 64 KB
Total Size : 1526185 MB
Write Cache : Enabled (write-back)
FRU : None
SMART : NO
S.M.A.R.T. warnings : 0
Power States : Full RPM
NOTE
In the command output, 11:0 in the brackets at the end of the row Reported
Channel,Device(T:L): 0,1(11:0) indicates that the channel ID is 11 and the device ID is 0.
A Appendix
Scenarios
The method of logging in to the RAID controller card management screen in EFI
mode varies with the BIOS platform.
This section describes how to log in to the LSI SAS2208 RAID controller card
management screen in EFI mode on the Brickland platform.
Prerequisites
Conditions
You have logged in to the real-time server desktop through the remote virtual
console of the management software (iBMC or iMana 200).
Procedure
Step 1 Press Delete when prompted to go to the BIOS configuration screen.
NOTE
----End
Prerequisites
Conditions
You have logged in to the real-time server desktop through the remote virtual
console of the management software (iBMC or iMana 200).
Data
None
Procedure
Step 1 During the restart process, press F11 when information shown in Figure A-2 is
displayed.
A password text box is displayed.
NOTE
----End
Scenarios
Set the server boot mode to the Legacy mode when required.
Prerequisites
Conditions
You have logged in to the real-time server desktop through the remote virtual
console of the management software (iBMC or iMana 200).
Data
None
Procedure
1. Restart the server. Press Delete when the screen shown in Figure A-6 is
displayed.
The BIOS configuration screen is displayed.
2. On the Boot screen, set the RAID controller card boot mode to the Legacy
mode.
– On a Grantley platform, set Boot Type to Legacy Boot Type, as shown in
Figure A-7.
Prerequisites
Conditions
You have logged in to the real-time server desktop through the remote virtual
console of the management software (iBMC or iMana 200).
Data
None
Procedure
1. Restart the server. Press Delete when the screen shown in Figure A-10 is
displayed.
2. On the Boot screen, set the RAID controller card boot mode to the EFI/UEFI
mode.
– On a Grantley platform, set Boot Type to UEFI Boot Type, as shown in
Figure A-11.
– .img: Used to install the RAID controller card driver during the OS
installation process.
– .iso: Used to install the RAID controller card driver after the OS is
installed. Two types of .iso files are available:
NOTE
The RAID controller card supports secure boot only in EFI or UEFI mode and uses the
security authentication mechanism provided by the BIOS.
A.2.1 RAID 0
RAID 0, also referred to as drive striping, provides the highest storage performance
in all RAID levels. In RAID 0 array, data is stored on multiple drives, which allows
data requests to be concurrently processed on the drives. Each drive processes
data requests that involve data stored on itself. The concurrent data processing
can make the best use of the bus bandwidth, improving the overall drive read/
write performance. However, RAID 0 provides no redundancy or fault tolerance.
The failure of one drive will cause the entire array to fail. RAID 0 applies only to
scenarios that require high I/O rate but low data security.
Working Principle
Figure A-16 shows how data is distributed into three drives in a RAID 0 array for
concurrent processing.
RAID 0 allows the original sequential data to be processed on three physical drives
at the same time.
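As a simple illustration (the numbers are assumed for this example and are not taken from the figure), with three member drives and a 64 KB stripe size, a 192 KB sequential write is split into three 64 KB strips that are written in parallel:
Strip 1 (0 KB to 64 KB) -> Drive 0
Strip 2 (64 KB to 128 KB) -> Drive 1
Strip 3 (128 KB to 192 KB) -> Drive 2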
A.2.2 RAID 1
RAID 1 is also referred to as mirroring. In a RAID 1 array, each operating drive has
a mirrored drive. Data is written to and read from both operating and mirrored
drives simultaneously. After a failed drive is replaced, data can be rebuilt from the
mirrored drive. RAID 1 provides high reliability, but only half of the total capacity
is available. It applies to scenarios where high fault tolerance is required, such as
finance.
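For example (capacities assumed for illustration), mirroring two 1.2 TB drives yields a single 1.2 TB logical drive; the remaining 1.2 TB holds the mirror copy.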
Working Principle
Figure A-17 shows how data is processed on a RAID 1 array consisting of two
physical hard drives.
● When writing data into drive 0, the system also automatically copies the data
to drive 1.
● When reading data, the system obtains data from Drive 0 and Drive 1
simultaneously.
A.2.3 RAID 5
RAID 5 is a storage solution that balances storage performance, data security, and
storage costs. To ensure data reliability, RAID 5 uses the distributed redundancy
check mode and distributes parity data to member drives. If a drive in a RAID 5
array is faulty, data on the failed drive can be rebuilt from the data on other
member drives in the array. RAID 5 can be used to process a large or small
amount of data. It features high speed, large capacity, and fault tolerance
distribution.
Working Principle
Figure A-18 shows how a RAID 5 array works. As shown in the figure, PA is the
parity information of A0, A1, and A2; PB is the parity information of B0, B1, and
B2; and so on.
RAID 5 does not back up the data stored. Instead, data and its parity information
are stored on different member drives in the array. If data on a member drive is
damaged, the data can be restored from the remaining data and its parity
information. If data on a RAID 5 member drive is damaged, RAID 5 can use the
remaining data and the corresponding parity information to restore the damaged
data.
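As an illustration of the parity mechanism (this worked example is not part of the source figure), the parity block of a stripe is the bitwise XOR of its data blocks, so any single lost block can be recomputed from the remaining blocks:
PA = A0 XOR A1 XOR A2
If A1 is lost: A1 = A0 XOR A2 XOR PA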
RAID 5 can be considered as a compromise between RAID 0 and RAID 1.
● RAID 5 provides lower data security level and lower storage costs than RAID
1.
● RAID 5 provides a slightly lower data read/write speed than RAID 0, but its
read performance is higher than the write performance of a single drive.
A.2.4 RAID 6
Compared with RAID 5, RAID 6 adds a second independent parity block. In RAID 6,
two independent parity systems use different algorithms to ensure high reliability.
Data processing is not affected even if two drives fail at the same time. However,
RAID 6 requires larger drive space for storing parity information and has worse
"write hole" effect compared with RAID 5. Therefore, RAID 6 provides lower write
performance than RAID 5.
Working Principle
Figure A-19 shows how data is stored on a RAID 6 array consisting of five drives.
PA and QA are the first and second parity blocks for data blocks A0, A1, and A2;
PB and QB are the first and second parity blocks for data blocks B0, B1, and B2;
and so on.
Data blocks and parity blocks are distributed to each RAID 6 member drive. If one
or two member drives fail, the controller card restores or regenerates the lost data
by using data on other normal member drives.
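For reference, one common RAID 6 construction (a general description, not a statement about this controller's specific algorithm) computes the two parity blocks of a stripe as:
PA = A0 XOR A1 XOR A2
QA = g^0*A0 XOR g^1*A1 XOR g^2*A2, where the multiplications are performed in the Galois field GF(2^8)
Because P and Q are computed from independent equations, any two lost blocks in the stripe can be solved for from the surviving blocks.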
A.2.5 RAID 10
RAID 10 is a combination of RAID 1 and RAID 0. It allows drives to be mirrored
(RAID 1) and then striped (RAID 0). RAID 10 provides data security equal to that
of RAID 1 and storage performance similar to that of RAID 0.
Working Principle
In Figure A-20, drives 0 and 1 form span 0, drives 2 and 3 form span 1, and drives
in the same span are mirrors of each other.
If I/O requests are sent to drives in RAID 10, the sequential data requests are
distributed to the two spans for processing (RAID 0 mode). At the same time, in
RAID 1 mode, when data is written to drive 0, a copy is created on drive 1; when
data is written to drive 2, a copy is created on drive 3.
A.2.6 RAID 1E
RAID 1E is an enhanced version of RAID 1 and uses a similar working principle.
Data strips and backups of a RAID 1E array spread across all of its member drives.
Like RAID 1, RAID 1E data is mirrored, so the logical drive capacity is half the total
capacity of all member drives, but data is redundant and the performance is high.
However, RAID 1E allows more physical drives and consists of at least three drives.
Working Principle
Figure A-21 shows how data is processed on a RAID 1E array consisting of three
physical hard drives. Strip data is distributed evenly to the three drives and all data
has backup data in another drive. If a single drive fails, data is not lost.
A.2.7 RAID 50
RAID 50 is a combination of RAID 5 and RAID 0. RAID 0 allows data to be striped
and written to multiple drives simultaneously, and RAID 5 ensures data security by
using parity bits evenly distributed on drives.
Working Principle
Figure A-22 shows how a RAID 50 array works. As shown in the figure, PA is the
parity information of A0, A1, and A2; PB is the parity information of B0, B1, and
B2; and so on.
As a combination of RAID 5 and RAID 0, a RAID 50 array consists of multiple RAID
5 spans, where data is stored and accessed in RAID 0 mode. With the redundancy
function provided by RAID 5, RAID 50 ensures continued operation and rapid data
restoration if a member drive in a span is faulty. In addition, the replacement of
member drives does not affect services. RAID 50 tolerates one failed drive in each
of its spans simultaneously, which a single RAID 5 array cannot achieve.
What is more, as data is distributed on multiple spans, RAID 50 provides high
read/write performance.
A.2.8 RAID 60
RAID 60 is a combination of RAID 6 and RAID 0. RAID 0 allows data to be striped
and written to multiple drives simultaneously. RAID 6 ensures data security by
using two parity blocks distributed evenly on drives.
Working Principle
In Figure A-23, PA and QA are respectively the first and second parity information
of A0, A1, and A2; PB and QB are respectively the first and second parity
information of B0, B1, and B2; and so on.
● RAID 10
RAID 10 uses multiple RAID 1 arrays to provide comprehensive data
redundancy capabilities. RAID 10 is applicable to all scenarios that require
100% redundancy based on mirror drive groups.
● RAID 50
RAID 50 provides data redundancy based on the distributed parity check of
multiple RAID 5 arrays. Each RAID 5 span tolerates one failed member drive
without compromising data integrity.
● RAID 60
RAID 60 provides data redundancy based on the distributed parity check of
multiple RAID 6 arrays. Each RAID 6 span tolerates two failed member drives
without compromising data integrity.
● RAID 0
RAID 0 provides excellent performance. In RAID 0, data is divided into smaller
data blocks and written into different drives. RAID 0 improves I/O
performance because it allows concurrent read and write of multiple drives.
● RAID 1
Drives in this RAID array exist in pairs. When data is written, it is written to
both drives of a pair, which requires more time and resources. As a result,
performance deteriorates.
● RAID 5
RAID 5 provides relatively high data throughput capabilities. Each member
drive stores both common data and check data concurrently. Therefore, each
member drive can be read or written separately. In addition, RAID 5 adopts a
comprehensive cache algorithm. All these features make RAID 5 ideal for
many scenarios.
● RAID 6
RAID 6 is ideal for scenarios that demand high reliability, response rate, and
transmission rate. It provides high data throughput, redundancy, and I/O
performance. However, RAID 6 requires two sets of check data to be written
into each member drive, resulting in a performance deterioration during write
operations.
● RAID 10
The RAID 0 span provides high data transmission rates. In addition, RAID 10
provides excellent data storage capabilities. The I/O performance of RAID 10
improves as the number of its spans increases.
● RAID 50
RAID 50 delivers the best performance in scenarios that require high
reliability, response rate, and transmission rate. The I/O performance of RAID
50 improves as the number of its spans increases.
● RAID 60
RAID 60 applies to scenarios similar to those of RAID 50. However, RAID 60 is
not suited for large-volume write tasks because it requires two sets of parity
data to be written into each member drive, which affects performance during
write operations.
NOTE
● If a RAID controller card has a RAID array requiring no verification (such as RAID 0 and
RAID 1) and a RAID array that needs verification (such as RAID 5 or RAID 6), and the
write policy of both RAID arrays is set to Write Back, the performance of the RAID array
that needs verification deteriorates and the I/O wait increases.
● You are advised to set the write policy of RAID groups that do not require verification to
Write Through to prevent impact on the performance of RAID groups that require
verification.
25
Message: Your VDs that are configured for Write-Back are temporarily running in Write-Through mode. This is caused by the battery being charged, missing, or bad. Allow the battery to charge for 24 hours before evaluating the battery for replacement. The following VDs are affected: %s Press any key to continue.
Cause: Your VDs that are configured for Write-Back are temporarily running in Write-Through mode. This is caused by the battery being charged, missing, or bad.
Action:
● Check the battery cable to ensure that it is connected properly.
● Ensure that the battery is charging properly.
● Contact technical support to replace the battery if the battery is draining out.
43
Message: There was a drive security key error. All secure drives will be marked as foreign. Press any key to continue, or C to load the configuration utility.
Cause: There was a drive security key error. All secure drives will be marked as foreign.
Action: Check if the controller supports self-encrypting drives.
48
Message: DKM new key request failed; controller security mode transition was not successful. Reboot the server to retry request, or press any key to continue.
Cause: DKM new key request failed. Controller security mode transition was not successful.
Action: Check the connection of the EKMS and restart the system to re-establish the connection to the EKMS.
49
Message: Firmware did not find valid NVDATA image. Program a valid NVDATA image and restart your system. Press any key to continue.
Cause: Firmware did not find valid NVDATA image.
Action:
● Update the correct firmware package that has a proper NV data image.
● Check the current firmware version, and if needed, update to the latest firmware version. Updating to the latest firmware version may require importing foreign volumes.
(Entry continued; the beginning of this message is not available.)
Message: Press D to disable this warning (if your controller does not have a battery).
Action:
● Ensure that the battery is charging properly.
● Contact technical support to replace the battery if the battery is draining out.
60
Message: Cache data was lost, but the controller has recovered. This could be because your controller had protected cache after an unexpected power loss and your system was without power longer than the battery backup time. Press any key to continue or C to load the configuration utility.
Cause: Cache data was lost, but the controller has recovered. This could be because your controller had protected cache after an unexpected power loss and your system was without power longer than the battery backup time.
Action:
● Check the memory and battery.
● Check the voltage levels and cache offload timing in case of power loss.
● If necessary, replace the memory or battery.