Hardware Guide
This document provides information about the hardware components and the mechanical and
environmental specifications of the VSP G1000, VSP G1500, and VSP F1500 storage systems.
MK-92RD8007-19
February 2018
© 2014, 2018 Hitachi, Ltd. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and
recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or
Hitachi Vantara Corporation (collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an
essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not
make any other copies of the Materials. “Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials
contain the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for
information about feature and product availability, or contact Hitachi Vantara Corporation at
https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of
Hitachi products is governed by the terms of your agreements with Hitachi Vantara Corporation.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other
individuals to access relevant data; and
2. Verifying that data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including
the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader
agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000,
S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business
Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS,
Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo,
Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks
of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Contents
Preface..................................................................................................... 9
Safety and environmental information................................................................. 9
Intended audience............................................................................................... 9
Product version....................................................................................................9
Release notes....................................................................................................10
Changes in this revision.....................................................................................10
Related documents............................................................................................10
Document conventions...................................................................................... 10
Conventions for storage capacity values........................................................... 12
Accessing product documentation.....................................................................13
Getting help........................................................................................................13
Comments..........................................................................................................13
Front-end directors.............................................................................................36
Supported connectors and protocols............................................................38
Flexible front-end director installation...........................................................41
Supported speeds and cable lengths........................................................... 44
Back-end director...............................................................................................45
Flexible back-end director installation.......................................................... 46
Drive chassis......................................................................................................46
Cache memory...................................................................................................51
Memory operation.........................................................................................53
Data protection............................................................................................. 53
Cache capacity.............................................................................................53
Shared memory............................................................................................54
Cache flash memory..........................................................................................58
Cache flash memory operation.................................................................... 58
Cache flash memory capacity...................................................................... 58
Open-systems compatibility and functionality.............................................. 71
Open-systems host platform support............................................................72
System configuration....................................................................................72
Host modes and host mode options.................................................................. 73
Device Manager - Storage Navigator program.................................................. 73
Chapter 5: Cable connection guidelines...........................................104
Port configurations...........................................................................................104
Power connection diagrams............................................................................ 105
UPS power connection...............................................................................107
Data connection diagrams............................................................................... 108
Extended cable connections............................................................................ 112
Appendix A: Storage system specifications.................................... 131
Hazardous and toxic substances.....................................................................170
Disposal........................................................................................................... 171
Recycling......................................................................................................... 171
Electronic emissions certificates...................................................................... 171
FIPS 140-2 Consolidated Validation Certificate............................................... 175
Glossary........................................................................................... 176
Index................................................................................................. 184
Preface
This guide provides technical information about the Hitachi Virtual Storage Platform
G1x00 and Hitachi Virtual Storage Platform F1500 storage systems.
Read this guide carefully to learn about the storage systems, and keep a copy for reference.
Intended audience
This document is intended for system administrators, Hitachi Vantara representatives,
and authorized service providers who install, configure, and operate VSP G1000, VSP
G1500, and VSP F1500 storage systems.
Readers of this document should be familiar with the following:
■ Data processing and RAID storage systems and their basic functions.
■ The VSP G1000, VSP G1500, and VSP F1500 storage systems and the Product Overview.
■ The Storage Navigator software.
■ The concepts and functionality of storage provisioning operations using Hitachi Dynamic
Provisioning, Hitachi Dynamic Tiering, and Hitachi Data Retention Utility software.
Product version
This document revision applies to storage system microcode version 80-06-02 or later.
Release notes
Read the release notes before installing and using this product. They may contain
requirements or restrictions that are not fully described in this document or updates or
corrections to this document. Release notes are available on Hitachi Vantara Support
Connect: https://knowledge.hitachivantara.com/Documents.
Related documents
The following documents are referenced in this guide or contain more information about
the features described in this document.
Hitachi Virtual Storage Platform G1x00 and Hitachi Virtual Storage Platform F1500
documents:
■ Product Overview, MK-92RD8051
■ System Administrator Guide, MK-92RD8016
■ Provisioning Guide for Mainframe Systems, MK-92RD8013
■ Provisioning Guide for Open Systems, MK-92RD8014
■ Hitachi Universal V2 Rack Reference Guide, MK-94HM8035
■ Mainframe Host Attachment and Operations Guide, MK-96RD645
■ Open-Systems Host Attachment Guide, MK-90RD7037
■ Hitachi SNMP Agent User Guide, MK-92RD8015
For a list of all documents related to the Hitachi Virtual Storage Platform G1x00 and
Hitachi Virtual Storage Platform F1500 storage systems, see the Product Overview.
Document conventions
This document uses the following typographic conventions:
Convention: Monospace
Description: Indicates text that is displayed on screen or entered by the user. Example:
pairdisplay -g group
(For exceptions to this convention for variables, see the entry for angle brackets.)

Convention: < > angle brackets
Description: Indicates variables, including variables in headings. Example:
Status-<report-name><file-version>.csv

Convention: | vertical bar
Description: Indicates that you have a choice between two or more options or arguments.
Examples:
[ a | b ] indicates that you can choose a, b, or nothing.
{ a | b } indicates that you must choose either a or b.
Conventions for storage capacity values
Logical capacity values (for example, logical device capacity, cache memory capacity) are
calculated based on the following values:
Accessing product documentation
Product user documentation is available on Hitachi Vantara Support Connect:
https://knowledge.hitachivantara.com/Documents. Check this site for the most current
documentation, including important updates that may have been made after the release of
the product.
Getting help
Hitachi Vantara Support Connect is the destination for technical support of products and
solutions sold by Hitachi Vantara. To contact technical support, log on to Hitachi Vantara
Support Connect for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers,
partners, independent software vendors, employees, and prospects. It is the destination
to get answers, discover insights, and make connections. Join the conversation today!
Go to community.hitachivantara.com, register, and complete your profile.
Comments
Please send us your comments on this document to
doc.comments@hitachivantara.com. Include the document title and number, including
the revision level (for example, -07), and refer to specific sections and paragraphs
whenever possible. All comments become the property of Hitachi Vantara Corporation.
Thank you!
Chapter 1: VSP G1000, VSP G1500, and VSP
F1500 overview
The following describes the hardware components of the VSP G1000, VSP G1500, and
VSP F1500 storage systems.
System overview
The VSP G1000, VSP G1500, and VSP F1500 are high-capacity, high-performance, unified
block and file enterprise storage systems that offer a wide range of storage and data
services, software, logical partitioning, and unified data replication across heterogeneous
storage systems.
Features
The VSP G1000, VSP G1500, and VSP F1500 storage systems include state-of-the-art
advances in hardware technology that improve reliability, serviceability, and accessibility
to drives and other components when maintenance is required.
■ VSP F1500 all-flash array is configured exclusively with the latest generation of flash
module drives (FMDs) to provide performance optimized for intense I/O operations.
Designed for flash-first, high-performance workloads and leveraging Hitachi's SVOS-
based deduplication and compression, VSP F1500 offers up to five times greater ROI
with unified support for SAN, NAS, and mainframe workloads.
● Accelerated flash architecture delivers consistent, low-latency IOPS at scale.
● Adaptive flash management distributes writes and rebalances load over time.
● Hitachi FMDs deliver enterprise performance with superior functionality and
greater cost value.
■ The VSP G1500 and VSP F1500 are equipped with new virtual storage directors (VSDs).
The VSD uses the latest generation of Intel Xeon 2.3-GHz 8-core microprocessors to
efficiently manage the front-end directors, back-end directors, PCI Express interface,
local memory, and communication with the service processor.
■ Hitachi Accelerated Flash FMD DC2 storage offers a patented data-center-class design
and rack-optimized form factor that delivers more than 8 PB per system. The FMD
DC2 supports a sustained performance of 100,000 8K I/O per second, per device, with
low and consistent response time.
■ The latest 2.5-inch and 3.5-inch 6 Gbps SAS drives support lower power consumption
and higher density per rack with up to 2,304 drives in six 19-inch standard racks. For
more information about drive specifications, see Storage system specifications (on
page 131) . For information about Hitachi racks, refer to the Hitachi Universal V2 Rack
Reference Guide.
■ Hitachi NAS Platform hardware-accelerated network protocols support up to 2 Gbps
throughput for sequential workloads and up to 1.2 million NFS operations per
second.
■ Efficient caching makes up to 2 TB global cache dynamically accessible by all
connected hosts and Hitachi NAS Platform nodes.
■ The HNAS file module provides primary data deduplication using hardware-based
SHA-256 calculation engines. This module achieves up to 90% capacity savings while
maintaining high performance.
■ When each controller is housed in a separate rack, the two controller racks can be
placed up to 100 meters apart. In addition, the drive racks attached to a controller
rack can be placed up to 100 meters from the controller rack. This enables maximum
flexibility to optimize data center space usage and provides ease of access for
operation and maintenance. See the detailed description of this feature and the cable
diagrams in Long cable connections (on page 112) .
■ Expandable cache memory (up to 2 TB per 2-controller system).
■ Nondisruptive migration is available as a service from Hitachi Vantara representatives
as well as by purchasing an optional software license for customer implementation.
Best practice is to use the nondisruptive migration planning service offered by Hitachi
Vantara Global Solution Services (GSS). See Nondisruptive service and upgrades (on
page 20) .
■ High temperature mode is a licensed feature that allows the storage system to
operate at either standard temperature (60.8°F to 89.6°F / 16°C to 32°C) or higher
temperatures (60.8°F to 104°F / 16°C to 40°C) in a data center, saving energy and
cooling costs. See High temperature mode (on page 20).
High performance
Hitachi Vantara offers the highest performance storage systems for the enterprise-class
segment. The high-performance storage system enables consolidation and real-time
applications, a wide range of storage and data services, software, logical partitioning,
along with simplified and unified data replication across heterogeneous storage systems.
Its large-scale, enterprise class virtualization layer, combined with Hitachi Dynamic
Tiering and thin provisioning software, allows you to consolidate internal and external
storage into one pool.
The storage system includes several features that improve system performance:
■ Hitachi Accelerated Flash module drives that support ultra-high I/O rates and ultra-
low latency.
■ Solid-state drives with high-speed response.
■ Device Manager - Storage Navigator and Hitachi Storage Advisor provide integrated
data and storage management to ensure high-speed data transfer between the back-
end directors and small form-factor (SFF) or large form-factor (LFF) drives at 6 Gbps
using a SAS interface.
■ Ability to scale and upgrade system performance.
■ Compression functionality reduces the size of stored data by encoding it, without
discarding any of the data.
■ Deduplication functionality removes duplicate copies, keeping the data in a single
location, when the same data is written to different addresses within the same pool.
■ Disk drives operating at 7,200, 10,000, or 15,000 RPM.
Scalability
The storage systems offer an entirely new type of scalable and adaptable integrated
active-active architecture that supports integrated management. Hitachi storage systems
can be configured in numerous ways to meet performance and storage requirements.
Notes:
1. A VSD pair consists of two VSD blades. Each VSD contains one 8-core processor.
2. Cache memory modules can be either 16 GB or 32 GB, but only one memory module
size can be used in a system.
3. HDS minimum cache per system is 64 GB whether the system contains one or two
controllers.
Flexible connectivity
The storage system supports connectivity to mainframe hosts through FICON® front-end
directors and to open servers via Fibre Channel, iSCSI, and Fibre Channel over Ethernet
(FCoE) front-end directors. The storage system can be configured with a combination of
all of these front-end directors to support both mainframe hosts and open servers
simultaneously.
For details about host connectivity and OS support, see
https://support.hds.com/en_us/interoperability.html.
High reliability
The storage system includes the following features to enhance reliability:
■ Multiple RAID configurations: The system supports RAID 6 (6D+2P and 14D+2P),
RAID 5 (3D+1P and 7D+1P), and RAID 1 (2D+2D and 4D+4D).
■ Duplicate hardware: Every module in the controller chassis and drive chassis is
configured in redundant pairs so that if any module fails, the redundant module takes
over until the failed component is replaced. The redundant hardware includes power
supplies, VSD pairs, cache path controllers, front-end directors, back-end directors,
and drives. If one of these hardware components fails, the storage system continues
normal operation with zero data loss.
■ Protection from power failures: The storage systems have dual-power feeds. In the
event of a partial power loss on one of the feeds, the system operates normally on
the alternate feed until full power is restored. In the event of a full power loss, the
cache backup modules maintain the availability of the cache contents for 32 minutes
while the system copies the system configuration information and all data in the
cache to a cache flash drive (SSD).
High flexibility
The storage systems are available in several configurations, from a small single-rack,
diskless system to a large six-rack system that includes two controller chassis, up to
2,304 SFF drives, up to 1,152 LFF drives, up to 384 SSDs (per controller in a standard
performance back-end configuration) or 1,152 SSDs (per controller in a high-
performance back-end configuration), up to 576 flash module drives, and a total of 2 TB
cache. The systems can be easily reconfigured for more storage capacity.
The storage systems support block-only, file-only, and unified (block and file)
configurations in open and mainframe environments. Unified systems contain Hitachi
Network Attached Storage servers and switches in addition to the block controller and
storage drives.
Software applications
The storage systems provide the foundation for matching application requirements to
different classes of storage and delivering critical services, including:
■ Business continuity services
■ Content management services (search, indexing)
■ Thin provisioning
■ Dynamic Tiering
■ High availability
■ Security services
■ I/O load balancing
■ Data classification
■ File management services
System life
The lifetime of the system is five years when operating in the standard temperature
mode. This lifetime is reduced when operating the system in high temperature mode,
even if you change the system to standard temperature mode later.
Example 1: A new cache flash memory battery has three years of usable life when
operated in a standard temperature environment. If you enable high temperature mode
when the battery is new, the battery life will be reduced to two years.
Example 2: The storage system is used for two years in standard temperature mode, at
which point the cache battery has one year of usable life remaining. If you then enable
high temperature mode, the remaining life of the battery is reduced to eight months.
Temperature measurement
Ambient air temperature is measured by a sensor in the cooling air inlet on each module
in the primary VSD pair on each controller.
Note:
The illustration shown is only an example. The storage system provides
flexibility for placing the controller and drive chassis within the racks. For
more information about system configurations, contact your sales account
representative.
Item Description
4 8U space
Chassis: Single-controller system / Two-controller system
No. of controller chassis: 1 / 2
Front-end directors: 1 / 1
Back-end directors: 0 / 0
Number of racks: 1 / 1
The component model number table covers the VSP G1000, the VSP G1500 (upgrade from
VSP G1000 to G1500), the VSP G1500, and the VSP F1500. Its hard disk drive entries
include the models DKC-F810I-300KCMC, DKC-F810I-600JCMC, DKC-F810I-600J5MC,
DKC-F810I-900JCMC, and DKC-F810I-1R2JCMC, followed by the solid-state drive entries.
Item 3: Virtual storage director (2.1-GHz or 2.3-GHz). Quantity: 2 (1 pair) to 8 (4 pairs).
A VSD may contain either an Intel Xeon 2.1-GHz or 2.3-GHz 8-core microprocessor. The
VSDs must be installed in pairs, and the VSDs control the front-end directors, back-end
directors, PCI Express interface, local memory, and communication to the SVP. The VSDs
are independent of the front-end directors and back-end directors, and can be shared
across them.
Item 4: Cooling fan (intake). Quantity: 5. The five intake fans on the front of the
controller pull air into the controller and distribute it across the controller components.
Item 5: Cache Path Control Adapter (CPA). Quantity: 1 to 4. The CPA uses the built-in
switch to connect the VSDs to the front-end directors, back-end directors, and the cache
backup memory. It distributes data (data routing function) and sends hot-line signals to
the VSD. The shared memory is located on the first CPA cache board in each cluster in the
primary controller.
Item 6: Cooling fan (exhaust). Quantity: 5. The exhaust fans on the rear of the controller
pull hot air away from the components and push it out the back of the rack.
Note:
1. Achieved FIPS 140-2 Level 1 certification.
Front-end directors
A front-end director (FED) is a pair of blades installed in the controller.
The front-end director connects the storage system to the host servers, processes
channel commands from hosts, manages host access to the cache, and controls the
transfer of data between the hosts and the controller cache.
The following FEDs are available:
■ iSCSI
■ Fibre Channel
■ FICON (shortwave and longwave)
■ Fibre Channel over Ethernet (FCoE)
The Fibre Channel FED can be configured with either shortwave or longwave host
connectors. The FICON FED is configured with either longwave or shortwave connectors
that match the wavelength of the mainframe ports.
The following figure shows the port LEDs of a FED, and the following table lists the
description of the port LEDs.
1 Blade Status LED
Dark (off): Power is not supplied to the system. The system is not operational.
Red (on): Board failure. The blade can be replaced while the system is running.
2 Power Supply Status LED
Dark (off): Power is not supplied to the system or, if power is supplied to the system,
the power supply in this blade is operational.
Amber (on): Power supply failure or abnormal voltage in the power supply.
3 Port Status LED (FC/iSCSI)
Dark (off): If system power is off, the port is not ready. If system power is on, the
port is ready.
Green (on): Link is active.
4 Link Activity LED (FC/iSCSI)
Dark (off): No link activity, for three possible reasons: power is off, initialization is
not complete, or, if the system is operational, the port is not being accessed.
Green (on, steady): Link is available and initialization is complete, but a connection to
the host has not been established.
Green (blinking): The port is being accessed and data is being transferred between the
host and the cache.
3 Port Status LED (FICON)
Dark (off): If system power is off, the port is not ready.
Green (on): Link is available and initialization is complete but a connection to the host
has not been established, or the link is active.
4 Link Activity LED (FICON)
Dark (off): No link activity because power is off, initialization is not complete, or, if
the system is operational, the port is not being accessed.
Amber (on, fast blink): The port is being accessed and data is being transferred between
the host and the cache.
Ports
A variety of FED options are available for installation in the controller chassis. The
maximum number of ports configurable in a two-controller system, by FED type, is as
follows:
■ 96 iSCSI ports (10 Gbps, 8-port)
■ 192 Fibre Channel ports (16 Gbps, 16-port)
■ 192 Fibre Channel ports (8 Gbps, 16-port)
■ 96 Fibre Channel ports (16 Gbps, 8-port)
■ 176 FICON ports (16 Gbps, 16-port) available in longwave and shortwave versions
■ 176 FICON ports (8 Gbps, 16-port) available in longwave and shortwave versions
■ 192 FCoE ports (10 Gbps, 16-port)
Note: A storage system can be configured with a mixture of FED pairs thus
providing a variety of port types.
See Site preparation (on page 75) for information about port configurations.
Protocols
Fibre Channel, iSCSI, and Fibre Channel over Ethernet (FCoE) FEDs support open-systems
hosts, while FICON FEDs support mainframe systems.
The following table lists the supported FEDs and protocols.
Notes:
1. Supports remote replication, including TrueCopy®, global-active device, Hitachi
Universal Replicator, and Hitachi Universal Volume Manager.
Note: Each front-end director and back-end director consists of a set of two
blades, as indicated by the numbers in the figure. A VSD pair, however, uses a
single slot, but is sold and installed in pairs.
The following table shows the order of front-end director (FED) installation. If the storage
system includes internal drives, the controller requires a minimum of a single pair of
back-end directors and can be configured to support up to two back-end director pairs. A
storage system that does not include any internal drives is referred to as a diskless
configuration. The term standard describes a controller configured with a single back-end
director pair, while high performance describes a controller configured with two back-end
director pairs.
The front-end director installation order applies to the diskless, standard, and
high-performance models, each built from controller chassis DKC810I-CBXA/CBXAC with
DKC-F810I-CBXB and controller chassis DKC810I-CBXE with DKC810I-CBXF.
Back-end director
A back-end director (BED) is a pair of blades installed in the controller chassis that
controls the data transfer between the cache memory and the internal drives of the
storage system.
Hitachi offers the following BEDs:
■ Standard back-end director
■ Encrypting back-end director
The standard back-end director blades are each equipped with four 6-Gbps SAS ports
and do not support encryption.
The encrypting back-end director (EBED) blades each provide four 6-Gbps SAS ports.
When writing data to the internal drives of the system, the EBED encrypts the data; the
encrypted data-at-rest is decrypted by the EBED as it is read from the drive. The EBED
is certified as FIPS 140-2 Level 2 compliant to meet the strict security standards of
customers managing storage systems. A Hitachi Encryption License Key must be installed
during installation to enable the encryption functionality of the EBED, and an additional
license key, the FIPS 140-2 Level 2 License Key, must be installed to operate in
compliance with the FIPS 140-2 Level 2 specification. For more information about the
encrypting back-end directors and implementing a storage system with FIPS 140-2 Level 2
compliance, contact a Hitachi Vantara representative.
For more information about FIPS 140-2 criteria and certificate for VSP G1x00, see the
following websites:
■ FIPS 140-2: http://csrc.nist.gov/groups/STM/cmvp/standards.html
■ FIPS 140-2 Level 2 certificate #2727 for the VSP G1x00:
http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/1401val2016.htm#2727
The hardware components used in the standard back-end director blades are different
from those used in the encrypting back-end director blades. A BED pair cannot consist of
one standard and one encrypting blade.
Drive chassis
The VSP G1x00 storage systems support three different drive chassis; the VSP F1500
supports only the FMD chassis. All components in the drive chassis are configured in
redundant pairs to prevent system failure, and all components in the drive chassis can be
added or replaced while the storage system is in operation. For detailed information
about the drives in each chassis, see Storage system specifications (on page 131).
The following illustrations show the front and rear panels of the three types of 2U drive
trays, and the following tables describe the connectors and LEDs.
LOCATE LED (orange). ON: Nonfatal error; the storage system can continue operating.
Contact technical support (see Getting help in the preface of this manual).
2 ENC IN LED (green). ON: The port is connected to an OUT port in the controller, either
directly or through another drive box with daisy-chained cables.
3 ENC IN connector. Connects the drives to the ENC OUT port in the controller chassis,
either directly or through another drive box with daisy-chained cables.
4 ENC OUT connector. Connects the drives to the ENC IN port in the controller chassis,
either directly or through another drive box with daisy-chained cables.
5 ENC OUT LED (green). ON: The port is connected to an IN port in the controller, either
directly or indirectly, as previously described.
7 Power supply. Converts 200 VAC to the DC voltages used by the drives and the ENC
adapters.
8 RDY (Ready) LED (green). OFF: No power is supplied to the system or the power supply
has failed. ON: The power supply is operating normally.
10 ALM (Alarm) LED (red). ON: The power supply has failed. Contact technical support
(see Getting help in the preface of this manual).
Item Description
1 Flash module Active LED. Lights when the flash module is activated; blinks at drive
access.
2 Flash module Alarm LED. Lights when the flash module has an error and should be
replaced.
7 ENC adapter. Connects the flash modules to the BEDs in the controller through ENC
cables.
14 Power supply Alarm LED. Lights when the power supply has an error.
Cache memory
The VSP G1000, VSP G1500, and VSP F1500 storage systems can be configured with 64
GB to 1 TB of cache memory per controller. The cache memory is installed in one or two
cache path control adapters (CPA). A CPA feature consists of a pair of redundant blades
that are installed and work together to provide cache and shared memory for the
system. The following figure shows two CPAs (2-3, and 1-4).
Cache memory modules (DIMMs) are available in either 16 GB or 32 GB sizes. The
minimum memory required per controller is 64 GB: either two 16 GB DIMMs or one 32
GB DIMM must be installed in each CPA blade. The memory modules in a system must
all be the same size.
The following table shows minimum and maximum cache capacities per controller. The
figures are doubled for a two-controller system.
For each cache memory module capacity and controller chassis configuration, the table
gives the capacity with one cache path control adapter pair (included with the controller)
and with two cache path control adapter pairs (one included with the controller and an
additional feature added).
Notes:
1. One DIMM minimum, eight DIMMs maximum per board. Two blades/boards per
CPA. One or two CPAs installed per controller.
2. HDS minimum cache per system is 64 GB whether configured with one or two
controllers.
In the figure, CPAs 2 and 3 are the main (required) cache path control adapters, and
CPAs 1 and 4 are the optional cache path control adapters.
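Given the rules above (16 GB or 32 GB modules, one module size per system, one to eight
DIMMs per CPA board, and at least 64 GB per controller), a cache configuration can be
checked mechanically. The following Python sketch encodes those rules as stated; the
function name and inputs are illustrative, not part of any Hitachi tool.

# Sketch: validate a proposed cache DIMM population against the rules above:
# modules are 16 GB or 32 GB, every module in the system is the same size,
# each CPA board holds 1 to 8 DIMMs, and each controller totals >= 64 GB.

def validate_cache_config(dimm_size_gb: int, dimms_per_board: list[int]) -> bool:
    """dimms_per_board lists the DIMM count on each CPA board of one controller."""
    if dimm_size_gb not in (16, 32):
        return False                                  # unsupported module size
    if any(not 1 <= n <= 8 for n in dimms_per_board):
        return False                                  # 1 to 8 DIMMs per board
    return dimm_size_gb * sum(dimms_per_board) >= 64  # controller minimum

print(validate_cache_config(16, [2, 2]))  # True: 64 GB from four 16 GB DIMMs
print(validate_cache_config(32, [1, 1]))  # True: 64 GB from two 32 GB DIMMs
print(validate_cache_config(16, [1, 1]))  # False: only 32 GB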
Memory operation
The controller places all read and write data into the cache. The amount of fast-write
data in cache is dynamically managed by the cache control algorithms to provide an
optimum amount of read and write cache, depending on the workload read and write
I/O characteristics.
Data protection
The VSP G1000, VSP G1500, and VSP F1500 storage systems protect against the loss of data or
configuration information stored in the cache when electrical power fails. The cache is
kept active for up to 32 minutes by the cache backup batteries while the system
configuration and data are copied to the cache flash memory in the cache backup
modules. For more information, see Cache flash memory (on page 58) and Battery
backup operations (on page 125) .
Cache capacity
The recommended amount of cache to install is determined by the RAID level, the
number of drives installed in the system, and whether Hitachi Dynamic Provisioning
(HDP), Hitachi Dynamic Tiering (HDT), Dynamic Cache Residency (DCR), and Universal
Volume Manager (UVM) are applied. The recommended data cache capacity per Cache
Logical Partition (CLPR) = (CLPR capacity) - (DCR Extent setting capacity per CLPR). When
CLPR is not applied to DP/DT/DCR, install the recommended data cache capacity shown
in the following table.
To configure a system for maximum performance, contact your authorized Hitachi
Vantara representative. See Getting Help in the preface of this manual.
Table 12 Recommended data cache capacity when DP, DT, DCR, and UVM are not
being used
Total logical capacity of external volumes Recommended data cache capacity per
+ internal volumes per CLPR CLPR
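To apply the formula above, subtract the DCR Extent setting capacity for the CLPR from
the CLPR capacity to get the recommended data cache capacity. The following Python
sketch restates that calculation; the capacity values are hypothetical examples, not
sizing guidance from the table.

# Sketch: recommended data cache capacity per CLPR, per the formula above:
#   recommended capacity = (CLPR capacity) - (DCR Extent setting capacity)
# The input values below are hypothetical examples.

def recommended_cache_per_clpr(clpr_capacity_gb: float,
                               dcr_extent_capacity_gb: float) -> float:
    """Return the recommended data cache capacity (GB) for one CLPR."""
    return clpr_capacity_gb - dcr_extent_capacity_gb

# A 512 GB CLPR with 64 GB reserved as DCR extents leaves 448 GB of data cache.
print(recommended_cache_per_clpr(512, 64))  # 448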
Shared memory
Shared memory holds storage system configuration information and resides in the cache.
The total cache memory capacity needed by the storage system is the sum of the shared
memory capacity and the cache memory capacity.
The capacity overheads associated with the capacity saving function (data reduction)
include capacity consumed by metadata and capacity consumed by garbage (invalid)
data. For more information, see the Provisioning Guide for Open Systems. The
recommendation is to use 0.2% of the active data size as cache size (200 GB of cache for
every 100 TB of pool capacity to be reduced).
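The 0.2% guideline translates directly into a small calculation: 0.2% of the pool
capacity to be reduced, expressed in GB. The following Python sketch applies it; the
pool sizes are hypothetical inputs.

# Sketch: cache sizing for the capacity saving function, per the guideline
# above (0.2% of active data size; 200 GB of cache per 100 TB of pool
# capacity to be reduced). Decimal units: 1 TB = 1,000 GB.

def capacity_saving_cache_gb(pool_capacity_tb: float) -> float:
    """Return the recommended cache (GB) for a pool capacity (TB) to be reduced."""
    return pool_capacity_tb * 1_000 * 0.002

print(capacity_saving_cache_gb(100))  # 200.0 GB for a 100 TB pool
print(capacity_saving_cache_gb(250))  # 500.0 GB for a 250 TB pool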
The following table shows the shared memory capacity needed depending on the kind of
software applications installed in the system.
For configurations with 1-255 control units (64K LDEVs), the required shared memory
capacity depends on which program products are applied: NDM; SI/VM/FC; TPF;
TC/UR/GAD; HDP/HDT/AF; HDP/HDT/AF extensions 1, 2, and 3; the 64KLDEV extension;
SI; and DC. The capacity ranges from 40 GB for the base combinations of these products
to 80 GB when all of the products and extensions are applied, increasing in 8 GB steps
(40, 48, 56, 64, 72, and 80 GB) as extensions are added.
2. The required cache memory capacity is determined by the storage capacity and the number
of Processor Blades.
The cache flash memory table relates the number of controllers and the number of CFM
features (pairs of boxes) to the memory module size and the resulting CFM size (see note
1 below). For example, 2 CFMs with 128 GB memory modules provide a 1 TB CFM size,
and 4 boxes with 256 GB SSDs provide 2 TB.
Notes:
1. SSD sizes must be the same in all CFMs. Cache must be distributed evenly across
CFMs and controllers.
Note: The small CFM SSDs (128 GB) can be installed in the large cache
backup, allowing for easier and less expensive upgrades.
The storage system supports the following RAID levels: RAID 1, RAID 5, and RAID 6. When
configured in four-drive RAID 5 parity groups (3D+1P), 75% of the raw capacity is
available to store user data, and 25% of the raw capacity is used for parity data.
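The 3D+1P arithmetic above generalizes: the usable share of raw capacity is the number
of data drives divided by the total drives in the parity group. The following Python
sketch applies this to the layouts supported by the storage system; it illustrates the
arithmetic only and is not a capacity-planning tool.

# Sketch: share of raw capacity available for user data, by parity-group
# layout. "data" and "parity" are the D and P counts in layouts like 3D+1P.

def usable_fraction(data: int, parity: int) -> float:
    """Return the fraction of raw capacity that stores user data."""
    return data / (data + parity)

print(usable_fraction(3, 1))   # RAID 5 (3D+1P):  0.75, as stated above
print(usable_fraction(7, 1))   # RAID 5 (7D+1P):  0.875
print(usable_fraction(6, 2))   # RAID 6 (6D+2P):  0.75
print(usable_fraction(14, 2))  # RAID 6 (14D+2P): 0.875
print(usable_fraction(2, 2))   # RAID 1 (2D+2D):  0.5 (mirroring)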
RAID 1
The following two figures illustrate the RAID 1 configurations. The tables following the
figures describe each configuration.
Item Description
Description Mirror disks (duplicated writes). Two disk drives, a primary and a
secondary, compose a RAID pair (mirroring pair), and identical data is written to the
primary and secondary disk drives. The data is distributed across the two RAID pairs.
Advantage RAID 1 is highly usable and reliable because of the duplicated data. It
has higher performance than ordinary RAID 1 (which consists of two disk drives)
because it consists of two RAID pairs.
Item Description
Description Mirror disks (duplicated writes). The two parity groups of RAID 1
(2D+2D) are concatenated and data is distributed across them. In each RAID pair, data
is written in duplicate.
RAID 5
A RAID 5 array group consists of four or eight drives (3D+1P) or (7D+1P). The data is
written across the four drives or eight drives in a stripe that has three or seven data
chunks and one parity chunk. Each chunk contains either eight logical tracks (mainframe)
or 768 logical blocks (open). This RAID 5 implementation minimizes the write penalty
incurred by standard RAID 5 implementations by keeping write data in cache until the
entire stripe can be built, and then writing the entire data stripe to the drives. The 7D+1P
RAID 5 configuration increases usable capacity and improves performance.
The following two figures illustrate the RAID 5 configurations. The tables following the
figures describe each configuration.
Item Description
Advantage RAID 5 supports transaction operations that mainly use small size
random access because each disk can receive I/O instructions
independently. It can provide high reliability and usability at a
comparatively low cost by virtue of the parity data.
Disadvantage The write penalty of RAID 5 is larger than that of RAID 1 because the
pre-update data and pre-update parity data must be read internally so that the parity
data can be updated when data is updated.
Item Description
Description Two or four parity groups (of eight drives each) are concatenated, and the
data is distributed and arranged across the 16 or 32 drives.
Disadvantage The impact when two drives are blocked is significant because twice
or four times the number of LDEVs are arranged in the parity group compared with
RAID 5 (3D+1P). However, the chance that a read of a single block in the parity group
cannot be performed due to failure is the same as for RAID 5 (3D+1P).
Figure 10 Sample RAID 5 3D + 1P Layout (Data Plus Parity Stripe) (on page 65) shows
RAID 5 data stripes mapped across four physical drives. Data and parity are striped
across each drive in the array group. The logical devices (LDEVs) are dispersed evenly in
the array group, so that the performance of each LDEV within the array group is the
same. This figure also shows the parity chunks that are the Exclusive OR (XOR) of the
data chunks. The parity chunks and data chunks rotate after each stripe. The total data
in each stripe is 2304 blocks (768 blocks per chunk) for Open-systems data. Each of these
array groups can be configured as either 3390-x or OPEN-x logical devices. All LDEVs in
the array group must be the same format (3390-x or OPEN-x). For Open systems, each
LDEV is mapped to a SCSI address, so that it has a track identifier (TID) and logical unit
number (LUN).
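Because each parity chunk is the Exclusive OR (XOR) of the data chunks in its stripe, any
single lost chunk can be regenerated by XOR-ing the survivors. The following Python
sketch demonstrates this for a 3D+1P stripe using toy two-byte chunks; real chunks hold
768 logical blocks, and the data values here are illustrative only.

# Sketch: RAID 5 parity as the XOR of the data chunks (3D+1P), on toy data.
# XOR-ing any three of the four chunks reproduces the missing fourth chunk.

from functools import reduce

def xor_chunks(chunks: list[bytes]) -> bytes:
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # three data chunks (3D)
parity = xor_chunks(data)                       # one parity chunk (1P)

# Simulate losing data chunk 1, then rebuild it from the survivors + parity.
rebuilt = xor_chunks([data[0], data[2], parity])
assert rebuilt == data[1]
print(rebuilt.hex())  # 3344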
RAID 6
A RAID 6 array group consists of eight drives (6D+2P). The data is written across the eight
drives in a stripe that has six data chunks and two parity chunks. Each chunk contains
768 logical blocks.
In RAID 6, data can be assured when up to two drives in an array group fail. Therefore,
RAID 6 is the most reliable of the RAID levels.
The following figure illustrates the RAID 6 configuration and the table describes the
configuration.
Note: RAID 6 contains two configurations: 6D+2P (8 disk drives) and 14D+2P
(16 disk drives). The following diagram shows the 6D+2P configuration.
Item Description
Description Data blocks are scattered to multiple disks in the same way as RAID 5
and two parity disks, P and Q, are set in each row. Therefore, data can
be assured even when failures occur in up to two disk drives in a
parity group.
Advantage RAID 6 is much more reliable than RAID 1 and RAID 5 because it can
restore data even when failures occur in up to two disks in a parity
group.
Disadvantage Because the parity data P and Q must both be updated when data is
updated, RAID 6 imposes a heavier write penalty than RAID 5. Random write
performance is lower than RAID 5 when the number of drives creates a bottleneck.
All drives and device emulation types are supported for LDEV striping. LDEV striping can
be used with all storage system data management functions.
CU images
The storage system is configured with one control unit image for each 256 devices (one
SSID per 64 LDEVs or 256 LDEVs) and supports a maximum of 255 CU images in the
primary logical disk controller (LDKC).
The storage system supports the control unit (CU) emulation type 2107.
The mainframe data management features of the storage system can restrict CU image
compatibility.
For more information on CU image support, see the Mainframe Host Attachment and
Operations Guide, or contact your Hitachi Vantara account team.
Note: The 3390-3 and 3390-3R LVIs cannot be intermixed in the same storage
system.
The LVI configuration of the storage system depends on the RAID implementation and
physical data drive capacities. To access the LDEVs, combine the logical disk controller
number (00), CU number (00-FE), and device number (00-FF). All control unit images can
support an installed LVI range of 00 to FF.
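Combining the logical disk controller number (00), the CU number (00-FE), and the device
number (00-FF) yields the full device address. The following Python sketch shows that
composition; the helper names are illustrative and are not part of any Hitachi CLI or API.

# Sketch: composing and parsing a device address in LDKC:CU:LDEV form, per
# the scheme above (LDKC fixed at 00, CU 00-FE, device number 00-FF).

def format_ldev_id(ldkc: int, cu: int, ldev: int) -> str:
    """Return the hexadecimal LDKC:CU:LDEV address string."""
    if ldkc != 0x00 or not 0x00 <= cu <= 0xFE or not 0x00 <= ldev <= 0xFF:
        raise ValueError("value out of range for LDKC:CU:LDEV addressing")
    return f"{ldkc:02X}:{cu:02X}:{ldev:02X}"

def parse_ldev_id(address: str) -> tuple[int, int, int]:
    """Split an LDKC:CU:LDEV string back into its three numeric parts."""
    ldkc, cu, ldev = (int(part, 16) for part in address.split(":"))
    return ldkc, cu, ldev

print(format_ldev_id(0x00, 0xFE, 0xFF))  # 00:FE:FF
print(parse_ldev_id("00:3A:07"))         # (0, 58, 7)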
For maximum flexibility in LVI configuration, the storage system provides the Virtual LVI
feature. Using Virtual LVI, users can configure multiple LVIs under a single LDEV. For
further information on Virtual LVI, see the Provisioning Guide for Mainframe Systems.
Logical units
The storage system is configured with OPEN-V logical unit types. The OPEN-V logical unit
size can vary from 48.1 MB to 4 TB. For information about other logical unit types, for
example, OPEN-9, contact Hitachi Vantara support.
For maximum flexibility in LU configuration, the storage system provides the Virtual LUN
feature. Using Virtual LUN, users can configure multiple LUs under a single LDEV. For
further information on Virtual LUN, see the Provisioning Guide for Open Systems.
Mainframe operations
This section provides high-level descriptions of mainframe compatibility, support, and
configurations.
Mainframe configuration
After a storage system installation is complete, users can configure the storage system
for Mainframe operations.
See the following user documents for information and instructions about configuring
your storage system for Mainframe operations:
■ The Mainframe Host Attachment and Operations Guide, describes and provides
instructions related to configuring the storage system for Mainframe operations,
including FICON attachment, hardware definition, cache operations, and device
operations.
For detailed information about FICON connectivity, FICON or Open intermix
configurations, and supported HBAs, switches, and directors for VSP G1000, VSP
G1500, and VSP F1500, contact customer support.
■ The System Administrator Guide provides instructions for installing, configuring, and
using Device Manager - Storage Navigator to perform resource and data management
operations on the storage systems.
■ The Provisioning Guide for Mainframe Systems provides instructions for converting single
volumes (LVIs) into multiple smaller volumes to improve data access performance.
Open-systems operations
This section provides high-level descriptions of open-systems compatibility, support, and
configuration for storage systems.
Users should plan for path failover (alternate pathing) to ensure the highest data
availability. The logical units can be mapped for access from multiple ports or multiple
target IDs. The number of connected hosts is limited only by the number of Fibre
Channel ports installed and the requirement for alternate pathing within each host. If
possible, the primary path and alternate paths should be attached to different channel
cards.
System configuration
After physical installation of the storage system is complete, users can configure the
storage system for open-systems operations.
Refer to the following documents for information and instructions about configuring
your storage system for open-systems operations:
■ The host attachment guide provides information and instructions to configure the
storage system and data storage devices for attachment to the open-systems hosts.
Note: The storage system queue depth and other parameters are
adjustable. See the Open-Systems Host Attachment Guide for queue depth
and other requirements.
■ The System Administrator Guide provides instructions for installing, configuring, and
using Device Manager - Storage Navigator to perform resource and data management
operations on the storage system.
■ The Provisioning Guide for Open Systems describes and provides instructions for
configuring the storage system for host operations, including FC port configuration,
LUN mapping, host groups, host modes and host mode options, and LUN security.
Each Fibre Channel port on the storage system provides addressing capabilities for up
to 2,048 LUNs across as many as 255 host groups, each with its own LUN 0, host
mode, and host mode options. Multiple host groups are supported using LUN
security.
■ The Hitachi SNMP Agent User Guide describes the SNMP API interface for the storage
systems and provides instructions for configuring and performing SNMP operations.
■ The Provisioning Guide for Open Systems provides instructions for configuring multiple
custom volumes (logical units) under single LDEVs on the VSP G1000, VSP G1500, and
VSP F1500.
Responsibilities
The responsibilities for site planning and preparation are shared by the system users
and Hitachi Vantara support. The required installation planning tasks must be scheduled
and completed to ensure a successful and efficient installation of the storage system.
Customer responsibilities
You are responsible for completing the following tasks and preparing your site for
installation of the storage system.
■ Understand the applicable safety requirements associated with installing a storage
system.
■ Understand the installation requirements for the storage system. You can use the
information in this manual to determine the specific requirements for your
installation. As needed, review the Product Overview to familiarize yourself with the
components, features, and functions of the storage system.
■ Verify that the installation site meets all installation requirements. A checklist is
included in this section to help you with this task.
■ Meet electrical power prerequisites and provide electrical hardware, including cables,
connectors and receptacles for connecting the storage system to site power.
■ As needed, work with Hitachi Vantara support to create an installation plan. Make
sure to allow enough time to complete any changes to the plan, so your site is ready
when the equipment arrives.
Definition of terms
Equipment
The hardware delivered to the customer site that includes the storage system
components. The system can be installed in a Hitachi rack when delivered or
assembled on site. The delivered equipment can include only the system
components if the customer supplies a standard 19-inch rack. Rack specifications
are contained in the Hitachi Universal V2 Rack Reference Guide.
Location
The specific location in the data center (area or footprint on the floor) where the
storage system is installed.
User information
Company
Address
Contact
Phone
Mobile
Contact
Phone
Mobile
Contact
Phone
Mobile
Contact
Phone
Mobile
Notes
Safety requirements
See Safety requirements (on page 163) .
Does the data center provide appropriate fire protection for the
storage systems?
Is the data center free of hazards such as cables that obstruct access
to the equipment?
Delivery Requirements
See General site requirements (on page 79) .
Are all doors, hallways, elevators, and ramps wide enough and high
enough to allow the equipment to be moved from the receiving area
to the installation area?
Can the floors, elevators, and ramps support the weight of the
equipment? See General site requirements (on page 79) .
Storage Requirements
See System storage requirements (on page 80) .
Facilities Requirements
See Data center requirements (on page 81) .
Does the location meet the requirements for service clearance and
cable routing (for example, floor cutouts)? See Equipment clearances
(on page 79) .
Does the installation site meet the floor load rating requirements?
Power Requirements
See Electrical specifications (on page 140) .
Does the data center meet the AC input power requirements? See
Power connection (on page 80) and Electrical specifications (on
page 140) .
Does the data center meet the circuit breaker and plug
requirements? See Data center requirements (on page 81) .
Environmental Requirements
See Environmental specifications (on page 143) .
Temperature
Humidity
Altitude
Air flow
Electrostatic discharge
Electrical/radio frequency
interference
Operational Requirements
See Operational requirements (on page 82) .
Does the data center provide a LAN for Device Manager - Storage
Navigator?
Does the location meet the cable length requirements for the front-
end directors?
Equipment clearances
Receiving area
The receiving dock, storage area, and receiving area must be large enough to allow
movement of and access to crated or packed equipment.
Other areas
The hallways, doorways, ramps and elevators must be wide enough to allow a single
unpacked rack to be moved to the installation location. If there is insufficient space for
unpacking, the storage systems are typically unpacked in the receiving area and the
individual racks with pre-installed equipment are rolled into the data center. For
information about rack dimensions, refer to the Hitachi Universal V2 Rack Reference Guide.
Equipment weight
The floors, elevators, and ramps must be able to support the weight of the delivered
equipment as it is moved to the installation location. Spreader plates can be a
prerequisite for distributing the load and protecting the floor as the equipment is moved
from the receiving area to the installation location. Consult the system bill of materials to
establish the approximate weight of the equipment. See the next paragraph for
information about calculating the exact weight of the equipment.
The weight for a fully configured 2-controller, 6-rack storage system can reach 6,146
pounds / 2,917 kilograms. The exact weight of the equipment depends on the storage
system configuration. The following table provides weights of typical system
configurations.
Note:
The data in the following table was taken from measurements of a system in
a controlled environment.
To calculate the power draw, current draw, and heat output of a specific
system, see Component weight, heat, airflow, and power consumption (on
page 146), or (easier) use the weight and power calculator at the following
URL: http://www.hds.com/go/weight-and-power-calculator.
Contact technical support if you need assistance using this tool.
Grounding
The site and site equipment must meet the following grounding requirements:
■ An insulated grounding conductor that is identical in size and insulation material and
thickness to the grounded and ungrounded branch-circuit supply conductors. It must
be green, with or without yellow stripes, and must be installed as a part of the branch
circuit that supplies the unit or system.
■ Connect the grounding conductor to earth ground at the service equipment or other
acceptable building earth ground. For a high rise steel-frame structure, this can be the
steel frame.
■ The receptacles in the vicinity of the unit or system must include a ground
connection. The grounding conductors serving these receptacles must be connected
to earth ground at the service equipment or other acceptable building earth ground.
Power connection
The AC power input for the storage system has a duplex PDU structure that allows the rack-installed equipment to remain powered on if power is removed from one of the two power distribution units (PDUs).
For more information, see Electrical specifications (on page 140) and Power distribution units for Hitachi Universal V2 Rack (on page 153).
Note: Site power can be connected to the PDUs at either the top or bottom of the racks.
Item             Description
Temperature      The data center must maintain an ambient temperature of 50°F (10°C) to 95°F (35°C).
Humidity         The data center must maintain an ambient humidity of 20% to 80%, noncondensing.
Contamination    The data center must provide adequate protection from dust, pollution, and particulate contamination.
Acoustics        The data center must provide adequate acoustic insulation for operating the system.
Operational requirements
The operational requirements for the storage system include:
■ LAN for Device Manager - Storage Navigator
Device Manager - Storage Navigator communicates with the storage system over a
LAN to obtain system configuration and status information and send user commands
to the storage system. Device Manager - Storage Navigator is an integrated interface
for all resource manager components.
■ Cable length for front-end directors
The following table lists the cable length requirements for the front-end directors in
the storage system.
Table 14 Maximum cable length (shortwave)
Third-party rack support for VSP G1000, VSP G1500, and VSP
F1500 storage systems
You must obtain permission to install VSP G1000, VSP G1500, and VSP F1500 storage
systems into a third-party rack.
Contact your Hitachi Vantara account team or customer support for more information.
Per square meter: 660 lb (300 kg) to 1,540 lb (700 kg)
Single-rack configuration
The following figure shows the service clearances for a single-rack configuration.
Table 16 Floor load rating and required clearances for a single-rack configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0.1   0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Table 17 Floor load rating and required clearances for a two-rack configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0.1   0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Table 18 Floor load rating and required clearances for a two-rack configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0.2   0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Table 19 Floor load rating and required clearances for a three-rack, single-controller configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0     0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Table 20 Floor load rating and required clearances for a three-rack, dual-controller configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0.2   0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Table 21 Floor load rating and required clearances for a four-rack, two-controller system

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0.1   0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Table 22 Floor load rating and required clearances for a four-rack, two-controller system

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0.1   0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Five-rack configuration
The following figure shows the service clearances for a five-rack configuration.
Table 23 Floor load rating and required clearances for a five-rack configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0     0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Six-rack configuration
The following figure shows the service clearances for a six-rack configuration.
Table 24 Floor load rating and required clearances for a six-rack configuration

Required clearances (m)
Floor load rating (kg/m2)   (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
Over 700                    0     0     0     0     0     0     0     0
600                         0     0     0     0     0     0     0     0
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Seven-rack configuration
The following figure shows the service clearances for a seven-rack configuration.
Table 25 Floor load rating and required clearances for a seven-rack configuration
Notes:
1. Actual clearances for installation should be determined after consulting with the
construction specialist responsible for installations in the building. Clearances can
vary depending on the size/layout of the system and building conditions.
2. When various configurations of storage systems are arranged in a row, clearance
values based on the largest storage system configuration should be used.
3. For easier maintenance operations, make clearance (c) as large as possible.
Port configurations
The following figures show the front-end director port configurations.
As shown in the configuration, in addition to the standard 5 meter connection, two optional features can be ordered to enable the separation of the racks containing the two controllers in a dual-controller storage system. The choice of inter-controller connection kit (for example, DKC-F810I-MOD30) depends on whether 30 meter or 100 meter cables are required.
The interconnect kit includes three different cable types of the required length (30 or 100 meter) and two Modcon packages (one package for each controller). Depending on whether both the basic and optional cache platform board features (cache path control adapter) are installed, an additional eight MFC optical cables might also be required along with the cables included with the interconnecting kit. The following cables are used for interconnecting each component in the primary controller to the equivalent component in the secondary controller:
■ 8 x MFC or 16 x MFC Optical Cables (light blue)
■ 2 x Modcon Optical Cables (green)
■ 1 x LAN cable (light grey)
When a customer adopts diverse routing of host cables within their data center for
resiliency and redundancy reasons, it is possible to follow the same approach for the
different cables used to separate the two controller racks. Where redundant routing is
required, all of the CL1 interconnect cables along with one of the Modcon interconnect
cables should be laid through one cable route and all of the CL2 interconnect cables,
including the other Modcon cable, should be directed through an alternate cable route.
The single LAN cable can be laid through either route. The only supported extended cable options are those specified by Hitachi. Intermixing 30 meter and 100 meter cables in a single configuration is not permitted. The proper interconnection kit or cable length is determined by the longest cable route, as in the sketch that follows. When using extended cables between controllers, Hitachi recommends taking precautionary steps, such as routing the cables through cable trays, to protect the cables from accidental physical damage.
Separation of drive-only racks
A system can also be designed to separate a rack that includes a combination of
controllers and drive chassis from multiple racks containing only drive chassis.
The following figure shows a single-controller configuration with extended cabling between rack R0 (containing the primary controller and two drive chassis) and rack R1 (containing two drive chassis), as well as extended cabling between the drive chassis-only racks R1 and R2. Extending the cabling between racks in a twin configuration is also supported.
To avoid I/O latency issues, the sum of the length of all cables (controller-to-drive chassis
cable and drive chassis-to-drive chassis cables) cannot exceed 125 meters.
The following example shows a configuration of a controller controlling a maximum of
six drive chassis.
There are three optional cable kits available to provide the separation of drive-only racks
from either the control rack or adjacent drive-only rack.
Each kit contains eight optical cables in either 5, 30, or 100 meter lengths and provides
enough cables to support the SAS paths from one backend module feature or pair of
disk boards. The number of drive rack interconnection kits required for a specific
configuration depends on various factors including the number of installed backend
modules and racks being separated in the configuration.
The figure illustrates a single controller installed with both the basic and optional
backend modules in a high performance, all-flash configuration using FMDs. The
example uses the extended cable kit to separate the controller rack from the second
drive rack. In this supported configuration, only the cables in the controller interconnect
kit are provided by Hitachi.
When a customer adopts diverse routing of host cables within their data center for
resiliency and redundancy reasons, it is possible to follow the same approach for the
cables that are used to separate the two racks. Where redundant routing is required, all
of the cables extending the SAS paths from any backend modules in CL1 should be laid
through one cable route and all cables extending the SAS paths from any backend
modules in CL2 should be directed through an alternate route. The cables connecting any two racks must be the same length, so the proper cable kit is determined by the longest cable route between the two racks. When using extended cables between racks, Hitachi recommends taking precautionary steps, such as routing the cables through cable trays, to protect the cables from accidental physical damage.
Separated controller and drive-only rack configuration
A separated controller and drive-only rack configuration separates the rack with the
controllers from a rack containing only drive chassis. This particular configuration
combines both options described in the previous two examples.
The following figure shows a Twin controller configuration with extended SAS optical
cabling between R0 rack (containing the primary controller) and R1 rack (containing two
drive chassis), as well as between L0 rack (containing the secondary controller) and L1
rack (containing two drive chassis).
Although not shown in the following figure, the configuration can include an R2 rack
directly connected to, or separated from the R1 rack. Similarly, the configuration can
include an L2 rack directly connected to, or separated from the L1 rack. To avoid I/O
latency issues, the sum of the length of all cables (controller-to-drive chassis cable and
drive chassis-to-drive chassis cables) cannot exceed 125 meters.
Additional guidelines
■ You can implement extended cabling with a system installed in customer-supplied
racks only if the racks meet HDS specifications and are approved by the HDS
Customer Sales and Support (CSS) organization.
■ The high temperature mode option can be implemented on VSP G1x00 systems using
extended cabling.
■ The minimum microcode requirements to support extended cabling include:
● VSP G1000: V02 (microcode 80-02-01-00/01), released in October 2014, must be installed to support the SAS optical cables.
● VSP G1500: SVOS 7.0 (microcode 80-05-01-00/00), released in October 2016, must be installed to support the SAS optical cables.
● To maintain proper functioning of the storage system, keep the microcode level current to ensure that you receive code enhancements and fixes. If the storage system is using an earlier version of microcode, contact an authorized service provider for assistance with planning, ordering, and installing a more current microcode version.
■ Consult a Hitachi Vantara representative for more information about system
configurations and available extended cabling options.
Item Description
2 ENABLE switch: Enables the PS ON/PS OFF switch. See Power on procedures (on page 122).
3 POWER switch: Turns system power on or off. See Power on procedures (on page 122).
5 ALARM LED
■ Off: The system is off, or the system is on and operational without failures.
■ Red: The SVP detects a component failure or other failure condition in the system.
6 MESSAGE LED
■ Off: Power is off, or no system-generated message is in the queue and the SVP has not failed.
■ Amber: On when a service information message (SIM) is generated by either cluster and sent to Device Manager - Storage Navigator and to the users that are set up in Device Manager - Storage Navigator to receive them.
■ Blinking: An SVP failure has occurred in a single-SVP configuration, or both SVPs have failed in a dual-SVP configuration. Does not blink if only one SVP in a dual-SVP configuration fails.
8 BS-ON LED: Indicates the status of the AC power (basic supply) to the system.
■ Off: AC power is not applied to the system from the PDUs.
■ Amber: On when AC power is applied to the system from the PDUs. The fans are running.
Warning: Verify that the storage system has been turned off normally and is in idle mode before turning off the PDU circuit breakers. Otherwise, turning off the PDU circuit breakers can leave the storage system in an abnormal condition.
Power on procedures
Note: The control panel includes a safety feature to prevent the storage
system power from being turned on or off accidentally. The PS ON/OFF switch
does not work unless the ENABLE switch is moved to and held in ENABLE
while the power switch is moved to ON or OFF.
Perform the procedure to turn on the storage system. If applicable, see Power control
panel (on page 119) .
Procedure
1. On the control panel, check that the amber BS-ON LED is lit. It indicates that the storage system is in idle mode.
2. In the PS area on the control panel, move the ENABLE switch to the ENABLE position
and hold it there. While holding the switch in the ENABLE position, move the PS
ON/OFF switch to ON. Then release both switches.
3. Wait for the storage system to complete its power-on self-test and start processes.
Depending on the storage system configuration, this can take several minutes.
The storage system does not go to the READY state until the cache backup batteries
are charged to at least 50%. The process can take 90 minutes if the batteries are
completely discharged. The storage system generates a SIM that provides the status
of the battery charge. See Cache backup batteries (on page 126) for information
about the batteries.
4. When the system self-test is complete and all components are operating normally,
the green READY LED turns ON and the storage system is ready for use.
If the ALARM LED is also ON, or if the READY LED is not ON after 20 minutes, contact
customer support for assistance.
Caution: Except in an emergency, do not turn off the PDU breakers before turning off the power to the system. Otherwise, the system reacts as if a power failure occurred and uses the cache backup batteries to keep the cache active until the data in the cache is transferred to the cache backup flash memory. If the cache backup batteries are partially discharged, the next power-on time can be prolonged, depending on the amount of charge remaining in the batteries. Fully discharged batteries take 90 minutes to charge to the 50% level required for power-on.
Note: The control panel includes a safety feature to prevent the storage
system power from being turned on or off accidentally. The PS power ON/OFF
switch does not work unless the ENABLE switch is moved to and held in
ENABLE while the power switch is moved to ON or OFF.
Procedure
1. In the PS area on the control panel, move the ENABLE switch to the ENABLE position and hold it there. While holding the switch in the ENABLE position, move the PS ON/OFF switch to OFF. Then release both switches.
2. Wait for the storage system to complete its shutdown routines. Depending on the storage system configuration and certain MODE settings, you might need to wait up to 20 minutes for the storage system to copy data from the cache to the cache flash drives and for the disk drives to spin down.
If the READY and PS LEDs do not turn OFF after 20 minutes, contact customer
support for assistance.
Note: When turning off the storage system, first turn off the PDUs connected to the controllers and then turn off the PDUs connected to the drive trays.
Procedure
1. Open the back doors of both racks that contain control units.
2. Turn off the circuit breakers in the following order:
a. Turn off the circuit breakers in both lower PDUs in both racks.
b. Turn off the circuit breakers in both upper PDUs in both racks with control
units.
3. Open the back doors of all racks containing only drive units and, in any order, turn
off the circuit breakers to all the PDUs.
Note: When turning on the storage system, first turn on the PDUs connected to the drive trays and then turn on the PDUs connected to the controllers.
Procedure
1. In all system racks, turn on the circuit breakers in the PDUs supplying electrical
power to the drive units.
2. In both controller racks, turn on the circuit breakers in the PDUs supplying electrical
power to the controllers.
3. Turn on power to the system. For more information, see Normal power On/Off
procedures (on page 121) .
Item Description
3 The cache memory data and storage system configuration are backed up to the cache flash memory in the cache backup assemblies. If power is restored during the backup, the backup stops if the remaining battery charge is more than 50%. In this case, the system operates in write-through mode until the batteries are charged enough for a full backup.
4 Data is stored in the cache flash memory until power is restored, and
then it is written to the drives.
Note: The storage system generates a SIM when the cache backup batteries
are not connected.
Battery life
The batteries have a lifespan of three years and hold a charge for a limited amount of time when disconnected. When the batteries are connected and power is on, the batteries charge continuously. This occurs during both normal system operation and while the system is in idle mode.
When the batteries are connected and the power is off, the batteries slowly discharge. The batteries fall below a 50% charge after two weeks without power. When the batteries are fully discharged, they must be connected to power for three hours to fully recharge.
Note: The storage system generates a SIM when the cache backup batteries
are not charged to at least 50%. The LEDs on the front panel of the cache
backup kits also show the status of the batteries.
Getting help
If you continue to experience technical difficulties after troubleshooting the storage system, contact Hitachi Vantara Support at https://support.hds.com/en_us/contact-us.html.
Solving problems
The following table lists possible error conditions and recommends actions to resolve
each condition for VSP G1000, VSP G1500, and VSP F1500 storage systems.
If you cannot resolve an error condition, contact your Hitachi Vantara representative or
contact customer support for assistance.
Table 28 Troubleshooting errors

Error condition           Recommended action
Error message displayed   Determine the type of error (refer to the SIM codes section). If possible, fix the cause of the error. If you cannot correct the error condition, contact customer support for assistance.
General power failure     Turn off all PDU switches and breakers. After the facility power is fully restored, turn on the switches and breakers and power on the system. See Turning power on and off to the storage systems (on page 119) for instructions about turning on the power to the storage system. If necessary, contact customer support for assistance.
ALARM LED is on           If there is a temperature problem in the area, turn the power off to the storage system, lower the room temperature to the specified operating range, and then turn on the power to the storage system. If necessary, contact customer support for assistance with turning on the power to the storage system. If the area temperature is not the cause of the alarm, contact customer support for assistance.
The following figure illustrates a typical 32-byte SIM from the storage system. SIMs are displayed by reference code (RC) and severity. The six-digit RC, composed of bytes 22, 23, and 13, identifies the possible error and determines the severity. The SIM type, located in byte 28, indicates which component experienced the error.
Note: The current and power specifications of the storage system in the following tables were measured in a controlled environment. To calculate the power and current draw, and heat output of a specific system, see Component power consumption, heat output, and airflow (on page 146) or use the weight and power calculator at the following URL: http://www.hds.com/go/weight-and-power-calculator/.
If you need assistance using this tool, contact Hitachi Vantara Support at https://support.hds.com/en_us/contact-us.html.
Item          Specification
iSCSI/FCoE    10 Gbps
Microcode     Support
Notes:
1. Available as spare drive or data disks.
2. The port can be changed to long wavelength by replacing the SFP transceiver of the fibre port on the CHB with the DKC-F810I-1PL8.
3. The port can be changed to long wavelength by replacing the SFP transceiver of the fibre port on the CHB with the DKC-F810I-1PL16.
4. Measurement condition: measured at a point 1 m from the floor and from the surface of the product.
5. Even if the storage system is powered off, the cooling fans continue to spin in standby mode.
6. Does not include the spare drive.
7. The DKC-F810I-1R6FM/3R2FM and DKC-F710I-1R6FM/3R2FM cannot be used at 40°C.
Item          Specification
iSCSI/FCoE    10 Gbps
Microcode     Support
Notes:
1 Available as spare or data disks.
Type   Size (inches)1   Capacity   Speed (RPM)   Transfer rate (Gbps)
                        600 GB     10,000        -
       3.5              400 GB     -             6

Drive type      Drive chassis (inches)   Max per drive chassis   Max per 2-controller system
Spare drives5   -                        48                      96
Notes:
1. The LFF drive chassis uses 3.5-inch drives. The SFF drive chassis uses 2.5-inch drives.
Dimension               Single rack   Single controller (3 racks)   Dual controller (6 racks)

System weight (lb/kg)   Single rack               Single controller (3 racks)   Dual controller (6 racks)
Diskless                1 controller: 638/290     -                             -
                        2 controllers: 983/446
Inrush Current
Notes:
1. The maximum current when the AC input is not in a redundant configuration (at 184 V [200 V - 8%]).
2. The maximum current when the AC input is in a redundant configuration (at 184 V [200 V - 8%]).
3. 110/120 VAC systems are not supported.
Phase    Location     PDU               Plug                                            Operating voltage rating   Max current rating   Max no. of CB per PDU   Breaker rating
Three3   Americas     PDU-32C13800F10   NEMA L15-30P, 3 pole, 4 wire (A+B+C+gnd)        208 VAC / 240 VAC          30A per phase        3                       15A 2 pole UL489
         EMEA, APAC   A3CK-243694-50    IEC 309, red, 4 pole, 5 wire (A+B+C+Neut+gnd)   400 VAC                    32A per phase        3                       16A 2 pole UL489
Notes:
1. The numbers in this table were taken from the PDU manufacturer's specifications. For information about PDUs, see Power distribution units for Hitachi Universal V2 Rack (on page 153).
2. Americas: Single-phase, 30 Amp PDU, (12) IEC C13. EMEA/APAC: Single-phase, 32 Amp PDU, (12) IEC C13; (2) IEC C19.
3. Americas: Method three-phase, 30 Amp PDU, (24) IEC C13; (6) IEC C19. EMEA/APAC: Minkels three-phase, 32 Amp PDU, (24) IEC C13; (6) IEC C19.
Condition
Random vibration: 0.147 m/s2 (note 3), 30 min, 5 Hz to 100 Hz (note 7)
Vertical: rotational edge, 0.15 m (note 9)
Notes:
1. Environmental specifications for operation should be met before the storage system is powered on. The maximum temperature of 90°F/32°C at the system air inlet should be strictly met.
2. Unless otherwise specified, the non-operating condition includes both packing and unpacking
conditions.
3. The system and components are packed in factory packing for shipping and storing.
4. No condensation in and around the drives should be observed under any conditions.
5. Vibration specifications are applied to all three axes.
6. See ASTM D999-01, Standard Test Methods for Vibration Testing of Shipping Containers.
7. See ASTM D4728-01 Standard Test Methods for Random Vibration Testing of Shipping
Containers.
8. See ASTM D5277-92 Standard Test Methods for Performing Programmed Horizontal Impacts
Using an Inclined Impact Tester.
9. See ASTM D6055-96 Standard Test Methods for Mechanical Handling of Unitized Loads and
Large Shipping Cases and Crates.
10. Applies only when flash module drives are installed.
11. See ANSI/ISA-71.04-2013 Environmental Conditions for Process Measurement and Control
Systems: Airborne Contaminants.
The following table lists the maximum acoustic emission values [loudness in dB(A)] for
the VSP G1000, VSP G1500, and VSP F1500 storage systems in standard and high
temperature modes.
Controller chassis

Item    Temperature (°F/°C)    Fan speed (RPM)    Noise level (dB)

Component    Component model number    Weight (lb/kg)    Power consumption (VA)    Heat output    Airflow (m3/min)
             DKC-F810I-MP2UGH
Notes:
1. Maximum values with all fans spinning at maximum speed.
2. Power is consumed during the battery back-up time only. The idle power is included in
DW700-CBX.
3. Actual values at a typical I/O condition. (Random read and write, 50 IOPS for HDD, 2500 IOPS
for SSD/FMD, data length of 8 Kbytes) These values can increase for future compatible drives.
4. The component does not contain BNST.
5. Actual values at a typical I/O condition. (Random read and write, 50 IOPS for HDD, 2500 IOPS
for SSD/FMD, data length of 8 Kbytes)
These values can increase for future compatible drives.
Caution:
■ Before installing third-party devices into the rack, check the electrical
current draw of each device. Verify the electrical specifications and
allowable current load on each PDU before plugging the device into the
PDU.
■ Balance the electrical current load between available PDUs.
Figure 40 Americas PDU for the Hitachi Universal V2 Rack (Single-phase PDU
1P30A-8C13-3C19UL.P)
Figure 41 Americas PDU for the Hitachi Universal V2 Rack (Single-phase PDU
1P30A-15C13-3C19UL.P)
Figure 42 Americas PDU for the Hitachi Universal V2 Rack (Three-phase PDU
3P30A-8C13-3C19UL.P)
Figure 43 Americas PDU for the Hitachi Universal V2 Rack (Three-phase PDU
3P30A-15C13-3C19UL.P)
Figure 44 Americas PDU for the Hitachi Universal V2 Rack (Three-phase PDU
3P30A-24C13-6C19UL.P)
Figure 45 APAC and EMEA PDU for the Hitachi Universal V2 Rack (Single-phase
1P32A-9C13-3C19CE.P)
Figure 46 APAC and EMEA PDU for the Hitachi Universal V2 Rack (Single-phase
1P32A-18C13-3C19CE.P)
Figure 47 APAC and EMEA PDU for the Hitachi Universal V2 Rack (Three-phase
3P16A-9C13-3C19CE.P)
Figure 48 APAC and EMEA PDU for the Hitachi Universal V2 Rack (Three-phase
3P16A-15C13-3C19CE.P)
Figure 49 APAC and EMEA PDU for the Hitachi Universal V2 Rack (Three-phase
3P32A-24C13-6C19CE.P)
Regulatory compliance
This equipment has been tested and certified to meet the following standards.
Table 38 VSP G1000, VSP G1500, and VSP F1500 certifications

Standard                             Specification                                                              Mark on the product   Country
Safety certification                 TUV Safety Report and TUV-NRTL Certification, FCC Verification Report     Yes (TUV)             EU, North America
Electronic emission certifications   TUV Safety Report, EMC Report, TUV GS License, EMC Certificate, CE Mark   Yes (CE Mark)         European Union
Table 39 VSP G1000, VSP G1500, and VSP F1500 certifications by region

Certification   Region   Regulatory Standard   Certificate No. and Report No.
Photo Documentation Numbers: 12030097, 12030890, 12028263, S1-50245594
FBX_SWPSPPD6001: 2012010907575263
US FCC Notice
Federal Communications Commission
This equipment has been tested and found to comply with the limits for a Class A digital
device, pursuant to part 15 of the FCC Rules. These limits are designed to provide
reasonable protection against harmful interference when the equipment is operated in a
commercial environment.
This equipment generates, uses, and can radiate radio frequency energy and, if not
installed and used in accordance with the instruction manual, may cause harmful
interference to radio communications. Operation of this equipment in a residential area
is likely to cause harmful interference, in which case users will be required to correct the
interference at their own expense.
China RoHS

Part                 (Pb)   (Hg)   (Cd)   (Cr(VI))   (PBB)   (PBDE)
Controller chassis   X      O      O      O          O       O
Drive chassis        X      O      O      O          O       O

The symbol O indicates that the toxic or hazardous substance contained in all of the homogeneous materials used for this part is below the limit requirement in SJ/T 11363-2006.
The symbol X indicates that the toxic or hazardous substance contained in at least one of the homogeneous materials used for this part is above the limit requirement in SJ/T 11363-2006.
Disposal
Note: This symbol on the product or on its packaging means that your electrical and electronic equipment should be disposed of at the end of its life separately from household waste. There are separate collection systems for recycling in the EU and in many cities in the USA. For more information, contact the local authority or the dealer from whom you purchased the product.
Recycling
Glossary
Command Control Interface (CCI)
Software used to control volume replication functionality (such as TrueCopy or ShadowImage)
by means of commands issued from a host to a storage system. A command device must be
set up in the storage system to enable the storage system to receive commands from CCI.
In an open system, Replication Manager uses the CCI configuration definition files to modify
copy pair configurations and to acquire configuration information. Copy pair modification
processing, such as splitting and resynchronizing copy pairs, is executed on the storage
system via CCI.
command device
A dedicated logical volume used to interface with the storage system. Can be shared by
several hosts.
controller box
The enclosure that contains the storage system controller. For some models, disk drives may
be included as well. Controller boxes come in 2U and 3U versions.
■ CBL: AC-powered 3U controller box.
■ CBLE: AC-powered 2U controller box with support for encryption.
■ CBLD: DC-powered 3U controller box.
■ CBLE: 3U controller box that supports encryption.
■ CBSL controller box: A 3U controller box that can contain a maximum of 12 3.5-inch
drives.
■ CBSS controller box: A 2U controller box that can contain a maximum of 24 2.5-inch
drives.
■ CBXSL controller box: A 3U controller box that can contain a maximum of 12 3.5-inch
drives.
■ CBXSS controller box: A 2U controller box that can contain a maximum of 24 2.5-inch drives.
CRC
See cyclic redundancy check.
cyclic redundancy check (CRC)
An error-correcting code designed to detect accidental changes to raw computer data.
differential management-logical unit
disaster recovery
A set of procedures to recover critical application data and processing after a disaster or other failure. Disaster recovery processes include failover and failback procedures.
DMLU
See differential management-logical unit.
drive box
Chassis for mounting drives that connect to the controller box.
■ Drive boxes with AC power supply:
● DBS, DBL, DBF: Drive box (2U)
● DBX: Drive box (4U)
● DBW: Drive box (5U)
■ Drive boxes with DC power supply:
● DBSD: Drive box (2U)
● DBLD: Drive box (2U)
drive I/O module
I/O module for the controller box that has drive interfaces.
duplex
The transmission of data in either one or two directions. Duplex modes are full-duplex and
half-duplex. Full-duplex is the simultaneous transmission of data in two directions. For
example, a telephone is a full-duplex device, because both parties can talk at once. In
contrast, a walkie-talkie is a half-duplex device because only one party can transmit at a time.
ethernet
A computer networking technology for local-area networks.
extent
A contiguous area of storage in a computer file system that is reserved for writing or storing a
file.
fabric
Hardware that connects workstations and servers to storage devices in a storage area network (SAN). The SAN fabric enables any-server-to-any-storage-device connectivity through the use of Fibre Channel switching technology.
failback
The process of restoring a system, component, or service in a state of failover back to its
original state (before failure).
failover
Automatic switching to a redundant or standby computer server, system, hardware
component, or network upon the failure or abnormal termination of the previously active
application, server, system, hardware component, or network. Failover and switchover are
essentially the same operation, except that failover is automatic and usually operates without
warning, while switchover requires human intervention.
fault tolerance
A system with the ability to continue operating, possibly at a reduced level, rather than failing
completely, when some part of the system fails.
FC
Fibre Channel
FC-AL
See arbitrated loop.
FCoE
Fibre Channel over Ethernet. An encapsulation of Fibre Channel frames over Ethernet
networks. This allows Fibre Channel to use 10-gigabit Ethernet networks (or higher speeds)
while preserving the Fibre Channel protocol.
Fibre Channel (FC)
A technology for transmitting data between computer devices at a data rate of up to 4 Gbps. It
is especially suited for attaching computer servers to shared storage devices and for
interconnecting storage controllers and drives.
firmware
Software embedded into a storage device. It may also be referred to as microcode.
flash module (FMD)
A high speed data storage device that includes a custom flash controller and several flash
memory sub-modules on a single PCB.
full-duplex
Transmission of data in two directions simultaneously. For example, a telephone is a full-
duplex device because both parties can talk at the same time.
Gbps
Gigabit per second.
gigabit ethernet
A version of ethernet that supports data transfer speeds of 1 gigabit per second. The cables
and equipment are very similar to previous ethernet standards.
GUI
graphical user interface
HA
High availability.
half-duplex
Transmission of data in just one direction at a time. For example, a walkie-talkie is a half-
duplex device because only one party can talk at a time.
HBA
See host bus adapter.
host
One or more host bus adapter (HBA) world wide names (WWN).
host bus adapter (HBA)
One or more dedicated adapter cards that are installed in a host, have unique WWN
addresses, and provide Fibre Channel I/O connectivity to storage systems, typically through
Fibre Channel switches. Unlike general-purpose Ethernet adapters, which handle a multitude
of network protocols, host bus adapters are dedicated to high-speed block transfers for
optimized I/O performance.
host I/O module
I/O module for the controller box. The host I/O module provides interface functions for the host.
I/O
input/output
I/O card
The I/O card (ENC) is installed in a DBX. It provides interface functions for the controller box or
drive box.
I/O module
The I/O module (ENC) is installed in a DBS/DBSD/DBL/DBLD/DBF/DBW. It provides interface
functions for the controller box or drive box.
IEEE
Institute of Electrical and Electronics Engineers. A non-profit professional association best
known for developing standards for the computer and electronics industry. In particular, the
IEEE 802 standards for local-area networks are widely followed.
IOPS
I/Os per second
iSCSI
Internet Small Computer Systems Interface
iSCSI initiator
iSCSI-specific software installed on the host server that controls communications between the
host server and the storage system.
iSNS
Internet Storage Naming Service. An automated discovery, management, and configuration
tool used by some iSCSI devices. iSNS eliminates the need to manually configure each
individual storage system with a specific list of initiators and target IP addresses. Instead, iSNS
automatically discovers, manages, and configures all iSCSI devices in your environment.
LAN
See local area network.
load
In UNIX computing, the system load is a measure of the amount of work that a computer
system is doing.
local area network (LAN)
A computer network that spans a relatively small geographic area, such as a single building or
group of buildings.
logical
Describes a user's view of the way data or systems are organized. The opposite of logical is physical, which refers to the real organization of a system. A logical description of a file is that it is a quantity of data collected together in one place. The file appears this way to users. Physically, the elements of the file could live in segments scattered across a disk.
microcode
The lowest-level instructions directly controlling a microprocessor. Microcode is generally
hardwired and cannot be modified. It is also referred to as firmware embedded in a storage
subsystem.
Microsoft Cluster Server
A clustering technology that supports clustering of two NT servers to provide a single fault-
tolerant server.
pair
Two logical volumes in a replication relationship in which one volume contains original data to
be copied and the other volume contains the copy of the original data. The copy operations
can be synchronous or asynchronous, and the pair volumes can be located in the same
storage system (in-system replication) or in different storage systems (remote replication).
pair status
Indicates the condition of a copy pair. A pair must have a specific status for specific
operations. When a pair operation completes, the status of the pair changes to a different
status determined by the type of operation.
parity
In computers, parity refers to a technique of checking whether data has been lost or written
over when it is moved from one place in storage to another or when transmitted between
computers.
Parity computations are used in RAID drive arrays for fault tolerance by calculating the data in
two drives and storing the results on a third. The parity is computed by XOR'ing a bit from
drive 1 with a bit from drive 2 and storing the result on drive 3. After a failed drive is replaced,
the RAID controller rebuilds the lost data from the other two drives. RAID systems often have
a "hot" spare drive ready and waiting to replace a drive that fails.
parity group
See RAID group.
point-to-point
A topology where two points communicate.
port
An access point in a device where a link attaches.
primary site
The physical location of a storage system that contains original data to be replicated and that
is connected to one or more storage systems at a remote or secondary site via remote copy
connections. A primary site can also be called a “main site” or “local site”.
The term "primary site" is also used for host failover operations. In that case, the primary site
is the location of the host on which the production applications are running, and the
secondary site is the location of the host on which the backup applications that run when the
applications at the primary site have failed.
RAID
redundant array of independent disks
A collection of two or more disk drives that presents the image of a single logical disk drive to
the system. Part of the physical storage capacity is used to store redundant information about
user data stored on the remainder of the storage capacity. In the event of a single device
failure, the data can be read or regenerated from the other disk drives.
RAID employs the technique of disk striping, which involves partitioning each drive's storage
space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all
the disks are interleaved and addressed in order.
RAID group
A redundant array of inexpensive drives (RAID) that have the same capacity and are treated as
one group for data storage and recovery. A RAID group contains both user data and parity
information, which allows the user data to be accessed in the event that one or more of the
drives within the RAID group are not available. The RAID level of a RAID group determines the
number of data drives and parity drives and how the data is "striped" across the drives. For
RAID1, user data is duplicated within the RAID group, so there is no parity data for RAID1 RAID
groups.
A RAID group can also be called an array group or a parity group.
remote path
A route connecting identical ports on the local storage system and the remote storage system.
Two remote paths must be set up for each storage system (one path for each of the two
controllers built in the storage system).
SAN
See storage area network.
SAS
See Serial Attached SCSI.
SAS cable
Cable for connecting a controller box and drive box.
Secure Sockets Layer (SSL)
A common protocol for managing the security of message transmission over the Internet.
Two SSL-enabled peers use their private and public keys to establish a secure communication
session, with each peer encrypting transmitted data with a randomly generated and agreed-
upon symmetric key.
Serial Attached SCSI (SAS)
A replacement for Fibre Channel drives in high-performance applications. See also SCSI.
snapshot
A term used to denote a copy of the data and data-file organization on a node in a disk file
system. A snapshot is a replica of the data as it existed at a particular point in time.
SNM2
See Storage Navigator Modular 2.
storage area network (SAN)
A network of shared storage devices that contain disks for storing data.
Storage Navigator Modular 2
A multi-featured scalable storage management application that is used to configure and
manage the storage functions of Hitachi storage systems.
striping
A way of writing data across drive spindles.
target
The receiving end of an iSCSI conversation, typically a device such as a disk drive.
URL
Uniform Resource Locator
world wide name
A unique identifier that identifies a particular fibre channel target.
zoning
A logical separation of traffic between hosts and resources. By breaking a network up into zones, processing activity is distributed evenly.
Index

A
architecture
  system 60
authorized service provider 75

B
back-end director 45, 60
backup 58
backup battery 126
battery 127
battery backup 125
battery life 125
BED 45, 46, 60
breaker configuration 107

C
cable connection 104, 108
cache 58, 60, 125, 126
cache flash memory 58
cache memory 51
cache path control adapter 51
certificate 175
certification 175
certifications
  compliance 166
  Europe 169
  Japan 170
  US FCC 169
CFM 58
CHA 36
channel adapter 36, 38, 104
chassis
  drive 46
checklist 76
circuit breaker 123, 124
clearances
  equipment 79
components
  drive chassis 46
conditions 128
configuration 104, 112
connection 105, 107
connector 38
control panel 119
controller 60
controller chassis 23, 60, 108, 112

D
data 108
diagram 105
DIMM 51
drive chassis 23, 112
drive chassis components 46
drive tray 108

E
electrical specifications 140
emergency 123, 124
equipment
  clearances 79
equipment weight 79
export controls 170
extended cable connection 112

F
FED 60
Fibre Channel 36, 38
Fibre Channel over Ethernet (FCoE) 36, 38
FICON 36, 38
FIPS 140-2 175
flexible back-end director 46
front-end director 36, 38, 60, 104

G
guidelines
  access by authorized personnel 163
  cabling 163
  earthquake safety 163
  electrical safety 165
  equipment modifications 163
  fire protection 163
  hazards 163
  loose clothing 164
  moving equipment 164
  operating in storms 164
  power cables 164
  safety glasses 164
  walkways and floors 164
  warning and safety labels 163
  work safety 164

H
hardware 60
host 36, 38
host connectivity 17
host modes 71, 73

I
installation 46, 75
iSCSI 36, 38

L
life 127
logical units 68, 69
longwave 36, 38

M
mainframe 69, 70
microprocessor 60

O
operating systems 17
operations
  battery backup 125

P
part 164
PDU 60, 121, 123, 124
port 104
power 105, 107, 119, 123, 124
power distribution unit
  overview 153
  specifications
    Americas
      single-phase 153, 154
      three-phase 154–156
    APAC and EMEA
      single-phase 157, 158
      three-phase 159–161
power failure 125
preparation 75
prerequisite 119
problems 128
procedure 119
procedures
  power off 122
  power on 122
protocol 38

R
rack 112
RAID groups 61
RAID implementation 61
requirement 119
requirements
  airflow 146
  cable length 44, 82
  circuit breakers 80
  data center 81
  delivery 76
  facilities 76
  general 163
  grounding 80
  installation 75
  LAN 82
  operational 76, 82
  plugs 80
  power 76
  power connection 80
  safety 75, 76, 163
  service clearance 83
  site 79
  storage 76, 80
responsibilities
  support team 76
  user 75
responsibility 75

S
safety 164
SAS 108
shortwave 36, 38
site planning 75
Solve 128
specifications
  cable length 131
  drive 131
  environmental 143
  heat output 146
  load rating 83
  mechanical 131, 139
storage area network 38
storage system 23, 75, 119, 127, 128
switches
  power 122
system idle mode 121
system reliability 23

T
technological advances 23
third-party racks
  VSP F1500 83
  VSP G1000 83
  VSP G1500 83
troubleshoot 128
turn 119
turn off 123
turn on 123, 124

U
UPS 107
user 75

V
virtual storage director 60
VSP F1500 61
VSP G1000 61
VSP G1000, VSP G1500, and VSP F1500 131, 139, 140
VSP G1500 61

W
warning 164
Hitachi Vantara
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HitachiVantara.com | community.HitachiVantara.com

Regional Contact Information
Americas: +1 866 374 5822 or info@hitachivantara.com
Europe, Middle East, and Africa: +44 (0) 1753 618000 or info.emea@hitachivantara.com
Asia Pacific: +852 3189 7900 or info.marketing.apac@hitachivantara.com