
HDS Architect – Business Continuity 1 / Certification Preparation Course
TXE0780

Courseware Version 2.2


Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local
sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have
accepted these terms and conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA,
EVEN IF HDS IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United
States and/or other countries:

Hitachi Data Systems Registered Trademarks


Essential NAS Platform Hi-Track ShadowImage TrueCopy

Hitachi Data Systems Trademarks


Hi-PER Architecture Hi-Star NanoCopy Resource Manager SplitSecond
Universal Star Network Universal Storage Platform
All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and
1TB for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
©2009, Hitachi Data Systems Corporation. All Rights Reserved
HDS Academy 0029

Contact Hitachi Data Systems at www.hds.com.



Products Used in this Course
Following are the trademark names of the products used in this course:

Legacy Hitachi Modular Storage Systems:


• Hitachi Lightning 9500™ V Series modular storage systems
• Hitachi Thunder 9520V™ workgroup modular storage
• Hitachi Thunder 9530V™ entry-level storage
• Hitachi Thunder 9570V™ high-end modular storage
• Hitachi Thunder 9585V™ high-end modular storage

Hitachi Enterprise Storage Systems


y Hitachi Universal Storage Platform™
y Hitachi Universal Storage Platform™ V
y Hitachi Universal Storage Platform™ VM

Hitachi Modular Storage Systems


y Hitachi Network Storage Controller, model NSC55

Hitachi Storage Command Suite:


y Hitachi Protection Manager software
y Hitachi Tuning Manager software

Hitachi Replication Software:


Remote Replication:
y Hitachi TrueCopy® Synchronous software
y Hitachi TrueCopy® Asynchronous software
y Hitachi TrueCopy® Remote Replication software bundle
y Hitachi TrueCopy® Extended Distance software
y Hitachi Universal Replicator Software

In-System Replication:
y Hitachi ShadowImage® Heterogeneous Replication software
y Hitachi Copy-on-Write Snapshot Software


Contents

INTRODUCTION
  Welcome and Introductions
  Course Description
  Course Prerequisites
  Exercise: What Would You Like To Get Out of This Course?
  Course Objectives
  Course Topics
  Learning Paths

1. UNDERSTANDING BUSINESS AND RECOVERY REQUIREMENTS
  Module Objectives
  RPO versus RTO
  Data Protection Classifications
  Regulatory Requirements
  Recovery Techniques
  Exercise: Data Gathering
  Data Gathering
  Exercise: Discover the Environment
  Assess the Environment

2. REPLICATION STRATEGIES
  Module Objectives
  PIT Copies
  PIT Copies — Product Comparison
  Remote PIT Process Flow
  PIT — Mediated Remote Copy
  Continuous Remote Copy
  Planned and Unplanned Outages
  Reference Architectures
  Disaster Recovery Testing
  Failback
  Replication Resynchronization Time
  Scenarios
  Exercise

3. TECHNOLOGY RECOMMENDATIONS
  Module Objectives
  Product Comparisons
  External ShadowImage over Distance
  Channel Extension
  DWDM
  Buffer Credits
  Exercise: Buffer Credits
  Channel Extenders with Hitachi Data Systems Storage
  Channel Extension Fundamentals
  Exercise

4. WORKLOAD ANALYSIS
  Module Objectives
  Workload Analysis Tools
  Workload Data Sources for Open Systems
  Workload Study
  Inputs for Workload Analysis
  Block Size
  Formulas
  Workload Data (MB/sec)
  Identify Peak Workload
  Calculating Interval Write Volume
  Write Volume Over Time
  Rolling Average
  Data from Multiple Hosts
  Desired Amount of Data
  Workload Analysis
  Using Workload Analysis
  Response Time
  Example

5. CONNECTIVITY REQUIREMENTS
  Module Objectives
  Bandwidth Sizing Strategies
  Telecom Recommendations
  Telecom Capacity
  Reference Architecture Connectivity
  Estimating Initial Copy Duration
  Estimating Resync Duration
  Reducing Initial Copy Time
  RPO and Universal Replicator with Constrained Bandwidth
  Latency
  Additional Sizing Considerations
  Inflow Control
  Impact of Journal Overflow
  Exercise

6. CONFIGURATION SIZING
  Module Objectives
  Size a Point-in-Time Solution
  Size a Synchronous Solution
  Size a Universal Replicator Solution
  Universal Replicator Mode Settings
  Size a TrueCopy Extended Solution
  Balanced System Design
  Volume Placement
  Cache
  Host Delay
  HDP and Replication
  Universal Replicator Notes
  Exercise: Universal Replicator Workload Analysis

GLOSSARY
EVALUATING THIS COURSE



Introduction
Welcome and Introductions

• Introductions
– Name
– Position
– Professional skills
– Expectations from the course


Course Description

• This six-hour virtual instructor-led course prepares the Hitachi Data
Systems Certified Professional to take the Architect – Business Continuity
Exam (HH0-400) by refreshing existing knowledge of Business Continuity
architecture concepts and Hitachi replication solutions. The workload
analysis exercise also helps reinforce workload sizing in replication
environments.
• The following skills and knowledge are validated upon successful
completion of the certification exam:
– The ability to examine data and information requirements from a business
perspective and respond with solutions, defined as a hardware and software
architecture meeting the requirements
– In a sales engineering role, preparing technical architecture implementation
strategies and plans and applying new technologies
– In an architect or storage engineering role, preparing detailed implementation
plans in association with and for execution by implementation specialists


Course Prerequisites

• Practical field knowledge and experience with planning/sizing of:


– Hitachi TrueCopy® Heterogeneous Remote Replication software
– Hitachi TrueCopy® Extended Remote Replication software
– Hitachi ShadowImage® Heterogeneous Replication software
– Hitachi Universal Replicator software
– Hitachi Copy-on-Write Snapshot software
• Practical field knowledge and experience with implementation of:
– TrueCopy software
– TrueCopy Extended software
– ShadowImage software
– Universal Replicator software
– Copy-on-Write Snapshot software


Exercise: What Would You Like To Get Out of This Course?


Course Objectives

• Upon completion of this course, the learner should be able to:


– Describe business and recovery requirements
– Distinguish between recovery characteristics of various replication strategies
– Identify technology recommendations based on recovery requirements of a
Business Continuity scenario
– Describe the key elements of Workload Analysis
– Recognize a connectivity design
– Identify the recommended configuration

This course follows a Business Continuity (BC) engagement methodology. Module
one covers the Assess and Discover phases.


Course Topics

• Session 1 – Day 1
– Introduction
– Understanding Business and Recovery Requirements
– Replication Strategies
– Technology Recommendations

• Session 2 – Day 2
– Workload Analysis
– Connectivity Requirements
– Configuration Sizing


Learning Paths

• Are a path to professional certification
• Enable career advancement
• Are for customers, partners, and employees
  – Available on HDS.com, Partner Xchange and HDSnet
• Are available from the instructor
  – Details or copies



1. Understanding
Business and
Recovery
Requirements
Module Objectives

• Upon completion of this module, the learner should be able to:


– Identify how much protection is required
• Differentiate between Recovery Point Objective (RPO) and Recovery
Time Objective (RTO)
• Identify regulatory requirements
– Describe what is being protected in a customer’s environment
– Define existing recovery techniques


RPO versus RTO

• RPO − Worst-case time between the last backup and the interruption
– It is based on a risk tolerance discussion with the customer
• Business impact of missing data
• Tolerance varies according to cost

[Timeline diagram: the RPO spans from the last backup to the disaster]

There is no tool to determine RPO.

• RTO − How long is the customer willing to live with down systems?
– RTO is outage duration; RPO is how much data must be recovered
– Time includes recovery of multiple components at the secondary site

[Timeline diagram: the RPO spans backward from the disaster; the RTO spans forward from the disaster to recovery]


Data Protection Classifications

• Scale
– Site-wide Disaster – Large scale outage that impacts operations at an entire
facility, such as fire, hurricane, earthquake, terrorism
– Point Disaster – Single event outage that occurs at a single, readily
identifiable point in time, such as administrator error, viruses, isolated
hardware failure
• Time
– Immediate Disaster – Distinct event that affects all components at the same
time, such as a meteor
– Rolling Disaster − Several components failing at different points in time,
such as an AC failure or power failure where a server fails, then storage, then
network, eventually the whole site

Point-in-Time copies and real-time copies are discussed in the next module.


Regulatory Requirements

• Data protection strategies address and support regulatory requirements


such as:
– Sarbanes-Oxley – Corporate reporting and financial results; requires that the CEO
and CFO be able to defend the accuracy of their books
– Basel II – International banking regulation that deals with amortizing cost of
risk into financial markets
– Email archiving – Email is considered a business record; should have email
policy; defense against litigation


Recovery Techniques

• Manual data entry (retype)


• Tape
• Bare metal
• File-level recovery
• Rapid recovery
– VMware Site Recovery Manager with Site Recovery Agent
• Cluster Failover
– Clustering is used to reduce recovery time – useful for more than automated
failover
– Common Cluster Products
• GDPS, VCS, MSCS, HACMP, MC/ServiceGuard, HSC

• RTO combined with an RPO, based on the value of particular data, maps to a
range of technical approaches, costs, and degrees of data protection.


Exercise: Data Gathering

• Identify key stakeholders and order
• Create requirements document

Key stakeholders:
– IT Director/Project Sponsor
– DR Planner/Manager
– Business Representative for Applications
– Application Administrators
– System Administrator
– Database Administrator
– Storage Architect/Administrator
– Network Administrator
– Operations Manager

Activity –
Who are some of the key stakeholders you typically talk to?
Is there only one set of correct stakeholders?
The order is important: top to bottom, or bottom to top.


Data Gathering

• Gather Business Requirements


– Applications
– Purpose (Disaster recovery, business continuity, migration)
– Business Impact Analysis performed?
– RPO
– RTO
– Disaster levels/data protection strategies
– Regulatory requirements
– Existing recovery techniques
– Timelines for implementation
– Future considerations


• Gather Technical Requirements


– Blanket recovery requirements versus application specific requirements
– Auxiliary requirements for other data copies (backups, reporting, and more)
– Applications and Database requirements
– Existing servers, storage
– Network and channel extension
– Timelines for implementation
– Vendor preferences
– Operations considerations


• Environment Audit – Application Specific Questions


– Application function
– RPO and RTO values
– Database version
– Backup type and schedule – Why?
– Identify storage system
– Is there a Disaster Recovery (DR) test schedule and written plan?
– Host information (Operating System, volume manager, backup schedule
and tape rotation, multipathing, and more)
– File system to LUN map
– New environment or conversion
– DR servers in place
– Storage in place
– Change control procedures

• Environment Audit – Environment Questions


– Distance between sites
– Bi-directional replication
– WAN Environment
• Dedicated to replication?
• Current workload
• QOS?
– Channel extension hardware
– Replication licenses in place and utilized?
– Enterprise Scheduler
– System monitoring software in place
– Servers for CCI at each site
– Preferred scripting language
– Desire non-disruptive testing
– Third party Disaster Recovery site?
– Data and workload growth rate
– Review existing replication
• Raid Manager
• Storage Navigator


• Environment Audit – Application and Database Questions


– Recoverability utilizing replication is dependent on in-order delivery, which is
provided by consistency groups.

[Diagram: a server writes to DATA and LOGS volumes; a consistency group spans DATA and LOGS on the local storage system and on the remote storage system]

Discussion of crash/recovery


Exercise: Discover the Environment

• List products and methodologies for discovering existing infrastructure

Assess the Environment

• Balance cost versus benefit – delve into what customers really want and
what they are willing to pay for
• Identify the customer's solution flexibility – is there only one answer, or are
there a variety of solutions that might work, with relative costs?
• Document and verify all collected information with the entire customer team




2. Replication
Strategies
Module Objectives

• Upon completion of this module, the learner should be able to:


– Identify the recovery characteristics of the most common replication
strategies:
• Point-in-time (PIT)
• Continuous
• 3-Data Center
– Compare the advantages and disadvantages of each recovery characteristic
– List the criteria for choosing:
• One recovery characteristic over another
or
• One recovery characteristic in addition to another
– Describe how each strategy mitigates exposure

At this point, we have covered the Assess and Discover phases and are moving into
the Design phase of the methodology.


PIT Copies

• Definition
– Volume image that contains data from a specific point in time as opposed to a
volume replica that is continuously updated
– It is used to mitigate risks of logical corruption
• Products include:
– ShadowImage Replication software
– Copy-on-Write Snapshot software
• Point-in-Time Copy Management
– Schedule application freeze to ensure consistent image on disk
– Snap PIT copy (PAIR to PSUS)
– Resume application
• Automation tools include:
• Hitachi Command Control Interface for copy management
• Host scripting for application integration with freeze/thaw mechanism
• Hitachi Protection Manager software
• Split-second architecture

Stress that PSUS is a data consistent state for ShadowImage


PIT Copies — Product Comparison

• Benefits
– Reduce impact to production hosts
– Centralized copy management
– Quick disk based recovery
– Replicated data available to other hosts for backup/reporting
• Products under Load
– ShadowImage Replication Software
• Imposes load while pairs are in PAIR status
– Copy-on-Write Snapshot Software
• Imposes load while pairs are suspended
• Resync Times
– ShadowImage Software
• quick resync
– Near instantaneous, but carries system performance penalty
• normal resync
– Time dependent on differential data
– Copy-on-Write Software
• instantaneous resync

• Exercise – When would you use each and predict behavior given certain
workload characteristics
– ShadowImage Software
• Used when recovery considerations exceed cost considerations
• Provides fastest recovery with minimal overhead to production
applications
– Copy-on-Write Software
• Used when storage cost considerations exceed recovery considerations
• Performance overhead may impact production applications
• Recovery time is a function of pool consumption plus initial overhead
(recovery time may be lengthy)


Remote PIT Process Flow

For ShadowImage software, resync time is a function of the amount of change


between the secondary volume and the primary volume
• When the amount of change is high, resync will take longer; at some point that
may exceed the customer's resync window.
• In that case, consider leaving the pairs active during normal operations.
• The state of the pairs during normal operation is known as the "resting state": if
you leave the pairs active most of the time, the resting state is PAIR; otherwise
the resting state is suspended.
• For Copy-on-Write Snapshot software, resync time may not be instantaneous
due to the need to add pages back into the free queue. An alternative is to drop and
recreate the snapshot.


PIT — Mediated Remote Copy

• PIT mediated remote copies are used to create point-in-time replicas at a


secondary site to provide recovery images at specific RPO intervals.
• PIT mediated copies protect against site-wide and rolling disasters, but
cannot achieve low RPO.
• Products
– TrueCopy Remote Replication software with ShadowImage software.
– Universal Replicator software with ShadowImage software.
• For PIT mediated remote copy, pairs are suspended during production
activity (PSUS).


Continuous Remote Copy

• Continuous copies are used to maintain a continuous replica at a recovery


facility. Continuous copies protect against a site-wide disaster.
• Products
– TrueCopy Remote Replication software.
– Universal Replicator software.
• Use CCI for copy management.
• For a typical disaster recovery scenario, pairs are left active during
production activity.


Planned and Unplanned Outages

• Planned outages are generally used to perform site maintenance


– Applications should be stopped gracefully to ensure I/O consistency
• Unplanned outages are caused by failures
– Introduce uncertainty
– Customer must decide what to recover and to where; more automation is not
necessarily better
– May require a full initial copy
• If the pair was in Copy state during disaster, the secondary image is
invalid
• If either storage system loses power and battery backup is exhausted,
bitmaps in the shared memory are lost


Reference Architectures

• Standard Disaster Recovery

Site-wide instant disaster: Low RPO (0 for sync, minutes for Universal Replicator)
Logical corruption (L3): Point-in-Time recovery images are schedulable
Testing implication: No L3 recovery during Disaster Recovery (DR) testing
Manageability ranking: 1

A continuous (solid) arrow indicates real-time; a dotted arrow indicates Point-in-Time
(PIT).


• Batch Disaster Recovery

Site-wide instant disaster: High RPO (hours)
Logical corruption (L3): Point-in-Time recovery images are schedulable
Testing implication: RPO = Test duration + resync duration
Manageability ranking: 2

• Point-In-Time Mediated Copy (4-copy model)

Site-wide instant disaster: High RPO (hours)
Logical corruption (L3): Point-in-Time recovery images are schedulable
Testing implication: RPO = Test duration + resync duration
Manageability ranking: 4


• Journal-based Continuous Replication

Site-wide instant disaster: Flexible RPO (minutes to hours)
Logical corruption (L3): Point-in-Time recovery images are schedulable
Testing implication: No L3 recovery during DR testing
Note: Adding workload or volumes requires re-evaluation of the journal configuration
Manageability ranking: 3

• 3 Data Center

Site-wide instant disaster: RPO = 0, no data loss
Logical corruption (L3): Point-in-Time recovery images are schedulable
Testing implication: No L3 recovery during DR testing
Manageability ranking: 5

[Diagram: 3 Data Center configuration showing delta resync and pass-thru paths]


Disaster Recovery Testing

• Suspend and mount ShadowImage S-VOL cascaded from remote


replication S-VOL
– Recommend second ShadowImage copy for nondisruptive testing
• Suspend and mount remote replication S-VOL
– Recommend ShadowImage copy for disaster recovery during test
– Remote replication suspension must be done from the primary command
control interface (CCI) server or Storage Navigator program
– Horctakeover can be used from the secondary CCI server, but should be
used with care
• Special options are required to use this method without impacting
production data



Failback

• In the event of an actual disaster where shared memory is lost, or the
storage system is destroyed and replaced, an initial copy will be required
to return operations to the production facility
• In the event that shared memory is retained, production applications can
be shut down, and a horctakeover can be used from the primary CCI
server to return operations to the production facility
– Only differential data will be replicated



Replication Resynchronization Time

• Factors that Impact Resynchronization Time


– Available bandwidth
– Amount of differential data accumulated during the preceding suspension
– Sustained compression ratio of the data by channel extension gear
– Presence of existing replication update traffic on the telecommunications link
– Overall workload on the storage system



Scenarios

• Example Engagement: A financial institution


– Multinational (NY, London, SF)
– Protecting Web-based end user transaction DB (Oracle on Solaris)
– Revenue generating trading application
– Executive Email systems running Microsoft Exchange server
– A recent plumbing leak impacted a different team, SLAs were missed, and the CTO is
watching decisions

• Example Engagement: A retail business


– Four locations — corporate headquarters (Chicago) and three regional
distribution centers (San Francisco, Kansas City, Atlanta)
– Data warehouse of logistics and sales data



Exercise

• Draw the architecture that is most appropriate for each scenario:

Financial Institution Retail Business


Break the class into two groups – each group will work together to link replication
architecture strategies to one of the scenarios.



3. Technology
Recommendations
Module Objectives

• Upon completion of this module, the learner should be able to:


– Position Hitachi Data Systems synchronous replication
– Position Hitachi Universal Replicator software
– Position Hitachi TrueCopy® Extended Distance software
– Describe the benefits of integrating a SAN extension
– Describe Dense Wave Division Multiplexing (DWDM)
– Identify the protocols used by channel extenders
– Describe the issues involved in long-distance channel extensions


Product Comparisons

• Hitachi TrueCopy® Synchronous Software

"Best" applications: Customer has two locations very close together with dark fiber or DWDM (within 100 miles [160 km]). Actual distance between systems is dependent upon application response time sensitivity.

Host write response time (latency): Expect double the current host response time plus delay due to distance. Distance delay is additive and can be calculated as a minimum of 1 ms per 62 miles (100 km) of circuit length, each direction (times two).

– Application tolerance for response time is key

• Universal Replicator software (Hitachi Universal Storage Platform™,
Universal Storage Platform V or VM, or Hitachi Network Storage Controller
model NSC55)

"Best" applications: Customer has two sites but is sensitive to telecom pricing, and is willing to pay more for upfront equipment and be flexible with recovery requirements in order to avoid a high telecom cost.

Host write response time (latency): Expect a 1 to 2 millisecond increase when writes are not physically written to journals but transferred from MDKC cache to RDKC cache. Expect a 2 to 4 millisecond increase when writes must be physically written to journal disks.

Note: TrueCopy Extended Distance software will not be covered in the exam.


• TrueCopy Extended Distance Software


– Hitachi Adaptable Modular Storage models AMS500, AMS1000,
AMS2000 family
– TrueCopy license cannot be installed on the same system
– Creating the first pool requires a system reboot
– ShadowImage cascade is not supported

"Best" applications: Customer has flexible recovery requirements and modular storage systems at two sites

Host write response time (latency): TrueCopy Extended Distance adds an additional 1 to 2 milliseconds on top of Simplex


External ShadowImage over Distance

• There are only a few cases where this configuration is useful. Universal Replicator
software and TrueCopy software are our remote replication products and should
be positioned as such.
• ShadowImage software in "PAIR" or "COPY" state does not provide data
consistency on the S-VOL's (even when using ShadowImage Consistency
Groups!). The resting state must be PSUS.
• If the externalized volume has been used in a LUSE or carved into a custom
volume on the Universal Storage Platform/Universal Storage Platform V, its data
cannot be accessed directly from the externalized array because the structural
information defining those custom volumes is contained on the Universal Storage
Platform/Universal Storage Platform V itself.
• Mode 459 should be set, which holds PSUS until all differential has been
transferred out of cache and onto the S-VOL.
• If you intend to mount the S-VOL at the remote site, you must perform a manual
GUI-based Disconnect Operation of the external volumes and you must have
Check Path and Restore Operation in the Procedure.
• After mounting the S-VOL from the recovery site, a full initial copy will be required
to resynchronize the data.


Channel Extension

• Channel Extension stretches storage networks over distance


– Extend production SAN if:
• Using DWDM or similar (campus)
• Prepared to merge fabrics over sites and links
– Do not extend production SAN if:
• Extending SAN over telecom
• Extending SAN over medium to long distance
• Unprepared to merge fabrics over sites and links

[Diagram: servers and SANs at two sites linked over distance; channel extenders attach either to the SAN or directly to the storage at each site]

Talk about RSCN storms and segmented fabrics.


DWDM

• Dense Wave Division Multiplexing (DWDM) is an optical networking


technology
– Useful for short to medium distance SAN environments
– Offers very high bandwidth over short to medium distances
– Enables multiple protocols across shared link

[Diagram: multiplexers at each end of a shared fiber link]


Buffer Credits

• Buffer credits are a flow control mechanism for Fibre Channel


– Each frame placed on the fabric uses one buffer credit.
– Within a fiber length a fixed number of Fibre Channel frames can be present.
– If the flow consumes all available buffer credits, the device stops sending
frames until it receives acknowledgements. Acknowledgement frees buffer
credits.
– A Fibre Channel frame occupies approximately 4km (about 2.5 miles) of fiber at 1Gbit.
• If the speed is doubled, frame length is halved
– Longer distances require devices with more buffer credits.
– To achieve maximum bandwidth, devices must have buffer credits equal to
number of frames present in the cable.

For example, if the total distance is 40km (20km in each direction), you need 10 buffer
credits in each direction at 1Gbit. If you double the speed, you halve the frame length
to 2km: 40km / 2km per frame = 20 buffer credits.
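A minimal sketch of this arithmetic in Python, assuming the rules of thumb above (a frame occupies roughly 4km of fiber at 1Gbit, and frame length halves each time the link speed doubles):

def buffer_credits(distance_km, speed_gb=1.0, frame_km_at_1gb=4.0):
    """Estimate buffer credits needed to keep a Fibre Channel link full.

    Rule of thumb: a frame occupies ~4km of fiber at 1Gbit, and doubling
    the link speed halves the frame length. Credits must cover the round trip.
    """
    frame_km = frame_km_at_1gb / speed_gb  # frame length at this link speed
    round_trip_km = distance_km * 2        # frames in flight in both directions
    return round_trip_km / frame_km

# The worked example above: sites 20km apart
print(buffer_credits(20, speed_gb=1))  # -> 10.0 credits at 1Gbit
print(buffer_credits(20, speed_gb=2))  # -> 20.0 credits at 2Gbit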


Exercise: Buffer Credits

• How many buffer credits do you need to fill a 2Gb link between two sites
that are 20km (about 12.4 miles) apart? Show your math.

• Answer
– 20km x 2 (round trip) = 40km; 40km / 2km (frame length at 2Gb) =
20 buffer credits


This applies to any environment without a channel extension, such as DWDM.


Channel Extenders with Hitachi Data Systems Storage

Vendor and Product / Method

Cisco / Brocade FCIP: Connectivity over a virtual or physical fabric
QLogic 6142: E-port connectivity through a Fibre Channel switch, routing over the telecom link
Ciena CN2000: Provides an extended ISL, resulting in a single merged fabric


The actual protocols do not matter.


Channel Extension Fundamentals

• Create a unique fabric for each telecommunications link
• Create a zone for each initiator and RCU target pair
• When utilizing E-port extension, be aware that connected fabrics will merge
• E-ports typically do not provide multiple-protocol support (for example,
both FCIP and TCP/IP)
• Enable compression
• Assume 1.8:1
• Verify supported connectivity options via the most recent ECN



Exercise

• Given only the following information, which replication product would you
recommend?
– Case 1:
• Two sites: Dallas and Fort Worth, 49 km
• Uses DWDM

– Case 2:
• Two sites
• Chooses telecom savings over recovery requirements




4. Workload Analysis
Module Objectives

• Upon completion of this module, the learner should be able to:


– Identify key elements of workload profiles
– Identify the tools to capture workload metrics
– Explain considerations when using workload data collection tools
– Describe how to analyze the workload


Workload Analysis Tools

• For Mainframe
– Excalibur — Internal Hitachi Data Systems sizing tool (requires training from
ATC Americas and a collector script); analyzes a single interval
• No data collection tools need to be installed to collect Resource
Measurement Facility (RMF) data
– SAS — Can be used for data analysis, no standard kit available
– RMF Magic — Third party analysis tool, no Hitachi Data Systems license
agreement
• Open Systems
– RCEA — from Hitachi Data Systems Tools Competency Center (TCC)
(requires common data collection scripts)
– Microsoft Excel (manual)

Note: RCEA: Remote Copy Expert Assistant


Workload Data Sources for Open Systems

Platform / Tool / Comments

Windows: Performance Monitor – May need to be turned on; TCC collection tool available
Solaris: iostat – TCC collection tool available for RCEA
  vxstat – Only for systems with VERITAS; breaks out workload by file systems
HP-UX: sar – Con: Does not break out writes
  iostat – Con: Does not break out writes
  Glance+ – TCC collection tool available
AIX: iostat – TCC collection tool available
Linux: iostat – Multiple iostat versions with inconsistent output
VMware: esxtop – In development
NetWare: Controller-based collectors only – No native tools
Storage: TMEA – For systems currently on Hitachi Data Systems storage

TMEA: Tuning Manager Expert Assistant


Workload Study

• Preparing for Data Collection


– RCEA
• Scripts for supported operating systems
– TMEA
• Command Device
• Outbound FTP access
• SAN attached Solaris or Windows host

• What is a workload study?

– An analysis of a customer's host workload
– Identifies workload characteristics that are needed to size the replication
environment

• What do we calculate when sizing a replication environment?


– Bandwidth (MB/sec)
– Number of Front End Director Microprocessors (FED MPs) on the primary
and secondary systems
– Journal or POOL capacity
– Journal throughput


Inputs for Workload Analysis

• What do you use to make those calculations?


All are based on workload (write MB/sec and write IOPS)
– Telecom bandwidth requirements (depending on product and usage)
• Peak write MB/sec
• Write MB/sec rolling average
– Number of FED MPs to support the replication process
• Based on write IOPS
• For Universal Replicator, also impacted by the number of journal groups
– Journal/POOL Capacity
• Based on write MB/sec
• RPO
• Available telecom bandwidth
• Expected downtime duration
• For Universal Replicator, throughput of parity groups used for journals

These are the inputs to the workload analysis.


Block Size

• What about block size?


– Some of the RSD specifications and user guides will reference block size
– Most (all?) open systems platforms do not directly provide a block size
– Block size can be estimated by dividing MB/sec by IOPS to get average MB
per IO
– If a formula multiplies IOPS by block size, the result is bytes/sec
• What about write ratio?
– When given a combined throughput or IOPS metric and a write ratio, multiply
to determine the portion of write activity to apply to replication sizing

• ROT: If you don’t have enough information to calculate block size, use 8k
for open systems

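A hedged sketch of that estimate (the 8k fallback is the rule of thumb above; names are illustrative):

def estimate_block_size_kb(write_mb_sec, write_iops):
    """Estimate average write block size in KB from throughput and IOPS.

    Falls back to the 8KB open-systems rule of thumb when the inputs
    cannot support the calculation.
    """
    if not write_mb_sec or not write_iops:
        return 8.0  # ROT: assume 8k for open systems
    return write_mb_sec * 1024 / write_iops

print(estimate_block_size_kb(40.0, 5000))  # -> 8.192 KB average write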


Formulas

• When applying these formulas, include additional considerations for growth over
time, compensation for limited data sets, redundancy requirements, and more.

* For Universal Replicator software, you may want to consider bandwidth outage duration
** maxRollingAverage for TCE is based on cycle time (half the RPO)
*** maxRollingAverage for all other formulas is based on the RPO

The “rated speed” values are obtained from the RSD sizing guidelines.
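The formula graphic itself does not survive in this copy. The following is a hedged Python sketch of the bandwidth side of the calculation under the footnoted assumptions (a rolling window of half the RPO for TCE, the full RPO otherwise; all names are illustrative):

def required_bandwidth_mb_sec(write_mb_sec_samples, interval_sec,
                              rpo_sec, is_tce=False):
    """Bandwidth recommendation as the peak rolling average of write MB/sec.

    The window is the RPO, or half the RPO for TrueCopy Extended Distance,
    whose cycle time is half the RPO (per the footnotes above).
    """
    window_sec = rpo_sec / 2 if is_tce else rpo_sec
    window = max(1, int(window_sec // interval_sec))  # samples per window
    if window >= len(write_mb_sec_samples):
        return sum(write_mb_sec_samples) / len(write_mb_sec_samples)
    rolling = [sum(write_mb_sec_samples[i:i + window]) / window
               for i in range(len(write_mb_sec_samples) - window + 1)]
    return max(rolling)

Growth over time, compensation for limited data sets, and redundancy margins would then be applied on top of this raw number, as the slide notes.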


Workload Data (MB/sec)

“Write MB/sec” can be obtained from a variety of sources. The most common is
from a host’s performance utilities like iostat or Performance Monitor. If a Hitachi
storage system is already in place, TMEA is a good option. Hitachi Tuning Manager
software can also be used. In the case of native host-based data collectors, the
resulting data may not be formatted as “MB/sec” but simple calculations can be
used to massage the data into the proper format.


Identify Peak Workload

=max(Write MB/sec)


Peak workload is useful for sizing replication when you do not have a substantial
buffering capability. To maintain a pair status, TrueCopy Synchronous will be sized
to peak. Universal Replicator may be sized to peak if the customer’s RPO is very low.


Calculating Interval Write Volume


There are cases when you need to know how much write traffic has occurred. For
example, if you are going to suspend a TrueCopy pair, the resynchronization
process will transfer that information to the remote storage system. To estimate
resync duration, you will need to know how much work needs to be performed.
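A minimal sketch of the interval calculation, assuming the collection interval length is known from the tool's configuration:

def interval_write_volume_mb(write_mb_sec, interval_sec):
    """MB written during one collection interval: average rate x duration."""
    return write_mb_sec * interval_sec

# e.g. 12.5 MB/sec sustained across one 5-minute collection interval
print(interval_write_volume_mb(12.5, 300))  # -> 3750.0 MB to be resynced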


Write Volume Over Time


Now that we know how to calculate write volume for a single interval, here’s how to
calculate for a range.
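Extending the single-interval sketch above, a range is simply the sum across its intervals:

def write_volume_over_time_mb(write_mb_sec_samples, interval_sec):
    """Total MB written across a range of equal-length intervals."""
    return sum(rate * interval_sec for rate in write_mb_sec_samples)

# three 5-minute intervals averaging 10, 25, and 15 MB/sec
print(write_volume_over_time_mb([10, 25, 15], 300))  # -> 15000 MB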


Rolling Average

The rolling average is used to quickly characterize intervals that are longer than the
data collection interval. The actual data collection metrics are averages of activity
during that interval. A rolling average is similar, but allows the analyst to see how
the “window” of average workload moves over time. The “wider” the rolling
average, that is, the more intervals that are included in the average, the closer the
curve will move towards the absolute average workload.
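A short illustration of that widening effect (the sample numbers are invented):

def rolling_average(samples, window):
    """Rolling (moving) average over a list of interval samples."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

writes = [5, 40, 10, 35, 5, 45, 10, 30]  # write MB/sec per interval
for w in (1, 2, 4, 8):
    peak = max(rolling_average(writes, w))
    print(f"window={w}: peak rolling average = {peak:.1f} MB/sec")
# Peaks fall from 45.0 (window=1) toward the absolute average of 22.5
# (window=8) as the window widens.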


Data from Multiple Hosts

• For each interval, sum the workload values for all hosts
• Align the interval boundaries by time and date
• Avoid “averages of averages”
• When working with clusters it is possible some hosts will report zero
activity and suddenly begin showing active workload due to cluster failover
events.


For multiple hosts, or volumes within a host, add the workload values together over
each interval.
Align the interval boundaries as much as possible to avoid skewing the results.
Make simultaneous peaks stack together, not line up next to one another.
Avoid making "averages of averages" whenever possible.
You may choose to sum the peaks of each host irrespective of time just to get a
glimpse of "the perfect storm". Rarely do we plan for such an event, but it may be
worth reviewing.
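A hedged sketch of the per-interval summation (timestamps are assumed to be already aligned to common boundaries, per the notes above):

from collections import defaultdict

def combine_hosts(per_host_samples):
    """Sum write MB/sec across hosts per timestamp.

    per_host_samples: {host: [(timestamp, write_mb_sec), ...]}
    Summing raw per-interval rates avoids averages of averages.
    """
    combined = defaultdict(float)
    for samples in per_host_samples.values():
        for ts, mb_sec in samples:
            combined[ts] += mb_sec
    return dict(sorted(combined.items()))

data = {
    "hostA": [("09:00", 10.0), ("09:05", 12.0)],
    "hostB": [("09:00", 4.0), ("09:05", 0.0)],  # passive cluster node
}
print(combine_hosts(data))  # {'09:00': 14.0, '09:05': 12.0}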


Desired Amount of Data

• How much data is needed? As much as possible!


– Peak − Be sure you see a peak workload within the customer's processing
cycle. Ask the customer when they experience peak workloads and try to
capture that peak.
– End of Month − Six weeks ensures you will see an end-of-month cycle. End
of year would be better.
– Varies by Customer − Different customers have different processing cycles
depending on the nature of the application.



Workload Analysis

• What bandwidth should be recommended for TrueCopy software with an
RPO of "very low"?
• How about Universal Replicator with an RPO of 30 minutes?
• What about TrueCopy Extended Distance with an RPO of one hour?



Using Workload Analysis

• Exclude Non-essential Volumes


– Bandwidth is expensive
– Support customer objectives
– Verify needed volumes

• What are some examples?


Do not replicate more than is necessary to support the replication objectives. Verify
that the volumes being analyzed are really needed. For example, re-index operations or
DB dumps could temporarily put heavy workloads on a disk that contributes nothing
to helping the customer achieve their goals.

Response Time

[Chart: response time (wait, ms, 0–50) by date/day/hour from 05/27 through 07/03 for volumes data1, data2, arch, data, dump, redo, app, home, orabin, and All Disks]

Example

• Example of Actual Workload Analysis


– Generic Workload Analysis spreadsheet




5. Connectivity
Requirements
Module Objectives

• Upon completion of this module, the learner should be able to:


– Define throughput and latency
– Identify throughput considerations
– Describe how to calculate Initial Copy time
– Describe how to reduce Initial Copy time
– Identify the impact of journal overflow
– Define inflow control and customer tradeoffs
– Describe the conditions under which the customer will experience TrueCopy
workload exceeding the design
– Troubleshoot the problem created by TrueCopy workload exceeding its
design
– Discuss bandwidth management within the context of leveraging the
customer’s WAN infrastructure

Connectivity Requirements is part of the design phase.


Bandwidth Sizing Strategies

• Size to Peak − Recommend bandwidth to accommodate peak workload


– Useful for low RPO where customer cannot afford to lose transactions in a
buffer mechanism
– Useful when no buffer mechanism is available
– Incurs the highest recurring bandwidth expense
• Size to Peak of Rolling Average − Recommend bandwidth to
accommodate the rolling average workload
– Useful for longer RPO
– Lower recurring bandwidth expense
– Overload is buffered to disk, pool, or tracked in bitmap
– Buffering comes at a cost
– Actually size higher than true average

Size to peak of rolling average:
• When workload exceeds the bandwidth, the additional write traffic will be
buffered to disk
  – Universal Replicator journals
  – TrueCopy Extended Distance POOL
  – With TrueCopy in "batches", pairs will suspend; subsequent deltas are
  queued up at the P-VOL via the bitmapping function
• Buffering comes at a cost:
  – Universal Replicator journals and TrueCopy Extended Distance POOL
  incur disk expense
  – Bitmapped TrueCopy is inconsistent during the resync period
• Something of a misnomer: If you size to the true average, operations may
never catch up. Design some slack in the system. Size to the rolling average
based on the RPO.


Telecom Recommendations

Do not make telecom circuit recommendations – only bandwidth requirements.

Do recommend redundant circuits, so that if one is lost, function will continue.

Telecom Capacity

Connection Bits Bytes


DS1/T1 1.544 Mbit/s 192.5 kB/s
E1 2.048 Mbit/s 256 kB/s
E2 8.448 Mbit/s 1.056 MB/s
E3 34.368 Mbit/s 4.296 MB/s
DS3/T3 44.736 Mbit/s 5.5925 MB/s
OC-1 51.84 Mbit/s 6.48 MB/s
OC-3/STM-1 155.52 Mbit/s 19.44 MB/s
OC-12/STM-4 622.08 Mbit/s 77.76 MB/s
OC-48/STM-16 2.448320 Gbit/s 306.104 MB/s
OC-192/STM-64 9.953280 Gbit/s 1.24416 GB/s
OC-768/STM-256 39.813120 Gbit/s 4.97664 GB/s

Source: http://en.wikipedia.org/wiki/List_of_device_bandwidths
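The bytes column is simply the line rate divided by eight; a one-line helper (protocol overhead, covered under Additional Sizing Considerations, is ignored here):

def line_rate_mb_sec(mbit_per_sec):
    """Convert a telecom line rate in Mbit/s to MB/s (divide by 8).

    Ignores protocol overhead, so usable payload bandwidth will be lower.
    """
    return mbit_per_sec / 8

print(line_rate_mb_sec(44.736))  # DS3/T3 -> 5.592 MB/s
print(line_rate_mb_sec(155.52))  # OC-3/STM-1 -> 19.44 MB/s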


Reference Architecture Connectivity

• Universal Replicator can be sized to rolling average unless the


RPO is very low
• To keep TrueCopy paired, size to peak workload
• If using a TrueCopy sync suspend cycle, size by formula and
use consistency groups
• TrueCopy Extended Distance software can be sized to peak
rolling average based on cycle time

TrueCopy sync suspend cycle: Size to the total write volume during the suspend
duration divided by the length of the resync window. Some savings will occur if
there is strong locality of reference in the write activity, but for calculation purposes
this cannot be predicted.
TrueCopy Extended Distance: Can be sized to average workload, but pay attention
to the RPO, and note that the update cycle will elongate during workloads that exceed
bandwidth.
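A minimal sketch of the suspend-cycle formula from the note above (locality-of-reference savings are deliberately ignored, as the note says they cannot be predicted):

def suspend_cycle_bandwidth_mb_sec(write_volume_mb, resync_window_sec):
    """Bandwidth for a TrueCopy sync suspend/resync cycle: total write
    volume accumulated while suspended divided by the resync window."""
    return write_volume_mb / resync_window_sec

# e.g. 90 GB of writes accumulated, to be resynced within a 30-minute window
print(suspend_cycle_bandwidth_mb_sec(90 * 1024, 30 * 60))  # -> 51.2 MB/sec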


Estimating Initial Copy Duration

• Initial copy time is a function of:
– Available bandwidth
– Capacity
– Workload
• Divide total capacity by available bandwidth
• Workload during initial copy

As volumes experience workload during initial copy, any additional updates must
also be transferred until the volumes are fully paired.
More background on the math of geometric series can be found at:
http://en.wikipedia.org/wiki/Evaluating_sums#Geometric_series
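
A minimal sketch of that estimate, assuming a constant write rate and constant available bandwidth (all figures hypothetical): summing the geometric series – each pass must also transfer the writes that arrived during the previous pass – gives capacity / (bandwidth − write rate).

# Illustrative initial copy estimate. The geometric series
# capacity/bandwidth x (1 + w/b + (w/b)^2 + ...) converges to
# capacity / (bandwidth - write_rate) when write_rate < bandwidth.

def initial_copy_hours(capacity_gb, bandwidth_mbs, write_mbs):
    if write_mbs >= bandwidth_mbs:
        raise ValueError("copy never completes: writes exceed bandwidth")
    seconds = (capacity_gb * 1024) / (bandwidth_mbs - write_mbs)
    return seconds / 3600

# Hypothetical: 20TB to copy, 60 MB/s effective link, 25 MB/s ongoing writes
print("%.0f hours" % initial_copy_hours(20 * 1024, 60, 25))

Remember that this is a floor: round-trip latency will stretch the real duration, as discussed on the Latency page later in this module.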



Connectivity Requirements
Estimating Resync Duration

• The same formula applies to resync time.
  – Capacity is the amount of write traffic while pairs were suspended.
  – Pair percentage shows how much write traffic must be transferred to the secondary volume.
  – A rolling average of write workload, over a period equal to the suspension time, can be used to estimate workload.



Connectivity Requirements
Reducing Initial Copy Time

• Max Initial Copy Activity (not applicable to in-system replication or Adaptable Modular Storage)
  – Increase the parameter with respect to workload
• Copy Pace
  – Universal Replicator
    • A Copy Pace of Low allows initial copy throughput of up to 19 megabytes per second (MB/sec) with host activity, and 38MB/sec with no host activity, per journal group
    • Medium allows up to 60MB/sec with host activity and 125MB/sec with no host activity; in mainframe environments, Medium allows a maximum of 169MB/sec
    • High is not recommended with host activity to the Universal Replicator software volumes; however, it allows 170MB/sec per path with no host activity
• Order devices in the HORCM file from lowest change rate to highest change rate



Connectivity Requirements
RPO and Universal Replicator with Constrained Bandwidth

• Resync time depends on workload and available bandwidth
• Journals are a buffering mechanism

Universal Replicator uses a queue. When workload exceeds your bandwidth, you extend your queue; how far behind you are is a function of the current workload and the spare bandwidth available to drain the queue.
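
A small simulation makes the queue behavior visible (a sketch with invented numbers, not a sizing tool):

# Illustrative journal backlog simulation. Backlog grows whenever the write
# rate exceeds link bandwidth, and drains with the spare bandwidth afterward.
# Backlog divided by link bandwidth approximates how far behind (RPO) the
# remote site is.

bandwidth = 50                       # MB/s, hypothetical link capacity
writes = [30, 70, 90, 60, 40, 20]    # MB/s, hypothetical hourly averages
interval = 3600                      # seconds per sample

backlog = 0.0                        # MB queued in journals
for rate in writes:
    backlog = max(0.0, backlog + (rate - bandwidth) * interval)
    print("write %3d MB/s -> backlog %7.1f GB, ~%5.0f min behind"
          % (rate, backlog / 1024, backlog / bandwidth / 60))

The run shows the backlog (and therefore the effective RPO) growing through the 90 MB/s burst and draining only once the workload falls well below the 50 MB/s link.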



Connectivity Requirements
Latency

Latency is the amount of time it takes for a packet of data to get from one designated point to another.

• Latency measures time and throughput measures bandwidth.

• Why is latency important in synchronous replication?


– Write time depends on distance
– Maximum distance depends on customer requirements
– No firm maximum distance
– Initial copy duration calculations do not take latency into account
• By design, latency is much less of an issue with asynchronous replication.


Why is latency important in synchronous replication?
y Writes will take longer as distances increase, because the I/O will not complete until the operation is acknowledged by the distant site
y Maximum synchronous distance depends on how much impact on write times a customer can endure
y There is no firm maximum distance for synchronous copy
y Round-trip time affects initial copy time, so use the initial copy calculations as a minimum duration – we know it will take longer
Asynchronous replication was designed to mitigate latency issues, so latency is much less of an issue in those environments.



Connectivity Requirements
Additional Sizing Considerations

• Protocol overhead
• Compression
• Growth over time
• Distance Latency
– Initial copy
– Update copy

Assumptions: Add 20% per assumption you make. You didn’t have enough workload data to analyze? Add 20%. You didn’t get workload data from all servers? Add another 20%.
Protocol Overhead
The conversion from pure Fibre Channel to iFCP, FCIP, or whatever protocol the traffic ends up on results in a net increase in the total data transferred per frame. We use 10% as a factor to allow for this. This is conservative – the true protocol overhead is between 2% and 5% in most cases.
Compression
As with all data compression, results depend on the data being compressed. When replicating an image archive or a VTL, it is probable that the channel extender will not provide additional data reduction. As a rule of thumb, we use 1.8:1 – slightly lower than the 2:1 that the channel extender vendors use.
Distance Latency
Initial copy: During initial (or resync) copy, the number of LDEVs being copied determines the number of copy jobs. Combined with the amount of distance latency, this has a significant impact on both the data throughput and the impact to the host resulting from the transfer process.
Update copy: When the volumes are in PAIR state, the amount of distance latency is generally not a factor. The channel extenders will mask the typical FC flow control mechanism and allow it to operate at full speed even over a high-latency link. This generally works well up to ~150ms of round-trip latency.
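
These rules of thumb combine into a single bandwidth adjustment. The helper below is invented for illustration; the 10% overhead, 1.8:1 compression, and 20%-per-assumption factors come from the notes above:

# Illustrative required-bandwidth calculation using the factors above.

def required_link_mbs(peak_rolling_avg_mbs, assumptions=0,
                      overhead=1.10, compression=1.8):
    # Inflate for protocol overhead, credit expected compression,
    # then add 20% for every unverified assumption.
    return (peak_rolling_avg_mbs * overhead / compression
            * (1.20 ** assumptions))

# Hypothetical: 55 MB/s peak rolling average, one server's data missing
print("%.1f MB/s required" % required_link_mbs(55, assumptions=1))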



Connectivity Requirements
Inflow Control

• Inflow control is a mechanism to throttle write traffic so that replication can keep up with demand.
  – When the journals start to fill up, it introduces delays to the host.
  – Every single write operation will take longer to complete.
  – Customers choose to use inflow control when they are more interested in replication integrity than in host performance.


Resource: GSS Practitioners Guide for TrueCopy
(http://hdsnet.hds.com/cmsProdPubIntra/groups/public/documents/contentasset/01_047194.pdf)



Connectivity Requirements
Impact of Journal Overflow

• Journals are a replication buffering resource.
• A journal overflow occurs when the journal capacity reaches 100%.
• Pairs will suspend: the P-VOL status becomes PFUS, the S-VOL status becomes PSUE, and a SIM will be generated.
• When using continuous replication, this extends the RPO.
• S-VOL data will be consistent following a journal overflow
  – Differential data is tracked in a bitmap
• S-VOL data is inconsistent during resynchronization following a journal overflow
• To avoid journal overflow, size journals and telecommunications resources appropriately




Connectivity Requirements
Exercise

• Review questions
– How does latency affect system performance?
– How does insufficient bandwidth affect system performance?






6. Configuration Sizing
Module Objectives

• Upon completion of this module, the learner should be able to:


– Describe how to size a synchronous solution
– Describe how to size for Universal Replicator software
– Describe how to size a TrueCopy Extended Distance software solution
– Describe point-in-time sizing
– Describe a balanced system design
– Determine volume placement
– Describe cache implications
– Describe a trouble scenario with host delay



Configuration Sizing
Size a Point-in-Time Solution

• TrueCopy Cascaded off ShadowImage Replication Software

Bandwidth — Lowest (size to peak RPO average) due to no Update Copy (the TrueCopy pair does not need to catch up to production updates)
Host response time — Sensitive to ShadowImage placement and ShadowImage resync duration
Application Interaction Complexity — Application quiesce required for PSUS
Implementation Complexity Ranking — 4
Storage Capacity Requirements (Primary to Secondary) — 2:2
FED processor requirements per subsystem — Greater than 2, dependent on workload


Configuration Sizing
Size a Synchronous Solution


“Best” applications — Customer has two locations very close together with dark fiber or DWDM (within 100 miles [160 km])
Host write response time (latency) — Expect double the current host response time, plus delay due to distance. Distance delay is additive and can be calculated as a minimum of 1 ms per 100 km (62 miles) of circuit length, each direction (times two)
Host throughput — Rule of thumb: throughput will be reduced by one percent per km
Strategies to reduce impact to host response time — Ensure that both arrays (especially the RCU) are tuned for performance, and ensure additional cache is available to reduce back-end contention
Cache recommendation — Per manuals, increase cache by 50% over the Simplex recommendation
FED Requirements — FED MPs must be allocated for use as TrueCopy Initiator and RCU Target ports. These microprocessors service two ports each and are rated at 2500 IOPS. For bi-directional replication, both arrays will have MPs allocated to Initiator and RCU Targets
Strategies to increase throughput — Increase bandwidth, and ensure balanced system design principles were followed
Inputs to a sizing decision — Write IOPS; write MB/s; application latency tolerance; distance
Outputs of a sizing decision — Quantity of replication paths; recommendations for application/database restructuring; bandwidth recommendations
Bandwidth and deployment strategy — Do not deploy in a bandwidth-constrained environment. Do not deploy over significant distances
Calculations —
  Bandwidth = max write workload
  # Initiator FEDs = WriteIOPS / 2500; minimum of 2
  # RCU Target FEDs = WriteIOPS / 2500; minimum of 2
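
A sketch of the calculations row, using the table’s 2500 IOPS rating and the 1 ms per 100 km rule (the helper function and the sample workload are invented for illustration):

# Illustrative TrueCopy Synchronous sizing from the rules of thumb above.
import math

def sync_sizing(write_iops, circuit_km, simplex_write_ms):
    initiators  = max(2, math.ceil(write_iops / 2500))  # Initiator FED MPs
    rcu_targets = max(2, math.ceil(write_iops / 2500))  # RCU Target FED MPs
    distance_ms = 2 * (circuit_km / 100)     # >= 1 ms per 100 km, round trip
    response_ms = 2 * simplex_write_ms + distance_ms
    return initiators, rcu_targets, response_ms

# Hypothetical: 9000 write IOPS over a 150 km circuit, 1 ms simplex writes
ini, rcu, resp = sync_sizing(9000, 150, 1.0)
print("Initiator FEDs:", ini, " RCU Target FEDs:", rcu,
      " expected write response >= %.1f ms" % resp)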



Configuration Sizing
Size a Universal Replicator Solution


“Best” applications — Customer has two sites but is sensitive to telecom pricing, and is willing to pay more for upfront equipment and be flexible with recovery requirements in order to avoid a high telecom cost
Host write response time (latency) — Expect a 1 to 2ms increase when writes are not physically written to journals but are transferred from MDKC cache to RDKC cache. Expect a 2 to 4ms increase when writes must be physically written to journal disks
Host throughput — When the disk journal is not in use, throughput will be reduced by less than 10%. When disk journals are in use, host throughput will be constrained by the maximum journal throughput, which is a function of the speed, number, and percent busy of the HDDs used for journals
Strategies to reduce impact to host response time — Monitor system performance and address bottlenecks. Add parity groups to improve journal performance (see below). Separate journal volumes from data volumes in a dedicated CLPR
Cache recommendation — Per manuals, increase cache by 25% over the Simplex recommendation. Add 1GB of cache per journal group on the R-DKC
FED Requirements — FED MPs must be allocated from both arrays for use as both TC Initiator and RCU Target ports. These MPs service two ports each and are rated for 7000 IOPS. Two FED MPs should be reserved from other duties for each journal group, and may need to scale up based on write IOPS (scales up every 1500 write IOPS)
Strategies to increase throughput — Increase the number of parity groups (disk spindles) servicing journals. It is common for one journal parity group to be required for every three parity groups in production. Increase telecommunications bandwidth
Inputs to a sizing decision — Write IOPS; write MB/sec; network outage duration estimate; number of journal groups
Outputs of a sizing decision — Quantity of journal parity groups and volumes; journal disk type (HDD); quantity of replication paths; additional cache; recommendations for application/database restructuring; number of FED ports
Bandwidth and deployment strategy — Confirm the aggregate throughput for parity groups dedicated to journals is equivalent to peak write workload. Procure bandwidth equivalent to the workload in a peak period the length of the RPO; this will be below peak workload but above average workload. The greater the difference between the peak and target workload, the longer the achievable RPO and the more disk necessary for journals. Confirm journal capacity is sufficiently large so additional workload can be buffered to the journal and subsequently drained
Calculations —
  Bandwidth = peak rolling average write workload x protocol overhead / compression
  Journal capacity >= max workload x RPO
  Journal parity groups >= max workload / Universal Replicator parity group throughput (50% of spec)
  # Initiator FEDs (on both arrays) = WriteIOPS / 7000
  # RCU Target FEDs (on both arrays) = WriteIOPS / 7000
  Journal disk example — Assume a write workload of 210 MB/sec (not uncommon). Using 144GB disks for journal with RAID-5 (3+1), a minimum of 14 journal parity groups (56 HDDs) should be provided to support replication. This quantity depends only on the write workload and is independent of total usable capacity
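
The journal-disk example can be reproduced from the table’s formulas. In the sketch below, the 15 MB/s per journal parity group is inferred from the worked example (210 MB/sec across 14 groups) and should be confirmed against current specifications; the remaining numbers are hypothetical:

# Illustrative Universal Replicator sizing from the formulas above.
import math

def ur_sizing(peak_write_mbs, peak_rolling_avg_mbs, rpo_seconds,
              jnl_pg_mbs=15.0, overhead=1.10, compression=1.8):
    bandwidth_mbs = peak_rolling_avg_mbs * overhead / compression
    journal_gb = peak_write_mbs * rpo_seconds / 1024
    journal_pgs = math.ceil(peak_write_mbs / jnl_pg_mbs)
    return bandwidth_mbs, journal_gb, journal_pgs

bw, jnl_gb, jnl_pgs = ur_sizing(210, 120, rpo_seconds=4 * 3600)
print("Link bandwidth       : %.1f MB/s" % bw)
print("Journal capacity     : >= %.0f GB" % jnl_gb)
print("Journal parity groups: %d" % jnl_pgs)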



Configuration Sizing
Universal Replicator Mode Settings

• Mode Settings to Consider


– 454, On: If using separate CLPR for journaling, the average CWP will be
used to determine destaging schedule instead of the highest CLPR CWP
– 308, On: A path blockade will generate a SIM, but not a PSUE
– 449, On: Disables monitoring and detection of path blockade
– 690, On: Prevents JNL Restore if R-DKC CWP exceeds 60%



Configuration Sizing
Size a TrueCopy Extended Solution


“Best” applications — Customer has an AMS 500, 1000, or 2000 at two sites and wants to replicate over distance with a non-zero RPO
Host write response time (latency) — TrueCopy Extended adds an additional 1 to 2ms on top of Simplex
Host throughput — Throughput may be reduced, but the reduction is independent of distance and typically less than 5%
Strategies to reduce impact to host response time — Install maximum cache, and balance workload across controllers and multiple parity groups
Cache recommendation — Ensure that both arrays have maximum cache
FED Requirements — TrueCopy Extended requires two ports on each array for replication (no more, no less)
Strategies to increase throughput — Install maximum cache, and balance workload across controllers and multiple parity groups. Pools should be on dedicated parity groups behind each controller
Inputs to a sizing decision — Write IOPS; write MB/s; cycle time, which depends on the number of consistency groups and the RPO
Outputs of a sizing decision — Quantity of CT groups; required bandwidth; pool size; recommendations for application/database restructuring
Bandwidth and deployment strategy — Procure bandwidth equivalent to the cycle-time peak rolling average, given expected compression and overhead
Calculations —
  Bandwidth = cycle time peak rolling average / compression + safety factor
  Pool = (maxRollingAverage(write MB/sec) x RPO/2) x 1.2
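
Applying the pool formula with hypothetical figures: with a maximum rolling-average write rate of 20 MB/sec and a 2-hour (7,200 s) RPO, Pool = (20 MB/sec x 7,200/2) x 1.2 = 86,400 MB, or roughly 85 GB of pool capacity per side.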



Configuration Sizing
Balanced System Design

• Basic Elements of a Balanced System Design


– Ensure the non-production components have similar capabilities to avoid
introducing bottlenecks.
– Often beneficial to have a symmetric design so recovery operations can occur
seamlessly.
– Parity groups impact back end performance.
• The number of parity groups on the primary must match the number of
parity groups on the secondary.
– Replication between generations of hardware can be hazardous due to
differences in performance.

• Rationale for a Simple Design


– Simpler design is preferred over more complex design.
– Fewer system components provide fewer opportunities for configuration error.
– Simple designs are easier and less costly to administer and maintain.
– In the event of a recovery, it is more likely that a simple design will perform
appropriately under extreme conditions.




Configuration Sizing
Volume Placement

• For Universal Replicator, journal volumes should be placed in a separate Cache Logical Partition (CLPR) on the M-DKC only
• For in-system replication, primary and secondary volumes should be on
distinct parity groups
• Isolate parity groups used for journaling or pool volumes from parity
groups used for production data.
• Isolate journal groups or pool volumes from each other on dedicated parity
groups.
• Consistency groups should contain all volumes that require I/O
consistency for recovery.




Configuration Sizing
Cache

Cache recommendation by configuration:
• No replication (Simplex) — The matrix recommends 1GB cache per TB storage. However, the cache recommendation should be based on workload, for example 1GB per 100 IOPS.
• TrueCopy Synchronous — Per manuals, increase cache by 50% over the Simplex recommendation.
• TrueCopy Extended — Maximum cache.
• Universal Replicator — Per manuals, increase cache by 25% over the Simplex recommendation. Add 1GB of cache per journal group on the R-DKC.

From GSS Remote Copy Product Sizing Matrix



Configuration Sizing
Host Delay

• Four issues that will cause host delay are:


– Inflow control
– Response time increases due to distance when using TrueCopy
– High utilization on FED MPs or parity groups
– High Cache Write Pending
• Priority to destaging data
• If backend resources not balanced
• Previous system imbalances


High Cache Write Pending


y With high cache write pending (>30%), the subsystem will give priority to destaging data from cache over serving host I/O, which will be experienced as an increase in response time
y If the back-end resources are not balanced with regard to front-end workload, cache write pending will increase
y Remote copy implementations can expose previously latent system imbalances



Configuration Sizing
HDP and Replication

• ShadowImage
– No quick restore available
• Quick restore is default behavior so take care with Protection Manager
– Normal to Hitachi Dynamic Pool (HDP) replication will result in a thick HDP S-
VOL
– HDP to HDP replication will result in a thin HDP S-VOL

• Universal Replicator
– Normal to HDP replication will result in a thick HDP S-VOL
– HDP to HDP replication will result in a thin HDP S-VOL
– HDP journal volumes are not supported




Configuration Sizing
Universal Replicator Notes

• With remote replication between Universal Storage Platform (USP) and USP V, LDEVs must reside within CU00 to CU3F (applicable to TrueCopy)
• To mitigate high cache write pending due to journaling, place journal parity
groups in a dedicated cache logical partition (CLPR)
– CLPR on M-DKC only
– Rule of thumb for CLPR sizing is 4GB for a single journal group, 8GB+ for
multiple journal groups
– Review recommended mode settings for Universal Replicator
• Speed of Line Option
– Speed of line multiplied by journal group count must be less than available
bandwidth
• Cache Utilization
– Not used during initial copy
– Destages when write pending exceeds 50%
– Recommend dedicated CLPR




Configuration Sizing
Exercise: Universal Replicator Workload Analysis

• In the provided workload data spreadsheet, please complete the answers in the “Results” worksheet based on the requirements below. If time permits, please also create a line graph in the “KBs Chart” worksheet that combines the real-time and rolling average total workloads.

• Business Requirements
– Two applications running on two hosts (hosta and hostb)
– Applications need to be consistent with each other for recovery
– Flexible RPO from near zero to four hours

• Technical Environment
– Two Universal Storage Platform V systems 500 miles apart – 20TB usable each
– All parity groups are RAID-5(3D+1P) 146GB
– Two OC-3 telecom links
– Channel extenders with expected 1.8:1 compression and 10% overhead


y Cache size determined by usable storage on each array


y One journal group due to required consistency point
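
As a starting point for the exercise, the effective replication bandwidth follows from the telecom table and factors given above: two OC-3 links provide 2 x 19.44 = 38.88 MB/s of raw capacity; with the expected 1.8:1 compression and 10% protocol overhead, that supports roughly 38.88 x 1.8 / 1.1 ≈ 64 MB/s of write workload.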





Training Course Glossary
—A—

ACC — Action Code. A SIM (System Information Message) will produce an ACC, which takes an engineer to the correct fix procedures in the ACC directory in the MM (Maintenance Manual).
ACE (Access Control Entry) — Stores access rights for a single user or group within the Windows security model.
ACL (Access Control List) — Stores a set of ACEs, so describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP (Array Control Processor) — Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back-end, it controls data transfer between cache and the hard drives.
ACP PAIR — Physical disk access control logic. Each ACP consists of two DKA PCBs, to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory
ADC — Accelerated Data Copy
ADP — Adapter
ADS — Active Directory Service
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
AIX — IBM UNIX
AL (Arbitrated Loop) — A network in which nodes contend to send data, and only one node at a time is able to send data.
AL-PA — Arbitrated Loop Physical Address
AMS — Adaptable Modular Storage
APF (Authorized Program Facility) — In z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
APID — An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or “request”
Array Domain — All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI and/or LU configurations.
ARRAY UNIT — A group of Hard Disk Drives in one RAID structure. Same as Parity Group.
ASIC — Application-specific integrated circuit
ASSY — Assembly
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress.
ATA — Short for Advanced Technology Attachment, a disk drive implementation that integrates the controller on the disk drive itself; also known as IDE (Integrated Drive Electronics). Advanced Technology Attachment is a standard designed to connect hard and removable disk drives.
Authentication — The process of identifying an individual, usually based on a username and password.

Availability — Consistent direct access to storage information over time.

—B—

B4 — A group of 4 HDU boxes that are used to contain 128 HDDs.
Backend — In client/server applications, the client part of the program is often called the front-end and the server part is called the back-end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BATCTR — Battery Control PCB
BED — Back End Director. Controls the paths to the HDDs.
Bind Mode — One of two modes available when using FlashAccess™, in which the FlashAccess™ extents hold read data for specific extents on volumes (see Priority Mode).
BST — Binary Search Tree
BTU — British Thermal Unit
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.

—C—

CA — Continuous Access software (see HORC)
Cache — Cache Memory. Intermediate buffer between the channels and drives. It has a maximum of 64 GB (32 GB x 2 areas) of capacity. It is available and controlled as two areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications.
CAD — Computer-Aided Design
Capacity — The amount of data that a drive can store after formatting. Most data companies, including HDS, calculate capacity based on the assumption that 1 megabyte = 1000 kilobytes and 1 gigabyte = 1,000 megabytes.
CAPEX — Capital expenditure: the cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX.)
CAS — Column address strobe, a signal sent by the processor to a dynamic random access memory (DRAM) circuit that tells it that an associated address is a column address, activating that column address.
CCI — Command Control Interface
CE — Customer Engineer
Centralized management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CentOS — Community Enterprise Operating System
CFW — Cache Fast Write
CH — Channel
CHA (Channel Adapter) — Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory.
CHAP — Challenge-Handshake Authentication Protocol
CHF — Channel Fibre
CHIP (Client-Host Interface Processor) — Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check
CHN — CHannel adapter NAS
CHP — Channel Processor or Channel Path
CHPID — Channel Path Identifier
CHS — Channel SCSI
CHSN — Cache memory Hierarchical Star Network
CHT — Channel tachyon, a Fibre Channel protocol controller.

CIFS protocol — Common Internet File System, a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
CIM — Common Information Model
CKD (Count-key Data) — A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point
CL — See Cluster
CLI — Command Line Interface
CLPR (Cache Logical PaRtition) — Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cluster — A collection of computers that are interconnected (typically at high speeds) for the purpose of improving reliability, availability, serviceability and/or performance (via load balancing). Often, clustered computers have access to a common pool of storage, and run special software to coordinate the component computers' activities.
CM (Cache Memory Module) — Cache Memory. Intermediate buffer between the channels and drives. It has a maximum of 64 GB (32 GB x 2 areas) of capacity. It is available and controlled as two areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
CM PATH (Cache Memory Access Path) — Access path from the processors of CHA, DKA PCB to Cache Memory.
CMD — Command
CMG — Cache Memory Group
CNAME — Canonical NAME
CNS — Clustered Name Space
Concatenation — A logical joining of two series of data. Usually represented by the symbol “|”. In data communications, two or more data are often concatenated to provide a unique name or reference (e.g., S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer.
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based Virtualization — Driven by the physical controller at the hardware microcode level, versus at the application software layer, and integrates into the infrastructure to allow virtualization across heterogeneous storage and third-party products.
Corporate governance — Organizational compliance with government-mandated regulations.
COW — Copy On Write Snapshot
CPM (Cache Partition Manager) — Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system's performance.
CPS — Cache Port Slave
CPU — Central Processor Unit
CRM — Customer Relationship Management
CruiseControl — Now called Hitachi Volume Migration software.
CSV — Comma Separated Value
CSW (Cache Switch PCB) — The cache switch (CSW) connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with the two CSWs, and each CSW can connect four caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CU (Control Unit) — The hexadecimal number to which 256 LDEVs may be assigned.
CUDG — Control Unit DiaGnostics. Internal system tests.
CV — Custom Volume
CVS (Customizable Volume Size) — Software used to create custom volume sizes. Marketed under the name Virtual LVI (VLVI) and Virtual LUN (VLUN).

—D—

DAD (Device Address Domain) — Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.

DACL — Discretionary ACL. The part of a security descriptor that stores access rights for users and groups.
DAMP (Disk Array Management Program) — Renamed to Storage Navigator Modular (SNM).
DAS — Direct Attached Storage
DASD — Direct Access Storage Device
Data Blocks — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from one storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pool — A volume containing differential data only.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also often called simply data rate.
DCR (Dynamic Cache Residency) — See FlashAccess™
DDL — Database Definition Language
DDNS — Dynamic DNS
DE — Data Exchange Software
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System
DFW — DASD Fast Write
DIMM — Dual In-line Memory Module
Direct Attached Storage — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of one or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA (Disk Adapter) — Also called an array control processor (ACP); it provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. It supports four fibre channel paths and offers 32 KB of buffer for each fibre channel path.
DKC (Disk Controller Unit) — In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN — Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF (fibre disk adapter) — Another term for a DKA.
DKU (Disk Unit) — In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DLIBs — Distribution Libraries
DLM — Data Lifecycle Management

DMA — Direct Memory Access
DM-LU (Differential Management Logical Unit) — DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program
DNS — Domain Name System
Domain — A number of related storage array groups. An “ACP Domain” or “Array Domain” means all of the array groups controlled by the same pair of DKA boards, or the HDDs managed by one ACP PAIR (also called BED).
DR — Disaster Recovery
DRR (Data Recover and Reconstruct) — Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume
DSB — Dynamic Super Block
DSP — Disk Slave Program
DTA — Data adapter and path to cache-switches.
DW — Duplex Write
DWL — Duplex Write Line
Dynamic Link Manager — HDS software that ensures that no single path becomes overworked while others remain underused. Dynamic Link Manager does this by providing automatic load balancing, path failover, and recovery capabilities in case of a path failure.

—E—

ECC — Error Checking & Correction
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory
ECM — Extended Control Memory
ECN — Engineering Change Notice
E-COPY — Serverless or LAN-free backup
ENC — ENclosure Controller, the units that connect the controllers in the DF700 with the Fibre Channel disks. They also allow for online extension of a system by adding RKAs.
EOF — End Of Field
EPO — Emergency Power Off
EREP — Error REporting and Printing
ERP — Enterprise Resource Planning
ESA — Enterprise Systems Architecture
ESC — Error Source Code
ESCD — ESCON Director
ESCON (Enterprise Systems Connection) — An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM.
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
EVS — Enterprise Virtual Server
ExSA — Extended Serial Adapter

—F—

Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a “fabric.” The SAN fabric enables any-server-to-any-storage-device connectivity through the use of Fibre Channel switching technology.
Failback — The restoration of a failed system's share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance, and to restore failure tolerance. Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
Failed over — A mode of operation for failure-tolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed over mode is not failure tolerant, since failure of the redundant component may render the system unable to function. Some systems (e.g., clusters) are able to tolerate more than one failure; these remain failure tolerant until no redundant component is available to protect against further failures.
Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails, or is temporarily shut down for servicing. Failover is an important fault tolerance function of mission-critical systems that rely on constant accessibility. Failover automatically and transparently to the user redirects requests from the failed or down system to the backup system that mimics the operations of the primary system.

Failure tolerance — The ability of a system to continue to perform its function, or to perform it at a reduced performance level, when one or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones.
FAIS — Fabric Application Interface Standard
FAL — File Access Library
FAT — File Allocation Table
Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware, or provided by some hybrid combination.
FBA — Fixed-block Architecture. Physical disk sector mapping.
FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format.
FBUS — Fast I/O Bus
FC — Fibre Channel, a technology for transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between two ports. Also Field-Change (microcode update).
FC-0 — Lowest layer on the fibre channel transport; it represents the physical media.
FC-1 — This layer contains the 8b/10b encoding scheme.
FC-2 — This layer handles framing and protocol, frame format, sequence/exchange management and ordered set usage.
FC-3 — This layer contains common services used by multiple N_Ports in a node.
FC-4 — This layer handles standards and profiles for mapping upper-level protocols like SCSI and IP onto the Fibre Channel Protocol.
FCA — Fibre Adapter. Fibre interface card. Controls transmission of fibre packets.
FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. FC-AL was designed for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MBps. FC-AL is compatible with SCSI for high-performance storage systems.
FCC — Federal Communications Commission
FCIP — Fibre Channel over IP, a network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-to-point connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.
FCP — Fibre Channel Protocol
FC-P2P — Fibre Channel Point-to-Point
FC RKAJ (Fibre Channel Rack Additional) — Refers to additional rack unit(s) that house additional hard drives exceeding the capacity of the core RK unit of the Thunder 9500V/9200 subsystem.
FC-SW — Fibre Channel Switched
FCU — File Conversion Utility
FD — Floppy Disk
FDR — Fast Dump/Restore
FE — Field Engineer
FED — Channel Front End Directors
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON (Fiber Connectivity) — A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster physical link rates to make them up to eight times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.

Flash ACC — Flash access. Placing an entire LUN into cache.
FlashAccess — HDS software used to maintain certain types of data in cache to ensure quicker access to that data.
FLGFAN — Front Logic Box Fan Assembly.
FLOGIC Box — Front Logic Box.
FM (Flash Memory) — Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open
FPC — Failure Parts Code or Fibre Channel Protocol Chip
FPGA — Field Programmable Gate Array
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front-end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FS — File System
FSA — File System Module-A
FSB — File System Module-B
FSM — File System Module
FSW (Fibre Channel Interface Switch PCB) — A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP (File Transfer Protocol) — A client-server protocol that allows a user on one computer to transfer files to and from another computer over a TCP/IP network.
FWD — Fast Write Differential

—G—

GARD — General Available Restricted Distribution
GB — Gigabyte
GBIC — Gigabit Interface Converter
GID — Group Identifier within the Unix security model.
GigE — Gigabit Ethernet
GLM — Gigabyte Link Module
Global Cache — Cache memory that is used on demand by multiple applications; use changes dynamically, as required for READ performance between hosts/applications/LUs.
Graph-Track™ — HDS software used to monitor the performance of the Hitachi storage subsystems. Graph-Track™ provides graphical displays, which give information on device usage and system performance.
GUI — Graphical User Interface

—H—

H1F — Essentially the floor-mounted disk rack (also called Desk Side) equivalent of the RK. (See also: RK, RKA, and H2F.)
H2F — Essentially the floor-mounted disk rack (also called Desk Side) add-on equivalent similar to the RKA. There is a limitation of only one H2F that can be added to the core RK floor-mounted unit. (See also: RK, RKA, and H1F.)
HA — High Availability
HBA — Host Bus Adapter. An HBA is an I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the two channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HD — Hard Disk
HDD (Hard Disk Drive) — A spindle of hard disks that make up a hard drive, which is a unit of physical storage within a subsystem.
HDev (Hidden devices) — Hitachi Tuning Manager Main Console may not display some drive letters in its resource tree, and information such as performance and capacity is not available for such invisible drives. This problem occurs if there is a physical drive with a lower PhysicalDrive number assigned that is in “damaged (SCSI Inquiry data cannot be obtained)” or “hidden by HDLM” status.
HLU (Host Logical Unit) — A LU that the Operating System and the HDLM recognize. Each HLU includes the devices that comprise the storage LU.
H-LUN — Host Logical Unit Number (See LUN.)

HDLM — Hitachi Dynamic Link Manager software
HDS — Hitachi Data Systems
HDU (Hard Disk Unit) — A number of hard drives (HDDs) grouped together within a subsystem.
Head — See read/write head.
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a “heterogeneous network,” consisting of different manufacturers' products that can interoperate. Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HIHSM — Hitachi Internal Hierarchy Storage Management
HiRDB — Hitachi Relational Database
HIS — High Speed Interconnect
HiStar — Multiple point-to-point data paths to cache.
Hi-Track System — Automatic fault reporting system.
HMDE — Hitachi Multiplatform Data Exchange
HMRCF — Hitachi Multiple Raid Coupling Feature
HMRS — Hitachi Multiplatform Resource Sharing
HODM — Hitachi Online Data Migration
Homogeneous — Of the same or similar kind.
HOMRCF — Hitachi Open Multiple Raid Coupling Feature; ShadowImage is the marketing name for HOMRCF.
HORC — Hitachi Open Remote Copy — See TrueCopy.
HORCM — Hitachi Open Raid Configuration Manager
Host — Also called a server. A host is basically a central computer that processes end-user applications or requests.
Host LU — See HLU.
Host Storage Domains — Allows host pooling at the LUN level, and the priority access feature lets administrators set service levels for applications.
HP — Hewlett-Packard Company
HPC — High Performance Computing
HRC — Hitachi Remote Copy — See TrueCopy.
HSG — Host Security Group
HSM — Hierarchical Storage Management
HSSDC — High Speed Serial Data Connector
HTTP — Hyper Text Transfer Protocol
HTTPS — Hyper Text Transfer Protocol Secure
Hub — A common connection point for devices in a network; a device to which nodes on a multi-point bus or loop are physically connected. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at one port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port.
HXRC — Hitachi Extended Remote Copy

—I—

IBR — Incremental Block-level Replication; Intelligent Block Replication
ID — Identifier
IDR — Incremental Data Replication
iFCP — Short for the Internet Fibre Channel Protocol, iFCP allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
In-band virtualization — Refers to the location of the storage network path between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.

Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
I/O — Input/Output. The term I/O (pronounced “eye-oh”) is used to describe any program, operation or device that transfers data to or from a computer and to or from a peripheral device.
IID — Initiator ID. This is used to identify whether a LU is a NAS System LU or a User LU. If it is 0, the LU is a NAS System LU; if it is 1, the LU is a User LU.
IIS — Internet Information Server
IML — Initial Microprogram Load
IP — Internet Protocol
IPL — Initial Program Load
IPSEC — IP security
ISC — Initial shipping condition
iSCSI (Internet SCSI) — Pronounced “eye skuzzy.” An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks. iSCSI supports a Gigabit Ethernet interface at the physical layer, which allows systems supporting iSCSI interfaces to connect directly to standard Gigabit Ethernet switches and/or IP routers. When an operating system receives a request, it generates the SCSI command and then sends an IP packet over an Ethernet connection. At the receiving end, the SCSI commands are separated from the request, and the SCSI commands and data are sent to the SCSI controller and then to the SCSI storage device. iSCSI will also return a response to the request using the same protocol. iSCSI is important to SAN technology because it enables a SAN to be deployed in a LAN, WAN or MAN.
iSER — iSCSI Extensions for RDMA
ISL — Inter-Switch Link
iSNS — Internet Storage Name Service
ISOE — iSCSI Offload Engine
ISP — Internet service provider
ISPF — Interactive System Productivity Facility

—J—

Java (and Java applications) — Java is a widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine
JBOD — Just a Bunch of Disks
JCL — Job Control Language
JMP — Jumper. Option setting method.
JRE — Java Runtime Environment

—K—

kVA — Kilovolt-Ampere
kW — Kilowatt

—L—

LACP — Link Aggregation Control Protocol
LAG — Link Aggregation Groups
LAN — Local Area Network
LBA (Logical Block Address) — A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC (Lucent connector) — Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics
LCM — Link Control Module
LCP (Link Control Processor) — Controls the optical links. The LCP is located in the LCM.
LCU — Logical Control Unit
LD — Logical Device
LDAP — Lightweight Directory Access Protocol
LDEV (Logical Device) — A set of physical disk partitions (all or portions of one or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage; also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.

LDKC — Logical Disk Controller
LDM — Logical Disk Manager
LED — Light Emitting Diode
LM — Local Memory
LMODs — Load Modules
LNKLST — Link List
Load balancing — Distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If one server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — Locations section of the Maintenance Manual.
Logical DKC (LDKC) — An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within one Hitachi enterprise storage system. The LDKC is supported only on Universal Storage Platform V/VM class storage systems. As of March 2008, only one LDKC is supported, LDKC 00. Refer to product documentation, as Hitachi has announced their intent to expand this capacity in the future.
LPAR — Logical Partition
LRU — Least Recently Used
LU — Logical Unit; mapping number of an LDEV.
LUN (Logical Unit Number) — One or more LDEVs. Used only for open systems. LVI (logical volume image) identifies a similar concept in the mainframe environment.
LUN Manager — HDS software used to map Logical Units (LUNs) to subsystem ports.
LUSE (Logical Unit Size Expansion) — Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal
LVM — Logical Volume Manager

—M—

MAC — Media Access Control (a MAC address is a unique identifier attached to most forms of networking equipment).
Mapping — Conversion between two data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabits
MB — Megabytes
MBUS — Multi-CPU Bus
MC — Multi Cabinet
MCU — Main Disk Control Unit; the local CU of a remote copy pair.
Metadata — In database management systems, data files are the files that store the database information, whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code
MIB — Management Information Base, a database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions.
Microprogram — See Microcode.
MMC — Microsoft Management Console
MPIO — Multipath I/O

Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
MM — Maintenance Manual
Mode — The state or setting of a program or device. The term mode implies a choice: you can change the setting and put the system in a different mode.
MP — Microprocessor
MPA — Microprocessor adapter
MPU — Microprocessor Unit
MSCS — Microsoft Cluster Server
MS/SG — Microsoft Service Guard
MTS — Multi-Tiered Storage
MVS — Multiple Virtual Storage

-back to top-

—N—
NAS (Network Attached Storage) — A disk array connected to a controller that gives access to a LAN transport. It handles data at the file level.
NAT — Network Address Translation
NDMP — Network Data Management Protocol; a protocol meant to transport data between NAS devices
NetBIOS — Network Basic Input/Output System
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices
NFS protocol — Network File System; a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module
NIS — Network Information Service (YP)
Node — An addressable entity connected to an I/O bus or network. Used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name — A Name_Identifier associated with a node.
NTP — Network Time Protocol
NVS — Non-Volatile Storage

-back to top-

—O—
OEM — Original Equipment Manufacturer
OFC — Open Fibre Control
OID — Object Identifier
OLTP — Online Transaction Processing
ONODE — Object node
OPEX (Operational Expenditure) — An ongoing cost for running a product, business, or system. Its counterpart is capital expenditure (CAPEX).
ORM — Online Read Margin
OS — Operating System
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path, separating control and data onto different connection paths. Also called asymmetric virtualization.

-back to top-

—P—
Parity — A technique of checking whether data has been lost or written over when it is moved from one place in storage to another or when it is transmitted between computers. (An XOR parity sketch follows this list.)
Parity Group — Also called an array group; a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separates workloads in a "storage consolidated" system by dividing cache into multiple individually managed partitions. Each partition can then be customized to match the I/O characteristics of its assigned LUs.
PAT — Port Address Translation
PATA — Parallel ATA
Path — Also referred to as a transmission channel; the path between two nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway, or a sub-channel in a carrier frequency.
Path failover — See Failover
PAV — Parallel Access Volumes
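The Parity entry above describes detecting and surviving data loss; the rotating-parity RAID levels defined in the R section apply the same principle using XOR. A minimal sketch, assuming three toy 8-bit data blocks standing in for blocks on separate disks:

    # XOR of all data blocks yields the parity block; XOR of the survivors
    # with the parity block rebuilds a lost block.
    data_blocks = [0b10110010, 0b01101100, 0b11000011]

    parity = 0
    for block in data_blocks:
        parity ^= block

    # Simulate losing block 1, then reconstruct it from parity + survivors.
    lost = 1
    rebuilt = parity
    for i, block in enumerate(data_blocks):
        if i != lost:
            rebuilt ^= block

    assert rebuilt == data_blocks[lost]
    print("parity = {:08b}, rebuilt block = {:08b}".format(parity, rebuilt))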



PAWS — Protect Against Wrapped Sequences
PBC — Port By-pass Circuit
PCB — Printed Circuit Board
PCI — Power Control Interface
PCI CON — Power Control Interface Connector Board
PD — Product Detail
PDEV — Physical Device
PDM — Primary Data Migrator
PDM — Policy-based Data Migration
Performance — Speed of access or the delivery of information
PGR — Persistent Group Reserve
PI — Product Interval
PIR — Performance Information Report
PiT — Point-in-Time
PK — Package (see PCB)
PL — Platter (Motherboard/Backplane); the circular disk on which the magnetic data is stored.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
P-P — Point to Point; also P2P
Priority Mode — Also PRIO mode; one of the modes of FlashAccess™, in which the FlashAccess™ extents hold read and write data for specific extents on volumes (see Bind Mode).
Protocol — A convention or standard that enables communication between two computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply
PSA — Partition Storage Administrator
PSSC — Perl SiliconServer Control
PSU — Power Supply Unit
PTR — Pointer
P-VOL — Primary Volume

-back to top-

—Q—
QD — Quorum Device
QoS (Quality of Service) — In the field of computer networking, the traffic engineering term quality of service refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.

-back to top-

—R—
R/W — Read/Write
RAID (Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks) — A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault tolerance through mirroring or parity checking. It is a component of a customer's SLA. (A block-placement sketch for striping follows this list.)
RAID-0 — Striped array with no parity
RAID-1 — Mirrored array and duplexing
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers
RAID-5 — Striped array with typically rotating parity, optimized for short, multi-threaded transfers



RAID-6 — Similar to RAID-5, but with dual rotating parity across the physical disks, tolerating two physical disk failures
RAM — Random Access Memory
RAM DISK — A LUN held entirely in the cache area.
RC — Reference Code or Remote Control
RCHA — RAID Channel Adapter
RCP — Remote Control Processor
RCU — Remote Disk Control Unit
RDMA — Remote Direct Memory Access
Read/Write Head — Reads and writes data to the platters; typically there is one head per platter side, and each head is attached to a single actuator shaft
Redundancy — Backing up a component to help ensure high availability.
Redundant — Describes computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links, that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Reliability — The level of assurance that data will not be lost or degraded over time; also, an attribute of any computer component (software, hardware, or network) that consistently performs according to its specifications.
Resource Manager — Hitachi Resource Manager™ utility package is a software suite that rolls the following four pieces of software into one package:
• Hitachi Graph-Track™ performance monitor feature
• Virtual Logical Volume Image (VLVI) Manager (optimizes capacity utilization)
• Hitachi Cache Residency Manager feature (formerly FlashAccess) (uses cache to speed data reads and writes)
• LUN Manager (reconfiguration of LUNs, or logical unit numbers)
RID — Relative Identifier; uniquely identifies a user or group within a Microsoft Windows domain
RISC — Reduced Instruction Set Computer
RK (Rack) — The main "Rack" unit, which houses the core operational hardware components of the Thunder 9500V/9200 subsystem. (See also: RKA, H1F, and H2F)
RKA (Rack Additional) — Additional rack unit(s) that house additional hard drives exceeding the capacity of the core RK unit of the Thunder 9500V/9200 subsystem. (See also: RK, H1F, and H2F)
RKAJAT — Rack Additional SATA disk tray
RLGFAN — Rear Logic Box Fan Assembly
RLOGIC BOX — Rear Logic Box
RMI (Remote Method Invocation) — A way that a programmer, using the Java programming language and development environment, can write object-oriented programs in which objects on different computers interact in a distributed network. RMI is the Java version of what is generally known as RPC (remote procedure call), but with the ability to pass one or more objects along with the request.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment)
ROI — Return on Investment
ROM — Read-Only Memory
Round robin mode — A load balancing technique in which the balancing function is placed in the DNS server rather than in a strictly dedicated machine, as other load balancing techniques do. Round robin works on a rotating basis: one server IP address is handed out and moves to the back of the list, the next server IP address is handed out and moves to the end of the list, and so on, depending on the number of servers being used. Round robin DNS is usually used for balancing the load of geographically distributed Web servers.
Router — A computer networking device that forwards data packets toward their destinations through a process known as routing.
RPO (Recovery Point Objective) — The point in time that recovered data should match. (A worked RPO/RTO example follows this list.)
RPSFAN — Rear Power Supply Fan Assembly
RS CON — RS232C/RS422 Interface Connector
RSD — RAID Storage Division
R-SIM — Remote Service Information Message
RTO (Recovery Time Objective) — The length of time that can be tolerated between a disaster and the recovery of data.
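The RPO and RTO entries above are easiest to grasp with a worked example. The timestamps below are invented for illustration: the last consistent replica was taken at 11:45, the disaster hit at 12:00, and service came back at 16:00.

    from datetime import datetime

    last_replica     = datetime(2009, 3, 1, 11, 45)  # last recoverable copy
    failure_time     = datetime(2009, 3, 1, 12, 0)   # disaster strikes
    service_restored = datetime(2009, 3, 1, 16, 0)   # recovery completes

    # Achieved RPO: updates made in this window are lost (15 minutes).
    # Achieved RTO: the outage the business had to tolerate (4 hours).
    print("achieved RPO:", failure_time - last_replica)
    print("achieved RTO:", service_restored - failure_time)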



-back to top-

—S—
SA — Storage Administrator
SAA — Share Access Authentication; the process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected
SACK — Sequential Acknowledge
SACL — System ACL; the part of a security descriptor that stores system auditing information
SAN (Storage Area Network) — A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SANtinel — HDS software that provides LUN security. SANtinel protects data from unauthorized access in SAN environments. It restricts server access by implementing boundaries around predefined zones and is used to map hosts in a host group to the appropriate LUNs.
SARD — System Assurance Registration Document
SAS — SAN Attached Storage; storage elements that connect directly to a storage area network and provide data access services to computer systems.
SAS (Serial Attached SCSI) — Disk drive configurations for Hitachi Simple Modular Storage 100 systems
SATA (Serial ATA) — Serial Advanced Technology Attachment; a standard for connecting hard drives to computer systems. SATA is based on serial signaling technology, unlike IDE (Integrated Drive Electronics) hard drives, which use parallel signaling.
SC (simplex connector) — Fibre Channel connector that is larger than a Lucent connector (LC).
SC — Single Cabinet
SCM — Supply Chain Management
SCP — Secure Copy
SCSI — Small Computer Systems Interface; a parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15-25 meters.
Sector — A subdivision of a track of a magnetic disk that stores a fixed amount of data.
Selectable segment size — Can be set per partition
Selectable stripe size — Increases performance by customizing the disk access size.
Serial transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests; also called a host.
Service-level agreement (SLA) — A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISPs) provide their customers with an SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service-level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers. Some metrics that SLAs may specify include:
• The percentage of time services will be available (a worked availability example follows this list)
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-level objective (SLO) — Individual performance metrics are called service-level objectives (SLOs). Although there is no hard and fast rule governing how many SLOs may be included in each SLA, it only makes sense to measure what matters. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services
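The first SLA metric listed above, percentage availability, translates directly into a downtime budget. A small sketch, with the availability targets chosen purely as examples:

    MINUTES_PER_YEAR = 365 * 24 * 60

    def downtime_budget(availability_pct):
        # Minutes per year a service may be down and still meet the target.
        return MINUTES_PER_YEAR * (1 - availability_pct / 100)

    for target in (99.0, 99.9, 99.99):
        print("{}% available -> {:.1f} minutes/year of downtime".format(
            target, downtime_budget(target)))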



SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems in their own right, and they occasionally require a firmware upgrade.
SFP — Small Form-Factor Pluggable module connector; a specification for a generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
ShadowImage® — HDS software used to duplicate large amounts of data within a subsystem without affecting service and performance levels or timing out. ShadowImage replicates data at high speed and reduces backup time.
SHSN — Shared memory Hierarchical Star Network
SI — Hitachi ShadowImage® Replication software
SID — Security Identifier; a user or group identifier within the Microsoft Windows security model
SIM — Service Information Message; a message reporting an error; contains fix guidance information
SIM — Storage Interface Module
SIM RC — Service (or System) Information Message Reference Code
SIMM — Single In-line Memory Module
SIz — Hitachi ShadowImage® Replication software
SLA — Service-Level Agreement
SLPR (Storage administrator Logical PaRtition) — Storage can be divided among various users to reduce conflicts with usage.
SM (Shared Memory Module) — Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like cache, shared memory is controlled as two areas of memory and is fully non-volatile (sustained for approximately 7 days).
SM PATH (Shared Memory Access Path) — Access path from the processors of the CHA and DKA PCBs to Shared Memory.
SMB/CIFS — Server Message Block Protocol / Common Internet File System
SMC — Shared Memory Control
SMI-S — Storage Management Initiative Specification
SMP/E (System Modification Program/Extended) — An IBM licensed program used to install software and software changes on z/OS host systems.
SMS — Hitachi Simple Modular Storage
SMTP — Simple Mail Transfer Protocol
SMU — System Management Unit
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association; an association of producers and consumers of storage networking products whose goal is to further storage networking technology and applications.
SNMP (Simple Network Management Protocol) — A TCP/IP protocol designed for management of networks over TCP/IP, using agents and stations.
SOAP (Simple Object Access Protocol) — A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component. (A minimal socket sketch follows this list.)
SPAN — A section between two intermediate supports. See Storage pool.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller
SpecSFS — Standard Performance Evaluation Corporation Shared File System
SSB — Sense Byte
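The Socket entry above describes opening a socket and reading and writing through it while the operating system handles the transport. A minimal TCP sketch using Python's standard socket module; the host and request are illustrative only:

    import socket

    # Open a TCP connection, send a request, read the start of the reply.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = sock.recv(1024)  # the OS moves the bytes; we just use the socket
    print(reply.decode("latin-1").splitlines()[0])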



SSC — SiliconServer Control
SSH — Secure Shell
SSID — Subsystem Identifier
SSL — Secure Sockets Layer
SSVP — Sub Service Processor; interfaces the SVP to the DKC
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner, or the root user
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STR — Storage and Retrieval Systems
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware and/or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption
S-VOL — Secondary Volume
SVP (Service Processor) — A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance, and administration of the subsystem
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
Software — Switch
Symmetric virtualization — See In-band virtualization.
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.

-back to top-

—T—
Tachyon — A chip developed by HP and used in various devices. This chip has FC-0 through FC-2 on one chip.
TCA — TrueCopy Asynchronous
TCO — Total Cost of Ownership
TCP/IP — Transmission Control Protocol over Internet Protocol
TCP/UDP — User Datagram Protocol; one of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
TCS — TrueCopy Synchronous
TCz — Hitachi TrueCopy® Remote Replication software
TDCONV (Trace Dump CONVerter) — A software program used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows further, more in-depth failure analysis of the data.
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator
TGTLIBs — Target Libraries
THF — Front Thermostat
Thin Provisioning — Allows space to be easily allocated to servers on a just-enough and just-in-time basis. (An allocate-on-demand sketch follows this list.)
THR — Rear Thermostat
Throughput — The amount of data transferred from one place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in Kbps, Mbps, and Gbps.
TID — Target ID
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as its availability requirements change
TISC — The Hitachi Data Systems internal Technical Information Service Centre, from which microcode, user guides, ECNs, etc. can be downloaded.
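The Thin Provisioning entry above describes just-enough, just-in-time allocation. The sketch below models the idea only: a volume presents a large virtual size but consumes real pages on first write. The class, page size, and single-page writes are assumptions for illustration and do not reflect any particular HDS implementation.

    class ThinVolume:
        PAGE = 1024 * 1024  # hypothetical 1MB allocation page

        def __init__(self, virtual_size):
            self.virtual_size = virtual_size
            self.pages = {}  # page index -> real backing storage

        def write(self, offset, data):
            # Back a page with real storage only on first write to it.
            # (Assumes a write does not cross a page boundary.)
            index = offset // self.PAGE
            backing = self.pages.setdefault(index, bytearray(self.PAGE))
            start = offset % self.PAGE
            backing[start:start + len(data)] = data

        def allocated(self):
            return len(self.pages) * self.PAGE

    vol = ThinVolume(virtual_size=10 * 2**40)  # presents 10TB to the host
    vol.write(0, b"boot block")
    print("virtual:", vol.virtual_size, "bytes; allocated:", vol.allocated(), "bytes")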



TLS — Tape Library System
TLS — Transport Layer Security
TMP — Temporary
TOC — Table of Contents
TOD — Time of Day
TOE — TCP Offload Engine
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPF — Transaction Processing Facility
Track — Circular segment of a hard disk or other storage media
Transfer Rate — See Data Transfer Rate
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the operating system performs some action and then returns control to the program.
TRC — Technical Resource Center
TrueCopy — HDS software that replicates data between subsystems. These subsystems can be located within a data center or at geographically separated data centers. The 9900V adds the capability of using TrueCopy to make copies in two different locations simultaneously.
TSC — Technical Support Center
TSO/E — Time Sharing Option/Extended

-back to top-

—U—
UFA — UNIX File Attributes
UID — User Identifier
UID — User Identifier within the UNIX security model
UPS (Uninterruptible Power Supply) — A power supply that includes a battery to maintain power in the event of a power outage.
URz — Hitachi Universal Replicator software
USP — Universal Storage Platform™
USP V — Universal Storage Platform™ V
USP VM — Universal Storage Platform™ VM

-back to top-

—V—
VCS — Veritas Cluster System
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language
VHSIC — Very-High-Speed Integrated Circuit
VI — Virtual Interface; a research prototype under active development whose implementation details may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (copying data, computing checksums, and so on) and system call overheads while still preventing one process from accidentally or maliciously tampering with or reading data being used by another.
VirtLUN — VLL; customized volume whose size is chosen by the user
Virtualization — The amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN and makes tasks such as archiving, backup, and recovery easier and faster. Storage virtualization is usually implemented via software applications.
VLL — Virtual Logical Volume Image/Logical Unit Number
VLVI — Virtual Logic Volume Image; marketing name for CVS (custom volume size)
VOLID — Volume ID
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than one volume or for a volume to span more than one disk.
VTOC — Volume Table of Contents
V-VOL — Virtual Volume

-back to top-

—W—
WAN — Wide Area Network
WDIR — Working Directory
WDIR — Directory Name Object
WDS — Working Data Set
WFILE — Working File
WFILE — File Object
WFS — Working File Set
WINS — Windows Internet Naming Service



WMS — Hitachi Workgroup Modular Storage system
WTREE — Working Tree
WTREE — Directory Tree Object
WWN (World Wide Name) — A unique identifier for an open-systems host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix). The WWN is essential for defining the Hitachi Volume Security software (formerly SANtinel) parameters, because it determines whether the open-systems host is to be allowed or denied access to a specified LU or a group of LUs.
WWNN (World Wide Node Name) — A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN (World Wide Port Name) — A globally unique 64-bit identifier assigned to each Fibre Channel port. Fibre Channel ports' WWPNs are permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.

-back to top-

—X—
XAUI — "X" = 10; AUI = Attachment Unit Interface
XFI — Standard interface for connecting a 10 Gigabit Ethernet MAC device to an XFP interface
XFP — "X" = 10 Gigabit Small Form Factor Pluggable
XRC — Extended Remote Copy

-back to top-

—Y—

-back to top-

—Z—
Zone — A collection of Fibre Channel ports that are permitted to communicate with each other via the fabric
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone. (A zone-membership sketch follows this list.)

-back to top-
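The Zone and Zoning entries above define visibility in terms of shared zone membership. A minimal sketch of that membership check follows; the zone names and WWPNs are invented for illustration:

    zones = {
        "backup_zone": {"50:06:0e:80:00:c3:a1:02", "21:00:00:e0:8b:05:05:04"},
        "prod_zone":   {"50:06:0e:80:00:c3:a1:00", "10:00:00:00:c9:2f:31:7e"},
    }

    def can_communicate(wwpn_a, wwpn_b):
        # Two ports see each other only if some zone contains both of them.
        return any(wwpn_a in members and wwpn_b in members
                   for members in zones.values())

    print(can_communicate("50:06:0e:80:00:c3:a1:02",
                          "21:00:00:e0:8b:05:05:04"))  # True: same zone
    print(can_communicate("50:06:0e:80:00:c3:a1:02",
                          "10:00:00:00:c9:2f:31:7e"))  # False: no common zone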



Evaluating this Course
1. Log in to the Hitachi Data Systems Learning Center page at
https://learningcenter.hds.com
2. Select the Learning tab on the upper-left corner of the Hitachi Data Systems
Learning Center page.
3. On the left panel of the Learning page, click Learning History. The Learning
History page appears.
4. From the Title column of the Learning History table, select the title of the course
in which you have enrolled. The Learning Details page for the enrolled course
appears.
5. Select the More Details tab.

6. Under Attachments, click the Class Eval link. The Class Evaluation form opens. Complete the form and submit it.



