SAP HANA on IBM Power Systems: High Availability and Disaster Recovery Implementation Updates
Dino Quintero
Luis Bolinches
Rodrigo Ceron
Mika Heino
John Wright
Redbooks
International Technical Support Organization
July 2019
SG24-8432-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to Red Hat Enterprise Linux V7.5, PowerHA SystemMirror for Linux V7.2.2.2, and
SAP HANA V2.0.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 About this publication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 The SAP HANA platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 What is new in SAP HANA on IBM Power Systems . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 High availability for SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Disaster recovery: SAP HANA System Replication . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2 High availability: SAP HANA Host Auto-Failover . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.3 High availability: SAP HANA System Replication . . . . . . . . . . . . . . . . . . . . . . . . . 10
Chapter 7. SAP HANA System Replication for high availability and disaster recovery scenarios . . . . . . . . . . . . . . . . . . . . 105
7.1 SAP HANA System Replication methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.1.1 SAP HANA System Replication requirements . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.2 Implementing SAP HANA System Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.3 SAP HANA System Replication and takeover tests . . . . . . . . . . . . . . . . . . . . . . . . . . 115
7.3.1 Creating a test table and populating it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.3.2 Performing a takeover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Chapter 9. SAP HANA and IBM VM Recovery Manager high availability and disaster recovery . . . . . . . . . . . . . . . . . . . . 131
9.1 Business continuity and recovery orchestrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
9.2 Power Systems HA and DR solutions for SAP HANA. . . . . . . . . . . . . . . . . . . . . . . . . 134
9.2.1 PowerHA SystemMirror for Linux: A cluster-based HA solution for SAP HANA . 134
9.2.2 IBM Geographically Dispersed Resiliency: A VM Restart Manager-based DR solution for SAP HANA . . . . . . . . . . . . 134
9.2.3 VM Recovery Manager HA: A VM Restart Manager-based HA solution for SAP HANA . . . . . . . . . . . . 134
9.2.4 SAP HANA HA management by using VM Recovery Manager HA . . . . . . . . . . 136
9.2.5 VM Recovery Manager HA: SAP HANA agent deployment and management. . 138
Appendix C. SAP HANA software stack installation for a scale-out scenario . . . . . 157
Differences between scale-out and scale-up installations . . . . . . . . . . . . . . . . . . . . . . . . . 158
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Installing HANA scale-out clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Scale-out graphical installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Scale-out text-mode installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Storage Connector API setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Postinstallation notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation, and might also be trademarks or registered trademarks in other countries.
AIX®, DB2®, Db2®, Enterprise Storage Server®, GPFS™, IBM®, IBM Elastic Storage™, IBM FlashSystem®, IBM Spectrum™, IBM Spectrum Accelerate™, IBM Spectrum Scale™, IBM Spectrum Virtualize™, POWER®, POWER Hypervisor™, Power Systems™, POWER8®, POWER9™, PowerHA®, PowerVM®, Redbooks®, Redbooks (logo)®, Storwize®, SystemMirror®, XIV®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication updates Implementing High Availability and Disaster
Recovery Solutions with SAP HANA on IBM Power Systems, REDP-5443 with the latest
technical content that describes how to implement an SAP HANA on IBM Power Systems™
high availability (HA) and disaster recovery (DR) solution by using theoretical knowledge and
sample scenarios.
This book describes how all the pieces of the reference architecture work together (IBM
Power Systems servers, IBM Storage servers, IBM Spectrum™ Scale, IBM PowerHA®
SystemMirror® for Linux, IBM VM Recovery Manager DR for Power Systems, and Linux
distributions) and demonstrates the resilience of SAP HANA with IBM Power Systems
servers.
This publication is for architects, brand specialists, distributors, resellers, and anyone
developing and implementing SAP HANA on IBM Power Systems integration, automation,
HA, and DR solutions. This publication provides documentation to transfer the how-to skills to the technical teams, and documentation for the sales teams.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Poughkeepsie Center:
Luis Bolinches has been working with IBM Power Systems servers for over 16 years and has been working with IBM Spectrum Scale™ (formerly known as IBM General Parallel File System (IBM GPFS™)) for over 10 years. He works 50% of his time for IBM Lab Services in the Nordics, where he is the subject matter expert (SME) for HANA on IBM Power Systems, and the other 50% on the IBM Spectrum Scale development team.
Rodrigo Ceron is an IBM Master Inventor and Senior Managing Consultant at IBM Lab
Services and Training. He has 19 years of experience in the Linux and UNIX arena, and has
been working for IBM for over 15 years, where he has received eight intellectual property
patents in multiple areas. He graduated with honors in Computer Engineering from the
University of Campinas (UNICAMP) and holds an IEEE CSDA credential. He is also an
IBM Expert Certified IT Specialist. His responsibilities are to engage customers worldwide to
deliver highly specialized consulting, implementation, and skill transfer services in his areas
of expertise: cognitive and artificial intelligence, SAP HANA, IBM Spectrum Scale, Linux on
Power, systems HA, and performance. He has also been fostering business development by presenting these topics at IBM conferences globally and writing technical documentation. He has written seven IBM Redbooks publications so far, earning him the title of ITSO Platinum author.
John Wright is a Technical Design Architect at Pure Storage. With over a decade of his 19
years of experience spent at IBM, John has a deep and varied skillset that was gained from
servicing multiple industry sectors across multiple vendor technologies. He specializes in
cloud (Amazon Web Services (AWS), OpenStack, and IBM PowerVC), Pure Storage
products, analytics (SAP HANA on IBM Power Systems and Hortonworks Data Platform on
Power Systems), and SUSE Linux. He has a background in traditional AIX and virtualization
environments, including complex data center migrations and hardware refresh projects. He
holds certifications with AWS and Pure Storage. John splits his time between delivering
services, designing new solutions that use the latest technology, and running onsite
workshops across the UK and Europe.
Thanks to the following people for their contributions to this project:

Wade Wallace
International Technical Support Organization, Austin Center
Ravi Shankar
IBM US
Parmod Kumar Garg, Anshu Goyal, Alok Chandra Mallick, Ashish Kumar Pande
Aricent, an IBM Business Partner
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
This chapter describes the goals of this publication, the contents that are covered, key
aspects of the SAP HANA solution on IBM Power Systems servers, and what is new since the
last publication of this book.
For more materials to complement this publication, see the SAP HANA Administration Guide.
The SAP HANA TDI architecture helps you to build an environment by using your existing
hardware, such as servers, storage, storage area networks (SANs), and network switches.
This architecture gives you freedom over the SAP HANA appliance model that was widely
used in the past. However, you must follow the SAP list for the supported hardware models
and configurations.
1.2 The SAP HANA platform
There are various answers that you can give to the question “What is SAP HANA?” However, the answer to emphasize is that SAP HANA is, first and foremost, an SAP solution.
This is a simple but important definition. As shown in 2.1, “SAP requirements for SAP HANA
on IBM Power Systems implementations” on page 14, the core aspects of your HANA
implementation are defined by SAP guidelines and requirements. Factors such as supported
operating systems (OSes), core to memory ratios, allowed server hardware, allowed storage
hardware, networking requirements, and the HANA platform announcements roadmap, are
determined by SAP.
SAP HANA is the SAP database (DB) platform for multiple SAP solutions. In a change of direction from the classic SAP solutions, HANA is now the processing core of it all. Operations that were formerly performed at the application layer have moved into the DB layer and are now performed by the HANA engines.
This changed the way that data is processed: traditional Online Transactional Processing (OLTP) gave way to the more dynamic Online Analytical Processing (OLAP), or to a mixed schema, which required a solution that could work with both types of data processing. SAP was able to combine the processing of these two schemes because it concluded that many similarities exist in both types of processing. The result was a single DB that can use the same source of data for both kinds of operations, thus eliminating the need for time-consuming extraction, transformation, and loading (ETL) operations from an OLTP base into an OLAP base. SAP HANA is built to work with both OLTP and OLAP data.
Traditionally, DBs store data by rows, with a data entry in each column. So, retrieving the data
means that a read operation on the entire row is required to build the results of a query.
Therefore, many data entries in the columns of a particular row are also read. However, in
today’s world of analytical processing, the user is interested in building reports that provide an
insight into a vast amount of data, but is not necessarily interested in knowing all of the details
about that data.
Reading numerous columns of a row to create an answer for an aggregation report that
targets only some of the columns’ data is perceived as a waste of I/O time because many
other columns that are not of interest are also read. Traditionally, this task was minimized by
the creation of index tables that lowered the amount of I/O at the expense of consuming more
space on disk for the indexes. SAP HANA provided a solution to this issue by storing data in
columns as opposed to rows. Analytical processing greatly benefits from this change.
Nevertheless, SAP HANA can work with both columnar and row data.
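As a hedged illustration of this choice, the following sketch creates one row-organized and one column-organized table through the hdbsql command-line client. The instance number, credentials, and table names are hypothetical placeholders, not values from this book's environment.

# Row-organized table, typical for OLTP-style single-record access
# (instance number 00 and the SYSTEM credentials are placeholders).
hdbsql -i 00 -u SYSTEM -p <password> "CREATE ROW TABLE sales_row (id INT PRIMARY KEY, region NVARCHAR(8), amount DECIMAL(15,2))"

# The same structure as a column-organized table, which favors analytical
# aggregations that scan only the referenced columns.
hdbsql -i 00 -u SYSTEM -p <password> "CREATE COLUMN TABLE sales_col (id INT PRIMARY KEY, region NVARCHAR(8), amount DECIMAL(15,2))"

# An aggregation such as this one touches only the region and amount columns
# when the table is column-organized.
hdbsql -i 00 -u SYSTEM -p <password> "SELECT region, SUM(amount) FROM sales_col GROUP BY region"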
Sparsity of data is an aspect that computer scientists have dealt with since the early days of computing, and countless data structures have been proposed to reduce the amount of space that is needed to store sparse data. SAP HANA can potentially reduce the footprint of used memory by applying compression algorithms that handle this sparsity of data and also default data values.
SAP HANA works by loading all of the data into memory, which is why it is called an
in-memory DB. This is the most important factor that allows SAP HANA to run analytical
reports in seconds as opposed to minutes, or in minutes as opposed to hours, allowing
real-time analysis of analytical data.
In summary, these characteristics of SAP HANA allow SAP to strategically place it at the core
of its solutions. SAP HANA is the new platform core for all SAP applications.
1.2.1 What is new in SAP HANA on IBM Power Systems
Some concepts that are mentioned here are not entirely new, but they are new to this
publication. Here are the concepts that were added:
There is SAP HANA support for the new IBM POWER9™ series of servers: IBM Power
System S922, IBM Power System H922, IBM Power System S924, IBM Power System
H924, and IBM Power System L922. There is also support for all IBM POWER9 models
that are based on PowerVM®. This gives you a choice of either IBM POWER8® or
POWER9 servers to use for hosting your HANA environment. For more information, see
2.1.1, “Storage and file system requirements” on page 15.
Red Hat Enterprise Linux V7 is now a fully supported OS for SAP HANA on IBM Power
Systems. So, now you have a choice of using either SUSE Linux Enterprise Server or Red
Hat Enterprise Linux and you can choose the one with which your company IT
development and operations department is more familiar.
At the high availability (HA) level, HANA can now handle Invisible Takeover under an SAP
HANA System Replication (HSR) scenario. This works only for read-only transactions,
where the sessions that were connected to the primary system are restored on the
secondary one. Nevertheless, cluster management software at the OS layer is still
required for managing failover of the virtual IP address (VIPA). Figure 1-1 shows the whole
mechanism of Invisible Takeover.
The same source node can directly replicate to multiple systems without needing to chain the replication along the way. This is called Multitarget Systems Replication. Figure 1-2 on
page 5 shows this concept, where server A is the source of data replication for both
servers B and C.
Figure 1-2 SAP HANA System Replication: Sample Multitarget Systems Replication
Note: There are also other cluster vendors. The vendors that are listed are the
dominant ones among others.
For more information about SAP HANA, see the following websites:
– IBM Power Systems for SAP HANA
– SAP HANA solutions on IBM Power Systems
With the Secondary Time Travel mechanism, you can place the secondary system in
online mode and have it load replicated data to a point in the past. With this function, you
can easily recover data that was accidentally deleted on the primary system. For this function to work, the replication operation mode must be either logreplay or logreplay_readaccess. You can control the amount of change history to keep by using the timetravel_max_retention_time parameter in global.ini. Make sure that the secondary system data and log areas have enough space to hold the amount of time travel data that you want to keep.
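A minimal sketch of setting this retention window follows, assuming that the parameter resides in the [system_replication] section of global.ini; the instance number, credentials, and the retention value of 1440 are placeholders, so check the SAP HANA Administration Guide for your release before using it. The parameter can also be edited directly in global.ini.

# Set the time travel retention window (values and section are assumptions).
hdbsql -i 00 -u SYSTEM -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'timetravel_max_retention_time') = '1440' WITH RECONFIGURE"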
1.3 High availability for SAP HANA
The costs of downtime have increased over time, so companies are paying more attention to HA today than in the past. Also, the costs of HA solutions have decreased considerably, so it now makes much more sense to invest in protecting business continuity than to absorb the downtime costs.
No one has 100% business continuity, which is why SAP HANA on IBM Power Systems offers
HA and disaster recovery (DR) solutions. Figure 1-3 shows the possible scenarios for HA and
DR that you can implement. This publication focuses on SAP HANA with SUSE Linux Enterprise Server, SAP HANA with Red Hat Enterprise Linux and PowerHA SystemMirror, and the IBM VM Recovery Manager DR mechanisms to provide HA. Other alternatives are documented in
SAP Note 2407186.
Figure 1-3 Available SAP HANA high availability and disaster recovery options
Note: The numbers in Figure 1-3 represent scenarios and not numbered steps.
From a business continuity perspective, you can protect your systems by creating a local HA plan to ensure the minimum possible recovery time objective (RTO)1, and also protect your business from a complete site failure (DR). Scenarios 1, 2, and 3 in Figure 1-3 refer to HA, and scenarios 4, 5, and 6 refer to DR.
In scenario 3, the failed virtual machines (VMs) are restarted on an adjacent server. This is a
shared storage topology that is used with Live Partition Mobility (LPM). The secondary server
or partition is inactive until the VMs are restarted (booted up) on it or if LPM is used for a
planned outage event. There is one physical copy and one logical copy. This solution is
outside of the scope of this publication.
1 The amount of time that it takes you to bring your system back online after a failure.
Scenario 4, which is based purely on storage hardware replication mechanisms, is out of the scope of this publication. In this scenario, you build the secondary HANA environment the same way as you build the primary system, and leave the secondary system turned off. Only the HANA data and log areas are replicated because each site instance has its own boot disk and HANA binaries disk (/hana/shared). The RTO is the highest of all solutions, as shown in Figure 1-3 on page 6, because a full start and mount of the DB must happen, and the recovery point objective (RPO)2 is almost zero, but not zero.
IBM Geographically Dispersed Resiliency (GDR) is also a solution that can be used for HANA DR.3
Scenario 6 is based on the replication of the VMs to a remote location. IBM Geographically
Dispersed Resiliency for Power Systems is the former product name. The DR server is
inactive until the replicated VMs are restarted on it. If the production system fails (or tested for
DR compliance), the VMs are restarted on a secondary system in the cluster. There are two
physical copies of the VMs and one logical copy in this particular configuration. This solution
is outside of the scope of this publication.
Figure 1-4 SAP HANA System Replication for Disaster Recovery scenario
2 The amount of data that is lost in a failure. Resilient IT systems attempt an RPO of 0.
3 IBM Geographically Dispersed Resiliency for Power Systems enables IBM POWER users to reliably realize low recovery times and achieve recovery point objectives.
In essence, there is one HANA instance at the primary site and another one at the secondary site. Each has its own independent storage areas for the HANA data, log, and shared areas. In this DR scenario, the DR site has a fully duplicated environment for protecting your
data from a total loss of the primary site. So, each HANA system has its own IP address, and
each site has its own SAP application infrastructure pointing to that site’s HANA DB IP
address.
The system replication technology within SAP HANA creates a unidirectional replication for
the contents of the data and log areas. The primary site replicates data and logs to the
secondary site, but not vice versa. The secondary system has a replication receiver status
(secondary system), and can be set up for read-only DB access, thus not being idle.
If there is a failure in the primary site, all you need to do is perform a takeover operation on the secondary node. This is a DB operation that is performed by the basis team and informs the secondary node to come online with its full range of capabilities and operate as a normal and independent instance. The replication relationship with the primary site is broken. When the failed node comes back online, it is outdated in terms of DB content, but all you need to do is re-create the replication in the reverse direction, from the secondary site to the primary site. After your sites are synchronized again, you can choose to perform another takeover operation to move the DB back to its original primary site.
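A hedged sketch of this flow with the hdbnsutil command, run as the <sid>adm user, follows; the site names, remote host, and instance number are placeholders and not values from this book's environment.

# On the secondary site: promote it to primary (takeover).
hdbnsutil -sr_takeover

# Later, on the former primary after it is back online: register it as the
# new secondary so that replication now runs in the reverse direction.
hdbnsutil -sr_register --name=SITE_A --remoteHost=hana-b --remoteInstance=00 --replicationMode=sync --operationMode=logreplay

# Start the instance so that it begins receiving replicated data.
HDB start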
According to the SAP HANA High Availability Guide, this scenario provides an RPO = 0 (synchronous replication) and a low to medium RTO.
This scenario builds a real HANA cluster where the DB itself knows it is working as a cluster,
as shown in Figure 1-5.
Figure 1-5 HANA scale-out architecture for storage area network deployments (shared disk)
Note: The typical difference between a SAN and network-attached storage (NAS) is that a NAS is a single storage device that serves data files, and a SAN is a local network of multiple devices that operate on disk blocks. However, to connect to a SAN, you must have server-class devices with SCSI Fibre Channel connectivity.
Each node has its own boot disk. The HANA data and log disks are either assigned to all
nodes as shared disks by using the storage connector API to ensure that no two nodes
access the same disk at the same time, or shared among the nodes as data and log file
systems that use a TDI-supported file system such as Network File System (NFS) or IBM
Enterprise Storage Server. Additionally, a third area, the HANA shared file system, is shared
among all nodes either through NFS or IBM Spectrum Scale in both deployment options.
Also, this architecture needs a dedicated, redundant, and low-latency 10 Gbps Ethernet or
InfiniBand network for the HANA nodes to communicate as a cluster environment, which is
called the internode communication network.
Note: Internode communication cannot run over InfiniBand. You need a minimum of 10 Gb bandwidth, tuned according to Recommendations for Network Configuration. For filers, you need either InfiniBand (56 Gbps is recommended) or Ethernet. For Ethernet, new deployments typically go beyond 10 Gbps; a best practice is to use 40 Gbps single root input/output virtualization (SR-IOV), which precludes LPM.
This scenario has a master node, a set of worker nodes, and a set of standby nodes. The
most common implementations have just one standby node, so the HANA cluster can handle
the failure of a single node of either given node type. More standby nodes are required to
handle simultaneous node failures.
Whenever a worker node fails, the services on the failed node are taken over by a standby
node, which also reloads the portion of the data on the failed node into its memory. The
system administrator does not need to perform any manual actions. When the failed node
rejoins the cluster, it joins as a standby node. If the master node fails, one of the remaining
worker nodes takes over the role as master to prevent the DB from being inaccessible, and
the standby comes online as a worker node. For a comprehensive description about how
failover occurs, see SAP HANA Host Auto-Failover.
In the event of a node failure, the SAP application layer uses a load-balancing configuration to
allow any node within the cluster to take on any role. There is no concept of virtual IP
addresses for the HANA nodes. Explaining how to set up the application layer for this
particular environment is out of the scope for this publication.
Note: There is an option that is supported by SUSE High Availability Extension (HAE) to combine Host Auto-Failover for local HA with HSR for DR. The virtual IP is an optional step; alternatively, you can give the application servers a list of candidate hosts to check (which is more effort to maintain).
According to the SAP HANA High Availability Guide, this scenario provides an RPO=0 and a medium RTO.
From a cost point of view, the standby nodes use all of their entitled processor and memory
resources and stay idle until a failover happens. The only room for cost optimization here is to
use dedicated donating processors in logical partitions (LPARs). Memory cannot be
cost-optimized. Also, in scale-out clusters with less than 2 TB of memory per node, no data is
handled by the master node, thus requiring an extra worker node.
You can think of this scale-up architecture as a two-node active/standby environment. This scenario is what most SAP customers are used to when using DBs other than HANA, for example, a two-node active-passive SAP + IBM DB2® DB that is controlled by PowerHA SystemMirror on AIX. It is likely that these users will migrate to HANA and apply this kind of architecture to their new HANA environment.
Figure 1-6 depicts this scenario.
Figure 1-6 Two-node HANA scale-up with SAP HANA System Replication plus SUSE Linux HA
This scenario shows two independent HANA systems, where one system is the primary
system and the other is the secondary system. The primary system is in active mode and
replicates data by using SAP HANA System Replication to the secondary system, which is in
a passive/standby mode. The secondary instance can also be in read-only mode. Unlike replication for DR, in this HA scenario the replication is synchronous, which ensures an RPO of zero and a low RTO.
Each supported OS, SUSE Linux Enterprise Server and Red Hat Enterprise Linux, has its own mechanisms to create and manage the cluster at the OS level. Those mechanisms are shown as the HA Solution Partner in Figure 1-6.
Compared to the HA scenario that is described in 1.3.2, “High availability: SAP HANA Host
Auto-Failover” on page 8, this design does not use a network for HANA inter-node
communication, but instead uses a separate network for replicating the data from one node
to the other. Even though you can replicate data through the existing data network, use a
dedicated, redundant network based on 10 Gbps technologies to avoid competing for
bandwidth on the data network. Our best practices throughout this publication use a
dedicated network for data replication.
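As a hedged outline only, the following hdbnsutil commands show how such a two-node replication pair is typically enabled (run as the <sid>adm user); the site names, host name, and instance number are placeholders, and Chapter 7 describes the full supported procedure.

# On the primary node: enable system replication and name the site.
hdbnsutil -sr_enable --name=SITE_A

# On the secondary node: stop the instance, register it against the primary
# with synchronous replication (as used in this HA scenario), then start it.
HDB stop
hdbnsutil -sr_register --name=SITE_B --remoteHost=hana-a --remoteInstance=00 --replicationMode=sync --operationMode=logreplay
HDB start

# Check the replication state from the primary node.
hdbnsutil -sr_state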
Important: As data is replicated from the source system to the destination system by using
HSR, you need twice as much space for the HANA data, log, and shared areas because
the disks are not shared between the two nodes, and each node has its own disks.
According to the SAP HANA High Availability Guide, this scenario provides an RPO=0 and a low RTO, and it is the HA architecture that SAP prefers most.
Hint: SAP Notes change constantly. Validate all notes before you start implementing your
HANA environment because SAP guidelines and statements change frequently. For SAP
Notes, see SAP ONE Support Launchpad.
It is a best practice to read the release notes to become familiar with the features and requirements.
Table 2-1 shows a summary of some important SAP Notes to which you must pay special
attention.
Table 2-1 SAP Notes that are related to SAP HANA on IBM Power Systems implementations
SAP Note   Title
2230704    SAP HANA on IBM Power Systems with multiple LPARs per physical host
The following sections describe important aspects of an SAP HANA on IBM Power Systems
implementation that uses the guidelines that are described in the notes in Table 2-1. These
rules must be followed in order for the system to be compliant and supported by SAP. It is also
considered a best practice to discuss these guidelines with SAP before starting the
implementation because they can have an impact on your systems architecture. The following sections do not comment on every day-to-day requirement individually, but we certainly apply all of them throughout the implementations in this publication and mention them when doing so.
2.1.1 Storage and file system requirements
SAP HANA requires a minimum of four file systems:
The data file system: Where all the data is stored.
The log file system: Where the logs are stored.
The shared file system: Where the binary files and file-based backups are stored.
The /usr/sap file system: Where the local SAP system instance directories are stored.
As a best practice, for implementations that use storage area network (SAN) storage disks with the XFS file system, the data area must be divided into a minimum of four LUNs,1 the log area can also be divided into multiples of four LUNs, and the shared area can be on a single LUN or multiple LUNs. Their sizes vary according to the following SAP rules, which are documented in SAP HANA Storage Requirements:
The minimal data area size requirement is 1.2 times the anticipated net data size on disk
if an application-specific sizing program can be used (for example, SAP HANA Quick
Sizer). If no sizing program can be used, then the minimum becomes 1x the amount of
RAM memory. Although there is no maximum limit, three times the size of the memory is a
good upper limit. Use multiples of four for the number of LUNs (4, 8, 12, and so on).
The minimal log area size is 0.5 times the size of memory for systems with less than or
equal to 512 GB of memory, or a fixed 512 GB for systems with more than 512 GB of
memory. As a best practice from our implementation experiences, using a log area equal
to the memory size for systems with less than 512 GB of memory is adequate to ensure
optimal performance.
The shared area size is 1x the size of the memory, up to the limit of 1 TB. For scale-out
configurations, this requirement is per group of four worker nodes, not per node.
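As a quick worked example of these rules, this hedged sketch computes the minimum sizes for a hypothetical scale-up LPAR with 1 TB of RAM when no application-specific sizing program output is available; the numbers are illustrative only.

# Hypothetical scale-up LPAR with 1024 GB of RAM and no Quick Sizer output.
RAM_GB=1024

DATA_GB=$RAM_GB                                   # at least 1x RAM without a sizing program
LOG_GB=512                                        # fixed 512 GB because RAM > 512 GB
SHARED_GB=$(( RAM_GB < 1024 ? RAM_GB : 1024 ))    # 1x RAM, capped at 1 TB

# Spread the data area across a multiple of four LUNs, for example 4 x 256 GB.
echo "data=${DATA_GB}G log=${LOG_GB}G shared=${SHARED_GB}G"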
SAP Note 2055470 requires the use of one of three file systems types for production SAP
HANA on IBM Power Systems environments for the data and log areas: XFS, Network File
System (NFS) (with a 10 Gbps dedicated, redundant network), or IBM Spectrum Scale in an
Elastic Storage Server configuration with a minimum 10 Gbps Ethernet or InfiniBand
connection. No other file system type is supported.
In addition to the file system type, the storage unit providing the LUNs must be certified by
SAP to work with HANA in a Tailored Datacenter Integration (TDI) methodology. A storage list
can be found at Certified and Supported SAP HANA Hardware Directory.
For the log area, you must use either low-latency disks, such as flash or solid-state drives
(SSDs), or ensure that the storage unit has a low-latency write cache area. This setup allows
changes to the data content in memory to be quickly written to a persistent device. These two
alternatives ensure that the speed of making the changes persistent on disk is as fast as
possible. After all, what good does an in-memory database (DB) provide if commit operations must wait on slow disk I/O operations?
Note: Finally, and most important, the storage areas for data and log must pass the SAP
HANA Hardware Configuration Check Tool (HWCCT) file system tests.
1 Based on the number of paths to the storage. Our implementations use four N_Port ID Virtualization (NPIV) paths.
Similarly for Red Hat, you can obtain the image directly from the Red Hat downloads page.
You can download the no-charge trial ISO images to start, but you must have a valid Red Hat
Enterprise Linux Server license to register the system later, or your environment will not be
supported after 30 days.
Every customer who purchases SAP HANA on IBM Power Systems receives either a SUSE
Linux Enterprise Server license or Red Hat Enterprise Linux license from either IBM or SUSE
or Red Hat, depending on from whom the license was purchased. If the license is acquired
from IBM, then IBM supports any issues with the OS and is the starting point for opening OS
support tickets. If the license is acquired directly from SUSE or Red Hat, then SUSE or Red
Hat supports any issues with the OS and is the starting point for opening OS support tickets.
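Hedged examples of registering the OS after installation follow; the activation code, user name, and email address are placeholders.

# SUSE Linux Enterprise Server: register against the SUSE Customer Center.
SUSEConnect --regcode <ACTIVATION_CODE> --email admin@example.com

# Red Hat Enterprise Linux: register with the Red Hat subscription service.
subscription-manager register --username <RH_USER> --password <RH_PASSWORD> --auto-attach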
Important notice: The OS license code comes in a white envelope with the IBM hardware
if you purchased the license from IBM. Do not lose this envelope because if you do, you
must engage your sales representatives to obtain another license, and this process is
time-consuming and impacts your project schedule.
2.2.2 Getting the IBM service and productivity tools for Linux on Power
IBM Power Systems is known for its high levels of reliability, availability, and serviceability
(RAS). The difference between an ordinary Linux for x86 image and a Linux on Power image
is that the latter has a layer of extra added value software to enable Linux to take advantage
of Power System hardware and virtualization features, such as dynamic logical partition
(DLPAR) operations, resource monitoring and control (RMC) communication with the
Hardware Management Console (HMC), and other functions.
Parts of the IBM RAS tools are distributed to Linux Business Partners such as SUSE, Red
Hat, and Ubuntu, and some others are available for download from IBM at no charge. So,
when you install Linux on Power, a subset of the RAS tools are already there. Nevertheless,
download the other packages from the IBM website, and any updates to the packages that
are included with the Linux distribution.
The RAS tools are based on the OS version that you use. To download and use them, see
Service and productivity tools.
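A hedged example of checking and installing a few of the common Linux on Power service packages on SUSE follows; the package names shown are typical examples only, and the authoritative list is on the Service and productivity tools page.

# Check which IBM Power service packages are already installed.
rpm -qa | grep -E 'powerpc-utils|ppc64-diag|lsvpd'

# Install or update them from the configured repositories
# (zypper on SUSE; use yum on Red Hat Enterprise Linux).
zypper install powerpc-utils ppc64-diag lsvpd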
Notice: Installing and updating the IBM Linux on Power RAS tools is a best practice for
SAP HANA on IBM Power Systems environments and other Linux on Power environments.
For packages that you cannot install in HANA logical partitions (LPARs), see SAP Note
2055470.
2.2.3 Getting the SAP HANA on IBM Power Systems installation files
The SAP HANA on IBM Power Systems installation files are downloadable from the SAP
Support Portal. You must have an SAP user ID (SAP user) with enough credentials to
download it. Only customers who purchased HANA licenses have access to the software.
What you must download from SAP’s support portal are the installation files, not each
individual SAP HANA component (server, client, studio, and so on). What you need to get is a
set of compressed RAR files. The first of them has a .exe extension, but these RAR files
work on Linux on Power as well.
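A hedged sketch of extracting the downloaded parts on the target host follows; the file name is a placeholder and the unrar utility is an assumption (any RAR-capable extractor works).

# All parts must be in the same directory; extracting the first part
# (the .exe file) pulls in the remaining .rar parts automatically.
unrar x 51053061_part1.exe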
Click Download software on the SAP Support Portal website. Then, click By Alphabetical Index (A-Z) → H → SAP In-Memory (SAP HANA) → HANA Platform Edition → SAP HANA Platform Edition → SAP HANA Platform Edition 2.0 → Installation to get the HANA software.
Download all files for Linux on Power, including the HANA platform edition files and the
HANA Cockpit. The Cockpit is available for HANA 2.0 only.
Virtualization with PowerVM also enables you to handle the varying utilization patterns that
are typical in SAP HANA workloads. Dynamic capacity sizing allows for fast, granular
reallocation of compute resources among SAP HANA VMs. This approach to load-balancing
and tailoring the workload enhances agility compared to competing processor architectures
that require capacity to be allocated in larger chunks.
Another contributor to the flexibility of Power Systems servers is that they are deployed as
part of the SAP Tailored Datacenter Integration (TDI) model. The goal of this approach is to
reuse existing IT resources, such as server, storage, and networking assets. By supporting
TDI in the deployment of SAP HANA, Power Systems servers give organizations a choice of
the technology that they use compared to the rigidly defined hardware appliances that are
used in many competing SAP HANA infrastructures.
For more information about the SAP HANA TDI, see SAP HANA Server and Workload Sizing.
For more information about PowerVM and SAP HANA, see SAP HANA server infrastructure
with Power Systems.
For technical details about the PowerVM configuration for systems that run SAP HANA, see
SAP HANA on IBM Power Systems and IBM System Storage - Guides.
Note: Any information in this guide is superseded by the information at the links in this
chapter. Check those links for any updated information about SAP HANA on IBM Power
Systems.
These links provide a basic set of documents, but they might not be complete for all cases.
Note: The following statements are based on multiple technical items, including NUMA
allocations, IBM POWER Hypervisor™ dispatcher wheel, multipath, network
communications optimization, and others.
It is not the goal of this chapter to explain in detail the reasons behind these statements. If
the reader wants to understand the reasoning behind them, see the linked documentation
in this chapter.
The specifics of the VIOS configuration when using LPARs with production SAP HANA systems are:
If I/O virtualization is used, a dual-VIOS setup is mandatory. You can have more than two VIOSes in the system separating different environments, such as production and test or multiple customers. At the time of writing, NovaLink and KVM are not supported for virtualization.
Each VIOS must be configured with at least two dedicated or dedicated donating cores when serving SAP production systems. Size them as needed and monitor CPU usage to adapt to workload changes over the lifetime of the system.
At least one Fibre Channel card per VIOS is needed. For high-end systems, be sure to use optimal PCI placement. The HBA port speed must be at least 8 Gb; a best practice is 16 Gb, including the infrastructure.
At least one Ethernet card per VIOS is needed. Interfaces with 10 GbE are needed at a
minimum for scale-up systems. For scale-out systems, a speed of at least 10 GbE is
mandatory. For more information, see SAP HANA Network Requirements.
Note: There are strict PCI placement rules for optimal performance that are not explicitly HANA-related. These rules are server- and card-dependent, and they follow the required PCI slot placement. If you require assistance for your particular system, contact IBM Support.
Either dedicate a PCI card to the LPAR or use Ethernet virtualization with a Shared Ethernet Adapter (SEA). Although single root input/output virtualization (SR-IOV) vNIC is not yet explicitly used, use SR-IOV-capable cards, particularly if other LPARs that can use SR-IOV vNIC technology are going to be hosted on the same system.
When you use the SAP storage connector, N_Port ID Virtualization (NPIV) is the only supported storage virtualization. Otherwise, prefer NPIV over the other storage virtualization options on PowerVM. Four NPIV ports per HANA LPAR must be used. Alternatively, as with Ethernet, you can dedicate a PCI card to the LPAR and not use any virtualization; the four-port requirement remains regardless of your approach.
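A hedged way to verify from inside the LPAR that the expected NPIV paths are present follows; the device names and path counts in the output depend on your configuration.

# List the virtual Fibre Channel hosts that the LPAR sees (one per NPIV port).
ls /sys/class/fc_host

# Confirm that each HANA LUN is reachable over the expected number of active paths.
multipath -ll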
Jumbo frames with an MTU size of 9000 are required for native and VIOS-attached 10 Gb Ethernet adapters to achieve the throughput key performance indicators (KPIs) that are demanded by the SAP HANA Hardware Configuration Check Tool (HWCCT). For scale-up systems, there are no network KPIs.
Use Platform Large Send Offload (PLSO).
For more information about setting up PLSO, MTU, and other SEA tuning, see Configuring
traditional largesend for SAP HANA on SLES with VIOS.
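A hedged example of checking and setting the MTU on a 10 Gb interface follows; the interface name is a placeholder, and the persistent setting belongs in the distribution's network configuration (for example, the ifcfg file) so that it survives a reboot.

# Check the current MTU of the HANA network interface.
ip link show eth1 | grep mtu

# Set jumbo frames on the interface for a quick throughput test.
ip link set dev eth1 mtu 9000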
Similar to SAP NetWeaver systems, an SAP HANA database (DB) can be installed by
using a virtual IP address (VIPA). Besides the standard practice of using virtual IPs for SAP applications, there are two cases where a virtual IP for the SAP HANA DB becomes mandatory:
SAP Landscape Management (LaMa).
Most cluster solutions require a virtual IP to fail over SAP HANA System Replication (HSR).
For more information, see SAP Note 962955 and SAP Note 1900823. For more information about network tuning, see SAP Note 2382421.
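For reference, a hedged sketch of how a cluster resource agent (or an administrator during a test) typically brings such a virtual IP online as an alias address follows; the address and interface are placeholders, and in production the cluster software manages this step.

# Add the SAP HANA virtual IP as an alias on the service interface.
ip addr add 10.10.12.100/24 dev eth0

# Announce the address so that clients and switches refresh their ARP caches.
arping -c 3 -U -I eth0 10.10.12.100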
A simple overview of the configuration of a stand-alone system with HANA LPAR is shown in
Figure 3-1.
3.3 Other considerations
This section covers other topics that are not strict requirements for running SAP HANA on IBM Power Systems, but that ease your overall experience with the solution.
In addition to these SAP Notes, all the documentation that is specified in Chapter 2, “Planning
your installation” on page 13, and at SAP HANA on IBM Power Systems and IBM System
Storage - Guides also are useful.
Start the LPAR and install the Base Operating System (BOS) by using the serial console on the Hardware Management Console (HMC) until the SUSE or Red Hat Enterprise Linux installer is available over the network through Virtual Network Computing (VNC). From that point, follow the GUI installation procedure.
Note: There are other ways to install the BOS, such as using the command-line interface
(CLI). However, for this exercise, we use the GUI whenever possible.
There are specific recommendations for LPARs to be used for the HANA DB that you must be
aware of and follow. These recommendations are specified in Chapter 2, “Planning your
installation” on page 13, and in subsequent SAP Notes and SAP HANA on IBM Power
Systems and IBM System Storage - Guides.
Note: An LPAR is not the only way to install a HANA DB. It also can be installed in a full
system partition configuration. The only prerequisite is that it is installed on top of
PowerVM and its size is not over the limits. The process for installing the BOS is similar
either way.
This chapter also uses Virtual I/O Servers (VIOS) for I/O without a dedicated PCI slot to the
LPAR. We use N_Port ID Virtualization (NPIV) for the storage virtualization and Shared
Ethernet Adapters (SEAs) for the network virtualization, as shown in Chapter 3, “IBM
PowerVM and SAP HANA” on page 19.
Important: NPIV is a must for Fibre Channel because VDisk mapping introduces too much
latency for high-speed storage and increases CPU requirements on the VIOS.
4.3.1 Starting the logical partition in SMS mode
Complete the following steps:
1. From the HMC GUI, select the HANA LPAR, which in this example is hana001. Click
Actions →Activate, as shown in Figure 4-1.
Note: Figure 4-1 shows the Hardware Management Console (HMC) running V9R1
M920. Your view can differ depending on which version of the HMC code you are
running.
Figure 4-1 Activating the logical partition from the Hardware Management Console
2. After you select the appropriate profile, click Advanced Settings and select Systems
Management Services under Boot Mode. Click Finish to start the LPAR. When the
LPAR starts, you see the window that is shown in Figure 4-3 on page 29.
Figure 4-3 Confirming the partition activation
Note: Be sure to boot into SMS so that you can choose the installation device.
For more information, see IBM Knowledge Center, where this method, among others, is
explained in detail.
-------------------------------------------------------------------------------
Navigation Keys:
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Note: For more information about selecting a device to boot, see Example Using SMS
To Choose Boot Device.
2. Select Select Boot Options and press Enter, as shown in Example 4-2.
4. SAN Zoning Support
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
3. Select Select Install/Boot Device and press Enter. The panel that is shown in
Example 4-3 opens.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
4. Select CD/DVD and press Enter. The panel that is shown in Example 4-4 opens.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
5. Because you are using vSCSI, select 1. SCSI and press Enter. The panel that is shown in
Example 4-5 opens.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
6. If the vSCSI adapter is properly configured, you see it in the SMS menu, as shown in
Example 4-5 on page 32. Select the correct vSCSI adapter and press Enter. The panel
that is shown in Example 4-6 opens.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Note: If the CD/DVD is already set up as Current Position of boot number 1, choosing
the boot device is optional.
SCSI CD-ROM
( loc=U8286.42A.21576CV-V19-C5-T1-L8100000000000000 )
1. Information
2. Normal Mode Boot
3. Service Mode Boot
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management
Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
8. Select 2. Normal Mode Boot and press Enter. The panel that is shown in Example 4-8
opens.
-------------------------------------------------------------------------------
Navigation Keys:
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
9. Select 1. Yes and press Enter. You exit from SMS mode and boot from the SUSE or Red
Hat Enterprise Linux installation media. In this example, you boot from the SUSE
installation media.
4.3.3 SUSE Linux Enterprise Server V12 SP3 for SAP applications installation
After you boot from the SUSE installation media, complete the following steps:
1. After a few seconds, you see the GRUB boot main menu, as shown in Example 4-9.
Example 4-9 SUSE Linux Enterprise Server 12 SP3 GRUB boot main menu
SUSE Linux Enterprise 12 SP3
+----------------------------------------------------------------------------+
|*Installation |
| Rescue System |
| Upgrade |
| Check Installation Media |
| local |
| Other options... |
| |
| |
| |
| |
| |
| |
+----------------------------------------------------------------------------+
Note: If no key is pressed before the countdown ends on the GRUB main boot menu, a
default installation is performed. Restart the LPAR and attempt to interrupt this panel
before the countdown expires. To do this, start from the boot partition step that is
described in 4.3.1, “Starting the logical partition in SMS mode” on page 27.
+----------------------------------------------------------------------------+
|setparams 'Installation' |
| |
| echo 'Loading kernel ...' |
| linux /boot/ppc64le/linux |
| echo 'Loading initial ramdisk ...' |
| initrd /boot/ppc64le/initrd |
| |
| |
| |
| |
| |
| |
+----------------------------------------------------------------------------+
For this installation, we know that the network device is eth0, which is going to be used for the VNC installation. We also know the IP address that is going to be used on that interface, which in this case is 10.10.12.83/24, that the IP gateway is 10.10.12.1, and that the DNS servers are 10.10.12.10 and 10.10.12.9. The host name is going to be hana001, and the proxy IP address and port require no user authentication. Append to the linux line the text that is shown in Example 4-11.
The reason that we are setting a proxy in this example is so that we can access the SUSE registration and update repositories. If you have a system that has direct access to the internet or uses the Subscription Management Tool, you can ignore the proxy configuration.
Note: If you are not sure of the network interface name, you can try eth* instead of
eth0, which sets the information in all eth devices.
Note: You can adapt this line for your environment by using the following syntax:
ifcfg=eth0=<IP Address>/<Netmask range>,<Gateway>,<nameserver>
hostname=<host name> vnc=1 vncpassword='VNCPASSWORD'
proxy=http://USER:PASSWORD@proxy.example.com:PORT
After appending your information to the Linux entry in GRUB, the panel that is shown in
Example 4-12 opens.
+----------------------------------------------------------------------------+
|setparams 'Installation' |
| |
| echo 'Loading kernel ...' |
| linux /boot/ppc64le/linux ifcfg=eth0=10.10.12.83/24,10.10.12.1,10.10.12.1\|
|0 hostname=hana001 proxy=http://10.10.16.10:3128 vnc=1 vncpas\|
|sword=Passw0rd |
| echo 'Loading initial ramdisk ...' |
| initrd /boot/ppc64le/initrd |
| |
| |
| |
| |
+----------------------------------------------------------------------------+
3. Press Ctrl+x, which starts the SUSE installation with the chosen parameters on GRUB.
After a couple of minutes, the panel that is shown in Example 4-13 opens.
Example 4-13 Starting YaST2 and Virtual Network Computing boot message
starting VNC server...
A log file will be written to: /var/log/YaST2/vncserver.log ...
***
*** You can connect to <host>, display :1 now with vncviewer
*** Or use a Java capable browser on http://<host>:5801/
***
(When YaST2 is finished, close your VNC viewer and return to this window.)
Active interfaces:
eth0: 2e:82:88:20:14:1e
10.10.12.83/24
fe80::2c82:88ff:fe20:141e/64
2. Select your language and keyboard settings, and if you agree with the SUSE license
terms, select the I agree to the License Terms check box, and click Next.
The System Probing YaST2 window opens, where you can enable multipath in this setup,
as shown in Figure 4-5.
Note: If you do not see the multipath window, there is a problem with the storage
configuration. Before continuing with the installation, see 3.2, “Virtual I/O Server” on
page 20.
4. In this scenario, we use the scc.suse.com system to register this installation. If your setup
has a local SMT server, you can use it instead.
Note: If you do not register now, you must do it later before the HANA DB software is
installed.
After you input the registration information, click Next. When the registration completes, a
window opens that offers to enable the update repositories, as shown in Figure 4-7.
6. Select the IBM DLPAR Utils for SLE 12 ppc64le and IBM DLPAR sdk for SLE 12
ppc64le extensions, and click Next. The Extension and Module Selection window opens,
as shown in Figure 4-9.
Figure 4-9 YaST2 IBM DLPAR Utils for SLE 12 ppc64le License Agreement window
Figure 4-10 YaST2 IBM DLPAR sdk for SLE 12 ppc64 License Agreement window
8. If you agree with the license terms, select I Agree to the License Terms and click Next.
The Import GnuPG Key for IBM-DLPAR-utils repository window opens, as shown in
Figure 4-11.
Figure 4-11 YaST2 Import GnuPG Key for IBM-DLPAR-utils repository window
Figure 4-12 YaST2 Import GnuPG Key for IBM-DLPAR-Adv-toolchain repository window
10. After you check that the ID and Fingerprint are correct, click Trust. The Choose Operating
System Edition selection window opens, as shown in Figure 4-13.
Note: Although RDP is not a requirement, we found that using RDP for operations is a
best practice in SAP landscapes. If that is not your case, clear the check box.
12.No changes are needed in this window. Click Next. The Suggested Partitioning window
opens, as shown in Figure 4-15.
Note: Although you can change the partitioning, there is no actual requirement to do
so. For consistency, use the suggested defaults for the partitioning. If you change them,
use Btrfs because it is the default for SUSE V12.
14.Select the time zone for your system and click Next.
Note: You can click Other Settings to configure the Network Time Protocol (NTP)
settings or to join a domain. However, to keep these instructions more generic, we do
not show those steps at installation time. For more information about how to set up the
NTP client after installation, see SUSE Doc: Administration Guide - Time
Synchronization with NTP.
The Password for the System Administrator root window opens, as shown in Figure 4-17
on page 51.
Figure 4-17 YaST2 Password for the System Administrator root window
15.After you input the password and confirm it, click Next.
Note: If YaST2 states that your password is weak, you see a warning. Input a stronger
password.
16.Click the Software link. The window that is shown in Figure 4-19 on page 53 opens.
Note: In our scenario, we install GNOME and other patterns that are optional rather than
required. You can modify the patterns that you want to install, but your selection must
adhere to SAP Note 1984787. Use the current version of that note.
Figure 4-19 YaST2 Software window
17.In the Software selection, click the SAP HANA Server Base and High Availability
patterns. Click OK.
Note: If you are installing a stand-alone HANA DB without SUSE high availability (HA),
you can select only the SAP HANA Server Base pattern.
Figure 4-20 YaST2 SAP HANA Server Base pattern krb5-32-bit warning window
Note: SUSE is aware of this issue, and at the time of writing is working on a fix for it.
18.Accept the patterns by clicking OK. You are back in the Software window but with the
patterns selected, as shown in Figure 4-21 on page 55.
Figure 4-21 YaST2 Software with patterns selected window
19. In the Firewall and SSH section, click disable for the firewall. If your HANA installation
requires hardening at the OS level, see Operating System Security Hardening Guide for SAP HANA.
20.Click Install to start the installation. On the confirmation window, click Install if you are
ready to perform the installation.
After several minutes, the installer completes and updates the system. The system then
restarts, and you can connect to it by using RDP or SSH when the installation completes.
Installing the service and productivity tools for SUSE Linux Enterprise
Server
If you did not install the service and productivity tools by using the Extension and Module
Selection window that is shown in Figure 4-8 on page 42, you must install the Service and
productivity tools for Linux on Power Servers on the LPAR that you installed. To do so,
download the binary files from the tools website and follow the instructions to install the
repositories. In this scenario, we download the binary files directly to the LPAR and install
them as shown in Example 4-14.
Retrieving
http://public.dhe.ibm.com/software/server/POWER/Linux/yum/download/ibm-power-repo-
latest.noarch.rpm
warning: /var/tmp/rpm-tmp.kejIvd: Header V4 DSA/SHA1 Signature, key ID 3e6e42be:
NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:ibm-power-repo-3.0.0-17 ################################# [100%]
After the file is installed in your system, you must run the /opt/ibm/lop/configure command
and accept the license.
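For illustration, the invocation is simply the following command (the license acceptance
prompt that follows is interactive and can vary by version):
hanaonpower:~ # /opt/ibm/lop/configure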
You can add the files either by using the CLI or during the base operating system (BOS)
installation. After the IBM repositories are visible (run the zypper lr command to check),
you can install the needed software by following the Tools Installation instructions for
SUSE. In this scenario, because the LPAR is managed by an HMC, we install the tools that
are shown in Example 4-15.
Example 4-15 Installing the service and productivity tools binary files
redhana001:~ # zypper install ibm-power-managed-sles12
Refreshing service
'SUSE_Linux_Enterprise_Server_for_SAP_Applications_12_SP3_ppc64le'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...
[ snip ]
Now, you can use dynamic LPAR operations on this LPAR for adding or removing devices,
such as CPU or memory.
Note: Enabling an MTU higher than 1500 can make some hosts unreachable or cause
severe performance degradation if the network configuration across the LAN does not
support it. Check that all the needed configuration is in place before enabling jumbo
frames. After you enable MTU 9000, verify it by running the ping -M do -s 8972
[destinationIP] command. If the ping fails, the path cannot carry unfragmented
9000-byte frames, and the MTU change degrades performance instead of improving it.
MTU changes are not made only at the OS level; they must be matched by the settings of
all participating devices in the flow.
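As an illustration of that verification step, the following sketch assumes a hypothetical eth1
interface that carries the jumbo frame traffic and a peer at 10.10.12.1. The payload size of
8972 bytes equals the 9000-byte MTU minus 28 bytes of IP and ICMP headers:
hanaonpower:~ # ip link set dev eth1 mtu 9000
hanaonpower:~ # ping -M do -s 8972 -c 3 10.10.12.1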
For single root input/output virtualization (SR-IOV), vNIC is not supported. However, you can
use SR-IOV-capable adapters. For more information about this topic and future
developments, see SAP HANA on IBM Power Systems and IBM System Storage - Guides.
Configuring the Network Time Protocol client for SUSE Linux Enterprise
Server
If you did not configure the NTP client at installation time (see Figure 4-16 on page 50), you
must do so before installing the HANA software. In this scenario, we use two NTP servers that
we configure manually. For more information and other configuration options, see SUSE Doc:
Administration Guide - Manually Configuring NTP in the Network.
1. Add the NTP server entries to the /etc/ntp.conf file, as shown in Example 4-16.
Example 4-16 Adding Network Time Protocol IP servers to the ntp.conf file
hana001:~ # echo server 10.10.12.10 iburst >> /etc/ntp.conf
hana001:~ # echo server 10.10.12.9 iburst >> /etc/ntp.conf
2. Enable, start, and query the NTP service by running the systemctl command, as shown in
Example 4-17.
Example 4-17 The systemctl enable, start, and query Network Time Protocol commands
hana001:~ # systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service
to /usr/lib/systemd/system/ntpd.service.
hana001:~ # systemctl start ntpd
hana001:~ # systemctl status ntpd
● ntpd.service - NTP Server Daemon
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor
preset: disabled)
Drop-In: /run/systemd/generator/ntpd.service.d
└─50-insserv.conf-$time.conf
Active: active (running) since Wed 2017-07-12 17:05:14 EDT; 1s ago
Docs: man:ntpd(1)
Process: 42347 ExecStart=/usr/sbin/start-ntpd start (code=exited,
status=0/SUCCESS)
Main PID: 42354 (ntpd)
Tasks: 2 (limit: 512)
CGroup: /system.slice/ntpd.service
├─42354 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -g -u ntp:ntp -c /etc/ntp.conf
└─42355 ntpd: asynchronous dns resolver
3. Query the servers by running the ntpq command. You can see that both NTP servers were
contacted, as shown in Example 4-18.
Important: The log LUNs must either be on solid-state drives (SSDs) or flash
arrays, or the storage subsystem must provide a low-latency write cache area.
The LUNs are locally attached to the node, and each node has the number of LUNs
described before, for either a scale-up or scale-out scenario. For a scale-out scenario, you
must share each data and log LUN to each of the participating scale-out cluster nodes,
which means that you must zone and assign all data and log LUNs to all nodes. For more
information, see 5.3.2, “File systems for scale-out systems” on page 76.
Regarding the HANA shared area, there are different approaches depending on whether
you are working with a scale-up or with a scale-out scenario.
Scale-up systems do not need to share this area with any other nodes, so the simplest
approach is to create a local file system by using locally attached storage disks. In our
scale-up lab systems, in addition to the LUNs that are described in 5.1, “Storage layout” on
page 62, we also add one 50 GB LUN to the LPAR for the HANA shared area.
If you are planning on implementing scale-up systems, see 5.2, “Linux multipath setup” on
page 66.
Note: The size of the HANA shared file system in a scale-out configuration is 1x the
amount of RAM per every four HANA worker nodes. For example, if you have nodes with
140 GB of memory and you have up to four worker nodes, your shared file system size is
140 GB. If you have 5 - 8 working nodes, the size is 280 GB, and so on.
1 Our systems use bigger data and log areas than the guidelines that are presented. Those rules are minimum values.
Scale-out systems have the following alternatives for sharing the HANA shared area:
- Place the /hana/shared file system on a highly available Network File System (NFS)
server and have each scale-out cluster node connect to it over the network.
- Create an IBM Spectrum Scale cluster with the scale-out nodes, and create an
IBM Spectrum Scale file system for the /hana/shared area.
Both alternatives yield the same result, which ensures all scale-out nodes can access the
contents of the /hana/shared file system. Be careful because this file system must be
protected by high availability (HA) technologies, or it becomes a single point of failure (SPOF)
for the entire scale-out cluster.
You might already have an existing NetWeaver shared file system in your landscape. If so,
consider taking advantage of it.
Tip: The minimum IBM Spectrum Scale version is 5.0.0 because prior versions caused
hdblcm to run into issues due to incorrectly reported subblock sizes. For more information,
see SAP HANA and ESS: A Winning Combination, REDP-5436.
Recall that an IBM Spectrum Scale cluster with shared LUNs on all nodes provides a highly
available file system on those LUNs. Even if a node in the scale-out cluster fails, the
/hana/shared content is still accessible by all the remaining nodes. This is a reliable
architecture that is not complex to implement, and it does not require any environment to
be set up externally to the HANA scale-out nodes themselves.
Caution: You must use an NFS server with HA capability or your overall HANA scale-out
cluster will be in jeopardy because of a SPOF in your NFS server system.
Here are services that you must consider when implementing a highly available NFS server
topology in your environment:
- Highly Available NFS service with DRBD and Pacemaker with SUSE Linux Enterprise
High Availability Extension
- Highly Available NFS service on AIX with PowerHA SystemMirror
- High Availability NFS service with DRBD and IBM Tivoli System Automation for
Multiplatform (SA MP) with SUSE Linux Enterprise High Availability Extension
- SAP HANA on NetApp Systems with NFS
For more information about how to set up the NFS server export parameters and the clients
mount parameters, see 5.3.2, “File systems for scale-out systems” on page 76.
Note: If IBM XIV®, IBM Spectrum Accelerate™, IBM FlashSystem® A9000R or IBM
FlashSystem A9000 storage systems are used to provide storage for the HANA
environment, install the IBM Storage Host Attachment Kit. The IBM Storage Host
Attachment Kit is a software pack that simplifies the task of connecting a host to supported
IBM storage systems, and it provides a set of command-line interface (CLI) tools that help
host administrators perform different host-side tasks.
For more information, see IBM Storage Host Attachment Kit welcome page.
The multipath -ll (double lowercase L) command shows the storage disks that are attached
to your system and the paths to access them. Example 5-1 shows the output of one of our lab
systems that contains only the operating system (OS) installation disk that is attached to it.
First, check the line that is marked as 1 in the example. In that line, you can identify the LUN
ID of the target disk, which in this case is 2001738002ae12c88. Also, the disk device name is
dm-0. The characters dm are the standard Linux nomenclature for devices that are managed
by the device mapper service. The output tells you that this is an IBM XIV LUN. So, in
summary, you know that the XIV LUN with ID 2001738002ae12c88 is mapped as /dev/dm-0 in
that system. As this is the only disk you have so far, you know that this is the OS installation
disk.
Line 2 in the output shows some characteristics of that disk. The important information to
notice is the disk size, which in this case is 64 GB.
Finally, line 3 onwards displays each one of the paths over which the disk is accessed. Our
system has four Fibre Channel adapters, and the XIV storage has two controllers, so you see
each disk through eight paths. Linux uses an ordinary disk device name for each path by
which a disk is accessed, and then joins those disks under a device mapper entry. In our
case, the sda through sdh devices are the same OS installation disk that is seen by each of
the eight paths, and then joined under the dm-0 device.
Note: By design, all paths to the XIV storage devices are active. Other storage device
types, such as IBM Spectrum Virtualize™ (formerly IBM SAN Volume Controller) work with
active and passive paths.
You are most likely asking yourself why we have not yet mapped all of the other HANA LUNs
to the system. The answer is based on experience. If you do so, the SUSE Linux Enterprise
Server disk probing mechanisms during the OS installation probe all the disks multiple times:
once during start, another during the initial disk partitioning layout suggestion, and another
one each time you change the partitioning scheme. Probing multiple disks multiple times is a
time-consuming task. So, to save time, attach the HANA LUNs after the OS is installed.
3. Now, you can run the multipath -ll command again and verify that all of your disks
appear in the listing. Send the output to the grep command to make the output shorter. If
you want to see all of the output, run multipath -ll alone. Check that all the disks are
there. Example 5-3 shows the output after attaching the remaining LUNs.
Example 5-3 Output of multipath -ll with all LUNs attached to the system
hanaonpower:~ # multipath -ll | grep dm-
2001738002ae12c8a dm-6 IBM,2810XIV
2001738002ae12c89 dm-5 IBM,2810XIV
2001738002ae12c88 dm-0 IBM,2810XIV
2001738002ae12c8f dm-10 IBM,2810XIV
2001738002ae12c8e dm-8 IBM,2810XIV
2001738002ae12c8d dm-9 IBM,2810XIV
2001738002ae12c8c dm-7 IBM,2810XIV
2001738002ae12c8b dm-4 IBM,2810XIV
2001738002ae12c92 dm-13 IBM,2810XIV
2001738002ae12c91 dm-12 IBM,2810XIV
2001738002ae12c90 dm-11 IBM,2810XIV
Notice in Example 5-3 that we have a total of 11 disks: dm-0, dm-4, dm-5, dm-6, dm-7, dm-8,
dm-9, dm-10, dm-11, dm-12, and dm-13. Also, these names can change when the system
restarts. It is not a best practice to rely on the device naming. A better approach is to use
aliases for the disks. Section 5.2, “Linux multipath setup” on page 66 provides information
about how to use aliases for the disks.
Each storage subsystem vendor has its own best practices for setting up the multipathing
parameters for Linux. Check your storage vendor’s documentation for their specific
recommendations.
The first piece of information that you must know about the /etc/multipath.conf file is that it
is composed of four sections:
- The defaults section: Contains the parameter values that are applied to the devices.
- The blacklist section: Excludes the devices in this list from having the default or specific
device parameter values applied.
- The multipaths section: Defines aliases for the disks.
- The devices section: Overrides the defaults section to enable specific parameter values
according to the storage in use.
Example 5-4 is a fully functional multipath.conf file that is built for an IBM XIV Storage
System. We reference this file throughout this chapter to explain the concepts behind it.
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^(hd|xvd|vd)[a-z]*"
}
multipaths {
#ROOTVG
multipath {
wwid 2001738002ae12c88
alias ROOTVG
}
#USRSAP
multipath {
wwid 2001738002ae12c89
alias USRSAP
}
#HANA DATA
multipath {
wwid 2001738002ae12c8c
alias HANA_DATA_1_1
}
multipath {
wwid 2001738002ae12c8d
alias HANA_DATA_1_2
}
multipath {
wwid 2001738002ae12c8e
alias HANA_DATA_1_3
}
multipath {
wwid 2001738002ae12c8f
alias HANA_DATA_1_4
}
#HANA LOG
multipath {
wwid 2001738002ae12c8a
alias HANA_LOG_1_1
}
multipath {
wwid 2001738002ae12c90
alias HANA_LOG_1_2
}
multipath {
wwid 2001738002ae12c91
alias HANA_LOG_1_3
}
multipath {
wwid 2001738002ae12c92
alias HANA_LOG_1_4
}
#HANA SHARED
multipath {
wwid 2001738002ae12c8b
alias HANA_SHARED01
}
}
devices {
device {
vendor "IBM"
product "2810XIV"
path_selector "round-robin 0"
path_grouping_policy multibus
rr_min_io 15
path_checker tur
failback 15
no_path_retry queue
}
}
Note: The multipath.conf file that is shown here is for the IBM XIV Storage System. All
storage system vendors have their own recommended settings for different storage
systems and OSes that must be followed. Appendix B, “Example of a multipath.conf file for
SAP HANA systems” on page 153 covers the IBM Spectrum Virtualize storage system
recommended multipath.conf settings for Linux OS.
The user_friendly_names parameter controls how the devices under the /dev/mapper
directory are named: when it is set to no (the default), the LUN WWID is used to name the
devices; when it is set to yes, friendlier names of the form /dev/mapper/mpath<n> are used.
Even though the form mpath<n> looks better than the LUN WWID for listing the disks, we
still recommend overriding the disk names with custom alias entries, as described in “The
multipaths section” on page 68.
For more information about user_friendly_names, see SUSE Doc: Storage Administration
Guide - Configuring User-Friendly Names or Alias Names.
For more information about blacklisting devices, see SUSE Doc: Storage Administration
Guide - Blacklisting Non-Multipath Devices.
Example 5-4 on page 66 creates a multipath { } entry for each one of our 11 disks. Each
entry contains the LUN WWID and then the alias that we want to assign to it. How do you
know which LUN is supposed to be used with each alias? Most likely, your storage
administrator created the LUNs per your request, so ask for the LUN ID of each of the LUNs
that were created for you. Verify this information by looking at the output of the multipath
-ll command, as shown in Example 5-3 on page 65, and check that the LUN IDs and sizes
match what you are expecting.
Important: Check that you correctly assign the log LUN ID to your log disk alias, especially
if you are directly specifying an SSD or flash disk to be used as the HANA log area.
Scale-out clusters that use the HANA shared area from a highly available NFS server
infrastructure do not see a locally attached HANA shared LUN, and do not need to define an
alias for the HANA shared disk. Scale-out clusters that use IBM Spectrum Scale for the HANA
shared area can still create an alias for the shared LUN on the cluster nodes.
The following naming convention is not mandatory, but is a best practice. Consider naming
your HANA data and log LUNs according to the following scheme:
- HANA_DATA_<node number>_<data disk number>
- HANA_LOG_<node number>_<log disk number>
This naming scheme is especially useful in scale-out clusters where all data and log disks
must be mapped to all nodes. In this way, you can easily identify that disk HANA_DATA_3_2
is the second data disk from node 3.
Regardless of how many storage units you use, the best practice is to isolate the settings for
them inside a device { } definition, as shown in Example 5-4 on page 66. Our configuration
has a device { } entry for an IBM (vendor) 2810XIV (product) storage unit type. We then have
definitions for a multitude of parameters, such as path_selector, rr_min_io, and others.
These settings deliver better performance for a multipath configuration by using IBM XIV.
Each storage vendor has its own recommendations for which parameters to use in this
section and how to tune them for performance. Check their respective documentation for their
best practices.
Now, check what happens to the output of the multipath -ll command, as shown in
Example 5-6. We use a combination of the multipath -ll command with the grep command
to output only the important information we want to validate: alias, WWID, and disk size.
Example 5-6 Checking the changes that are applied to the multipath configuration
hanaonpower:~ # multipath -ll | grep IBM -A 1
USRSAP (2001738002ae12c89) dm-5 IBM,2810XIV
size=48G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
HANA_LOG_1_1 (2001738002ae12c8a) dm-6 IBM,2810XIV
size=35G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
HANA_LOG_1_2 (2001738002ae12c90) dm-11 IBM,2810XIV
size=35G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
HANA_LOG_1_3 (2001738002ae12c91) dm-12 IBM,2810XIV
size=35G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
HANA_LOG_1_4 (2001738002ae12c92) dm-13 IBM,2810XIV
size=35G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
ROOTVG (2001738002ae12c88) dm-0 IBM,2810XIV
size=64G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
HANA_SHARED01 (2001738002ae12c8b) dm-4 IBM,2810XIV
size=144G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
--
All disks are now referenced by their aliases and are mapped as symbolic links in the
/dev/mapper/ directory. Example 5-7 lists our lab environment system’s disk aliases in
/dev/mapper.
Now that you have all your LUNs available and using aliases and the multipath daemon is set
up according to your storage vendor specifications, it is time to create the HANA file systems.
Each file system is created on its own disks and its own LVM elements. The <SID> is the SAP
ID (SID) of the instance that you are installing. For example, if your SID is HP0, then the mount
points are /hana/data/HP0 and /hana/log/HP0.
Also, notice that the mount point setup for scale-out is different and is treated as described in
5.3.2, “File systems for scale-out systems” on page 76.
As the implementation for scale-up systems and scale-out clusters differs slightly, there is a
separate section for each scenario.
All tunings that are described in this section come from the IBM System Storage Architecture
and Configuration Guide for SAP HANA Tailored Datacenter Integration. The values that are
used in this publication are current at the time of writing. When you implement SAP HANA on
IBM Power Systems, review the white paper to check for current tuning recommendations and
guidance.
Note: If you work with Multiple Component One System (MCOS) implementations, where
more than one HANA instance co-exists in the same OS, use /hana/data/<SID> and
/hana/log/<SID> as the mount points for your file systems, and name each instance's
volume groups (VGs) and logical volumes (LVs) uniquely to avoid confusion.
Example 5-8 Creating the HANA data logical volume manager volume group
hanaonpower:~ # vgcreate -s 1M --dataalignment 1M hanadata
/dev/mapper/HANA_DATA*
Physical volume "/dev/mapper/HANA_DATA01" successfully created
Physical volume "/dev/mapper/HANA_DATA02" successfully created
Physical volume "/dev/mapper/HANA_DATA03" successfully created
Physical volume "/dev/mapper/HANA_DATA04" successfully created
Volume group "hanadata" successfully created
Example 5-9 Creating the HANA data logical volume
hanaonpower:~ # lvcreate -i 4 -I 256K -l 100%FREE -n datalv hanadata
Logical volume "datalv" created.
3. Create the file system on the newly created LV. The SAP supported file system that we
use in our examples is Extents File System (XFS). Example 5-10 shows this step. Notice
that we use some tuning parameters, such as:
– -b size=4096: Sets the block size to 4096 bytes.
– -s size=4096: Sets the sector size to 4096 bytes.
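As a sketch, assuming the hanadata-datalv logical volume that is created in Example 5-9,
the command with these tuning flags looks like the following:
hanaonpower:~ # mkfs.xfs -b size=4096 -s size=4096 /dev/mapper/hanadata-datalv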
2. Create the LV. The tuning parameters are the same ones that are used for the data file
system, as shown in Example 5-12.
3. Create the XFS file system on the newly created LV, as shown in Example 5-13. The
HANA log file system has the same settings that are used for the HANA data area.
2. When you create the LV, there is no need to apply any striping parameters because the
shared area is mapped onto one disk only. Example 5-15 shows the creation of the LV.
3. Create the XFS file system on the newly created LV. For the HANA shared area, there is
no need to apply any file system tuning flags, as shown in Example 5-16.
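As a sketch, the whole sequence for the shared area, assuming the HANA_SHARED01 alias
from Example 5-4 and the hanashared-sharedlv naming that appears in Example 5-21 (the
vgcreate alignment flags are carried over from the data area as an assumption), looks like
the following:
hanaonpower:~ # vgcreate -s 1M --dataalignment 1M hanashared /dev/mapper/HANA_SHARED01
hanaonpower:~ # lvcreate -l 100%FREE -n sharedlv hanashared
hanaonpower:~ # mkfs.xfs /dev/mapper/hanashared-sharedlv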
The HANA mount points are standardized by SAP, and must follow the guidelines that are
stated in 5.3, “File system creation and setup” on page 70. Complete the following steps:
1. Example 5-20 depicts the creation of those mount points according to the proper SAP
nomenclature.
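As a sketch, assuming the example SID HP0 that is mentioned earlier in this section and
including the /usr/sap mount point that appears in Example 5-21, the mount points are
created as follows:
hanaonpower:~ # mkdir -p /hana/data/HP0 /hana/log/HP0 /hana/shared /usr/sap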
2. Append a section to the /etc/fstab file with the definitions of the HANA file systems.
Example 5-21 contains the entries that you must append, in bold. Do not change any
existing entries, and remember that your UUIDs are different from the ones in
Example 5-21.
Example 5-21 Creating entries in /etc/fstab for the HANA file systems
hanaonpower:~ # cat /etc/fstab
UUID=58e3f523-f065-4bea-beb5-5ac44313ad30 swap swap defaults 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a / btrfs defaults 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /home btrfs subvol=@/home 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /opt btrfs subvol=@/opt 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /srv btrfs subvol=@/srv 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /usr/local btrfs subvol=@/usr/local 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /var/log btrfs subvol=@/var/log 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /var/opt btrfs subvol=@/var/opt 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /var/spool btrfs subvol=@/var/spool 0 0
UUID=72f1be2c-022d-49f8-a6e6-a9c02122375a /.snapshots btrfs subvol=@/.snapshots 0 0
#hana
/dev/mapper/hanadata-datalv /hana/data/<SID> xfs defaults 0 0
/dev/mapper/hanalog-loglv /hana/log/<SID> xfs defaults 0 0
/dev/mapper/hanashared-sharedlv /hana/shared xfs defaults 0 0
/dev/mapper/usrsap-saplv /usr/sap xfs defaults 0 0
3. Finally, as depicted in Example 5-22, mount all the HANA file systems and use df -h to
check that they are all mounted. Take the time to check the file system sizes as well.
Note: The /usr/sap file system is still local to each node, so follow the procedures that are
described in “The /usr/sap file system” on page 74 to create it on each scale-out cluster
node.
Regardless of which implementation is used, your nodes must pass the SAP HANA Hardware
Configuration Check Tool (HWCCT) File System I/O benchmark tests.
In this implementation, all three file systems (data, log, and shared) are created in your
Elastic Storage Server or IBM Spectrum Scale infrastructure, and all scale-out nodes are
connected to them. For more information, see SAP HANA and ESS: A Winning Combination,
REDP-5436.
After reviewing that paper, confirm that all of your nodes can see the file systems that are
shown in bold in Example 5-23. Our example is from a four-node scale-out cluster, so the
file systems are sized accordingly.
Example 5-23 HANA scale-out cluster file systems that use Elastic Storage Server
saphana005:~ # mount
[ ... snip ...]
hanadata on /hana/data type gpfs (rw,relatime)
hanalog on /hana/log type gpfs (rw,relatime)
hanashared on /hana/shared type gpfs (rw,relatime)
saphana005:~ #
saphana005:~ # df -h
Filesystem Size Used Avail Use% Mounted on
[... snip ...]
hanadata 1.0T 257M 1.0T 1% /hana/data
hanalog 512G 257M 512G 1% /hana/log
hanashared 1.0T 257M 1.0T 1% /hana/shared
Basically, you must ensure that your NFS server infrastructure has HA built in, as described in
“HANA shared area that is managed with Network File System” on page 63. Then, you must
create an export entry on the NFS server to be shared to the HANA clients.
In this book, we illustrate a single, overall export entry on the NFS server that shares the
entire /hana folder instead of having a separate entry each for data (/hana/data), log
(/hana/log), and shared (/hana/shared). If you want to use NFS only for the shared area,
which is a more commonly used scenario, adjust your settings accordingly. Even though we
use a single export point from the NFS server, each area (data, log, and shared) has its own
file system on the NFS server and uses a compliant file system type according to SAP Note
2055470, for example, XFS. Apply all the tunings that are pertinent to each file system, as
described in 5.3.1, “File systems for scale-up systems” on page 71.
The NFS server export entry in the /etc/exports file for the /hana directory uses the
parameters that are outlined in Example 5-24.
The export entry is granted permission to be mounted only by the participating scale-out
nodes. This is a best practice that ensures that other systems do not have any access to the
HANA file systems of the scale-out cluster. Example 5-24 shows the /hana directory
exported to four hosts: node1, node2, node3, and node4. This is a single, long line that
contains all hosts to which the entry is exported, with each host description in the following
format:
<node_hostname>(fsid=0, crossmnt, rw, no_root_squash, sync, no_subtree_check)
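As a sketch, assuming the four host names node1 - node4, the resulting single line in the
/etc/exports file looks like the following. In the actual file, there must be no white space
between a host name and its option list:
/hana node1(fsid=0,crossmnt,rw,no_root_squash,sync,no_subtree_check) node2(fsid=0,crossmnt,rw,no_root_squash,sync,no_subtree_check) node3(fsid=0,crossmnt,rw,no_root_squash,sync,no_subtree_check) node4(fsid=0,crossmnt,rw,no_root_squash,sync,no_subtree_check)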
After completing the required configuration on your NFS server with that export entry, each of
the scale-out nodes must mount it by using the correct parameters to ensure optimal
performance.
Create a /hana mount point on your scale-out cluster nodes, and add the following line to the
/etc/fstab file, where <nfsserver> is the resolvable host name of the NFS server:
<nfsserver>:/hana /hana nfs rw,soft,intr,rsize=8192,wsize=8192 0 0
Run the mount /hana command on all nodes and test whether they can all access that NFS
share with read/write permissions.
To obtain the current NFS best practices from SAP, see SAP HANA Storage Requirements,
especially if you are planning to use NFS for HANA data and log file systems.
Installing IBM Spectrum Scale and creating a file system is out of the scope of this
publication. For more information about implementing IBM Spectrum Scale, see
Implementing IBM Spectrum Scale, REDP-5254. We provide guidance about how to design
your IBM Spectrum Scale cluster to ensure HA. No special tuning is required for the file
system.
You use the Storage Connector API to create one data and one log file system for each of
the master and worker nodes in the scale-out solution. Figure 5-1 illustrates such a scenario
by using a four-node cluster that is composed of one master node, two worker nodes, and
one standby node. The standby node takes over if one of the other nodes fails.
Figure 5-1 Scale-out cluster: Data and log file systems that are under the Storage Connector API
To get started, create each of the data and log file systems individually, each in its own VG
and LV, according to the explanations in “HANA data file system” on page 71 and “HANA log
file system” on page 72. Recall from “The multipaths section” on page 68 that we recommend
naming the LUNs after the standard HANA_DATA_<node number>_<data disk number> and
HANA_LOG_<node number>_<log disk number>. So, in a four-node cluster with one master
node, two worker nodes, and one standby node, we have the following (in the format
<VG name>-<LV name>: participating disks):
Master (represented as node 1):
– hanadata01-datalv01: /dev/mapper/HANA_DATA_1_*
– hanalog01-loglv01: /dev/mapper/HANA_LOG_1_*
Worker (represented as node 2):
– hanadata02-datalv02: /dev/mapper/HANA_DATA_2_*
– hanalog02-loglv02: /dev/mapper/HANA_LOG_2_*
Worker (represented as node 3):
– hanadata03-datalv03: /dev/mapper/HANA_DATA_3_*
– hanalog03-loglv03: /dev/mapper/HANA_LOG_3_*
Standby (represented as node 4): Does not have associated data or log file systems, but
takes over any of the other three nodes' file systems if any of those nodes fail.
Important: All of the data and log LUNs are attached to all of the scale-out cluster nodes,
so any of the nodes can access and mount the file systems.
You can create all the file systems on just one node because all of them have access to all the
LUNs. You do not need to include the information of the file systems in the /etc/fstab file
because HANA in a scale-out cluster handles the mounting of them automatically when you
use the storage connector API. When you are done creating the file systems, run a vgscan
command on all nodes and check that they can all see the VGs that you created.
Now, from a disks and file systems perspective, everything is set up to trigger a HANA
scale-out installation.
The first change that you can make to improve performance is to add the rr_min_io_rq
parameter to the device { } section of your /etc/multipath.conf file and set it to 1.
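As a sketch, based on the device { } section from Example 5-4 on page 66, the added
parameter looks like the following (whether you keep or replace the existing rr_min_io value
depends on your storage vendor's guidance, so treat this detail as an assumption):
devices {
        device {
                vendor "IBM"
                product "2810XIV"
                # other parameters from Example 5-4 remain unchanged
                rr_min_io_rq 1
        }
}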
Example 5-25 Choosing the NOOP I/O scheduler at start time with the elevator parameter
hanaonpower:~ # cat /etc/default/grub
# If you change this file, run 'grub2-mkconfig -o /boot/grub2/grub.cfg' afterward
to update /boot/grub2/grub.cfg.
# Uncomment to set your own custom distributor. If you leave it unset or empty,
the default
# policy is to determine the value from /etc/os-release
GRUB_DISTRIBUTOR=
GRUB_DEFAULT=saved
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=8
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet showopts elevator=noop"
GRUB_CMDLINE_LINUX=""
After making the changes to /etc/default/grub, run grub2-mkconfig to apply the changes to
the grub2 boot loader, as shown in Example 5-26.
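The command, as noted in the header comment of /etc/default/grub, is the following:
hanaonpower:~ # grub2-mkconfig -o /boot/grub2/grub.cfg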
Afterward, if you also want to change the scheduler algorithm dynamically without restarting
your system, you can do so by using the /sys interface. Every device mapper disk that
represents your LUNs has an I/O scheduler interface that is accessed at
/sys/block/<dm-X>/queue/scheduler. Example 5-27 shows how to change the I/O scheduler
of one of our device mapper disks from cfq to noop.
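As a sketch, using the dm-4 device from Example 5-3, the change and a before-and-after
check look like the following (the entry in brackets marks the active scheduler):
hanaonpower:~ # cat /sys/block/dm-4/queue/scheduler
noop deadline [cfq]
hanaonpower:~ # echo noop > /sys/block/dm-4/queue/scheduler
hanaonpower:~ # cat /sys/block/dm-4/queue/scheduler
[noop] deadline cfq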
To check the current scheduler of all of the underlying sd path devices, and then to switch
them all to noop, you can loop over the block devices:
for BLOCKDEV in `ls -1 /sys/block/ | grep sd`; do echo "Device: $BLOCKDEV"; cat /sys/block/$BLOCKDEV/queue/scheduler; done
for BLOCKDEV in `ls -1 /sys/block/ | grep sd`; do echo "Device: $BLOCKDEV"; echo noop > /sys/block/$BLOCKDEV/queue/scheduler; done
You must do the same task for each disk that you have in your system by using their dm-X
form. To get a list of such disks, use the commands from Example 5-7 on page 70.
Additionally, this chapter provides a quick guide about how to use SAP HANA Studio to
connect to the HANA instance to manage it.
This chapter demonstrates how to perform the installation by using both the GUI and the text
interfaces. The results are the same for either approach.
The GUI installation requires the installation of the X11 packages and a Virtual Network
Computing (VNC) server to which to connect. If you followed the operating system (OS)
installation guidelines that are described in SAP Note 2235581 (select the OS configuration
guide for your OS), you already have an X11-capable, VNC-enabled environment, and all
you need to do is connect to it by using a VNC client.
If you prefer to perform the installation in text mode, all you need is an SSH connection to your
system. It is simple because this approach uses a text-based preinstallation wizard to create
a response file and then uses it to drive the installation in unattended mode.
From an installation options point of view, you select which HANA components to install:
the server, the client, the Application Function Library (AFL) component, the Smart Data
Access component, and so on. In this publication, we install only the server and the client
components because we are interested in providing you with the guidelines for installing a
highly available HANA environment.
Note: As documented in SAP Note 2423367, starting with HANA 2.0 SPS1, all databases
(DBs) are configured in multitenant mode only. There is no option to install a
single-container instance.
Before you start the installation, you need the following information as input for the installer:
The SAP ID (SID) of the instance that you plan to install.
The instance number of the instance that you plan to install.
The passwords that you plan to use for the <sid>adm user, the SYSTEM user, and the SAP
Host Agent user (sapadm).
The following sections provide information about how the instance number works. This
information is used later to access the instance through HANA Studio, and is also used to
assign the port number that is used for communicating with HANA. The port number has the
form of 3<instance number>15. If you use an instance number of 00, then the port number
that HANA listens to is 30015.
Also, if you plan to use SAP HANA System Replication (HSR) between two HANA instances,
the replication itself uses the next instance number to assign the port number for replication
services. For example, if the instances you want to replicate have a SID / instance number of
RB1 / 00, then connecting to those instances happens over port 30015 and the replication
between them over port 30115.
6.2 Installation methods
This section guides you through the installation process of HANA.
Note: Throughout the following chapters, we show the installation of HANA by using SUSE
Linux. The same steps apply for Red Hat Enterprise Linux. We point out differences
between HANA for the SUSE and Red Hat Enterprise Linux installations where applicable.
If you have not yet done so, download the installer files for HANA, as described in 2.2.3,
“Getting the SAP HANA on IBM Power Systems installation files” on page 17. At the time of
writing, the HANA 2.0 on Power Systems installer is composed of four compressed files (the
first one has a .exe extension, and the other three have a .rar extension):
51052480_part1.exe
51052480_part2.rar
51052480_part3.rar
51052480_part4.rar
In SUSE, you can decompress those files by running the unrar command. As a best practice,
place these installation files in a directory inside /tmp to avoid having issues with file
permissions during the installation. Example 6-1 shows the command to decompress the
files, along with some of the expected output.
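A plausible invocation, assuming the files were copied to a hypothetical /tmp/hana_install
directory, is the following; unrar then picks up the remaining .rar parts automatically:
hanaonpower:/tmp/hana_install # unrar x 51052480_part1.exe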
Creating 51052480 OK
Creating 51052480/DATA_UNITS OK
Creating 51052480/DATA_UNITS/HDB_CLIENT_LINUX_S390X_64 OK
Extracting 51052480/DATA_UNITS/HDB_CLIENT_LINUX_S390X_64/hdbsetup OK
[…]
Extracting 51052480/COPY_TM.TXT OK
Extracting 51052480/COPY_TM.HTM OK
Extracting 51052480/MD5FILE.DAT OK
Extracting 51052480/SHAFILE.DAT OK
All OK
When you complete the decompression of the files, you see that the contents are extracted
into a folder that is named after an SAP product number, and that it contains a data structure
similar to Example 6-2. The HANA installer scripts and components are inside the DATA_UNITS
folder.
If you choose to perform an installation by using a GUI, see 6.2.1, “GUI installation” on
page 86. If you plan to perform a text-mode installation, see 6.2.2, “Text-mode installation” on
page 97.
2. After connecting to the system, log in by using the root user and password.
3. After you are logged in, open a terminal and go to the DATA_UNITS/HDB_LCM_LINUX_PPC64LE
directory where you decompressed the HANA installer files. Then, run the hdblcmgui
command to begin the installation.
4. After the GUI starts, provide the inputs that are required by the GUI installer as it
progresses. The first window shows a list of the available components, as shown in
Figure 6-2. Click Next and proceed with the installation.
6. The next window prompts you for the components that you want to install. In our lab
environment, we installed only the server and client options, as shown in Figure 6-4. The
client is optional and is not required for a HANA installation, so you can leave it out if you
do not plan to use it. Click Next to proceed.
8. Input the information for the SID of your system. Most of the other parameters are already
predefined, such as the host name and the installation path. As a best practice, keep the
default values except for the SID and instance number, which you input according to your
planning in 6.1, “SAP HANA installation overview” on page 84. You can also change the
System Usage parameter according to what the system is used for. In our example
(Figure 6-6), we selected Custom - System usage is neither production, test nor
development. Your choice of System Usage relaxes some of the landscape test checks
against your environment. After completion, click Next.
10.The window shows information about the host certificate. As a best practice, do not
change this information; accept the defaults, as shown in Figure 6-8. Click Next.
Note: For environments that employ central user management, such as Microsoft
Active Directory or Lightweight Directory Access Protocol (LDAP), you must create the
HANA <SID>adm user before running the HANA installation process. This action ensures
that the <SID>adm user has the proper system user ID of your choice, especially when
you already have an SAP landscape with an existing <SID>adm user in your Microsoft
Active Directory or LDAP user base.
12.Define the password for the DB SYSTEM user, as shown in Figure 6-10.
While the HANA DB is being installed, you see a window with installation details similar to
Figure 6-12 on page 97.
Figure 6-12 HANA system installation progress
When the installation finishes, your HANA DB is installed, running, and fully functional. To set
up your HANA Studio connection to your newly installed HANA system, go to 6.3,
“Postinstallation notes” on page 101.
Also, because this is a scale-up installation, answer no when the installer prompts whether
you want to add more hosts to the system.
Choose an action
3 | Exit (do nothing) |
----------------------------------------------------------------------------------
1 | server | No additional components
2 | all | All components
3 | afl | Install SAP HANA AFL (incl.PAL,BFL,OFL,HIE) version
2.00.010.0000.1491308763
4 | client | Install SAP HANA Database Client version 2.1.37.1490890836
5 | smartda | Install SAP HANA Smart Data Access version 2.00.0.000.0
6 | xs | Install SAP HANA XS Advanced Runtime version 1.0.55.288028
7 | epmmds | Install SAP HANA EPM-MDS version 2.00.010.0000.1491308763
Installing components...
Installing SAP HANA Database...
6.3 Postinstallation notes
After you install HANA on your system, this section provides useful information to help you
get started with accessing and managing your DB. Complete the following steps:
1. First, you must install HANA Studio on your workstation. The installer is inside the
DATA_UNITS/HDB_STUDIO_<platform> folder in your HANA system. Use an SCP client to
copy the folder corresponding to your workstation architecture and install the software on
your workstation. In our case, we use the MacOS 64-bit version of HANA Studio.
2. After installation, when you open HANA Studio on your workstation, you see a window
similar to Figure 6-13. Add a connection to your newly installed HANA instance by using
the Add System button on the left side of the Systems navigation tab, as shown by the red
arrow in Figure 6-13.
3. Input the required information, as shown in Figure 6-14. If your HANA environment is
already registered in your DNS server, use the host name to complete the information that
is depicted as 1 in the figure; otherwise, use the IP address. Complete the instance
number information for 2. Starting with HANA 2.0 SPS1, all systems are multi-tenant, so
make the proper selection as described by 3 and use the SYSTEM DB user to connect to the
DB, as shown by 4. Give it a proper description, as outlined by 5, and click Next.
Note: When you add the system in HANA Studio, you must select Multiple containers
because HANA V2.0 SPS01 uses the multiple-container DB mode. Otherwise, an error
message is displayed.
4. Configure your connection to the DB by using the SYSTEM user and its password, as shown
in Figure 6-15. Click Finish to complete this setup.
After completion, double-click your instance entry on the left side of the Systems navigation
menu to open its properties, as shown in Figure 6-16. Go to the Landscape tab of your
instance to validate that all services are running.
Figure 6-16 HANA Studio: Checking the Landscape tab of your HANA system
You now have full control of your HANA system. From within the HANA Studio interface, you
can take backups of your instance, configure HSR, change the configuration of the DB
parameters, and much more.
Note: It is a best practice to configure the backup strategy of your HANA system.
For a complete guide about how to manage a HANA DB, see SAP HANA Administration
Guide.
If you are planning to configure HSR between two scale-up systems, or configuring high
availability (HA) with the SUSE HA Extension, see Chapter 7, “SAP HANA System
Replication for high availability and disaster recovery scenarios” on page 105.
Chapter 7. SAP HANA System Replication for high availability and disaster recovery scenarios
This chapter also provides a step-by-step description of how to set up HSR and perform some
simple tests before you place HSR into production.
One of the four replication modes is the asynchronous mode, in which the primary system
sends the data to the secondary one but does not wait for the secondary to acknowledge it. In
this manner, commit operations on the primary node do not require an acknowledgment from
the secondary one, which does not delay the overall process for creating or modifying data on
the primary system. However, an asynchronous operation means that your recovery point
objective (RPO) is nonzero. To achieve zero data loss, you must use synchronous replication,
which is usually done between systems at the same site. Nevertheless, the asynchronous
replication mode is the only alternative for replicating data between distant data centers
because making it synchronous in this case adds too much latency to commit operations in
the primary site.
For the synchronous mode, all data that is created or modified on the primary system is sent
to the secondary system, and commit operations must wait until the secondary system replies
that the data is safely stored on its disks. This process ensures an RPO of zero, but adds
latency to all write operations. If you choose to replicate data by using this mode, your
systems must adhere to the file system key performance indicators (KPIs) in which the
maximum latency for log writing is 1000 μs. In essence, the network connection between your
two systems must provide a low latency to enable this setup.
Another mode is the synchronous in-memory, where replication also happens synchronously,
but the commit is granted after the replicated data is stored onto the secondary node’s
memory. After that, the secondary node makes this data persistent on its disks. As the
memory of the secondary system is already loaded with data, the takeover is much faster.
The last mode is the synchronous with full sync option, which requires the secondary node to
be online and receiving the replicated data. If the secondary node goes offline, all
transactions on the primary node are suspended until the secondary node comes back
online.
The delta data shipping mode (delta_datashipping) is the classic mode, and it works by shipping all of
the modified data to the secondary system every 10 minutes by default. During these
10-minute intervals, the redo-logs are shipped to the secondary system. So, when a failure of
the primary node occurs, the secondary processes the redo-logs since the last delta shipping
point.
In the log replay mode (logreplay), only the redo-logs are sent over to the secondary system,
and they are immediately replayed. This mode makes the transfer lighter in terms of network
consumption, and also makes the takeover operations faster.
The log replay with read access mode (logreplay_readaccess) works in a similar fashion to
the log replay mode, except that it also enables the receiver to operate in read-only mode, so
you can query the database (DB) with proper SQL statements.
You can use an isolated network or VLAN with HSR. It is not mandatory, but it is a best
practice to avoid having the replication network traffic compete with the data traffic on the
same logical network interface. If you decide to follow this best practice, see “Using a
dedicated network for SAP HANA System Replication” because there are some extra
configuration steps that you must make to your HANA instances' global.ini profiles.
Using a dedicated network for SAP HANA System Replication
The primary network for HANA is the data network through which the application servers
access the DB. When you use a secondary network to handle HSR, your network layout looks
similar to Figure 7-1, where HSR is set up between two scale-up systems on the same site.
Figure 7-1 HANA environment with a separate network for SAP HANA System Replication
In our implementation, the 10.10.12.0/24 network is used by our hosts for data traffic, and for
communicating with the external world. The 192.168.1.0/24 network is defined for HSR.
Example 7-1 illustrates how our /etc/hosts file is configured to meet the second requirement
in 7.1.1, “SAP HANA System Replication requirements” on page 107. Note the sections in
bold with the definitions for the IP addresses for the data and replication networks.
127.0.0.1 localhost
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
When configuring HSR, you must use the host name of the system to create the replication
scheme. However, the host names of the systems are always bound to the primary interface
that is used for data traffic. So, if you do not explicitly instruct HANA to use the replication
interface with HSR, it ends up using the data network.
All HANA instances have a global.ini configuration file. There are many parameters that
can be changed within this file. To instruct HANA to use the replication interface with HSR,
you must edit the following file:
/hana/shared/<SID>/exe/linuxppc64le/HDB_<HANA version>/config/global.ini
Open this file by using your favorite text editor and look for a section that is named
“[system_replication_hostname_resolution]”, as shown in Example 7-2. It is empty when
you first view it, and then you add the lines for your hosts, as shown by the highlights 1 and 2.
These are the entries that instruct HANA to use the IP addresses in the 192.168.1.0/24
network for HSR. You are still able to use the systems host name for replication.
Example 7-2 Configuring the global.ini file to use a dedicated replication network for SAP HANA
System Replication
[... cropped ...]
# .short_desc
# specifies the resolution of remote site hostnames to addresses for system replication
# .full_desc
# specifies the resolution of remote site hostnames to addresses for system replication.
# for multi-tier replication only the direct neighbors must be specified
# Format: ipaddress = internal hostname
# e. g. 192.168.100.1 = hanahost01
[system_replication_hostname_resolution]
192.168.1.81=hana002 1
192.168.1.82=hana003 2
#
# .short_desc
# Configuration of system replication communication settings
# .full_desc
# This section contains parameters that are related to configuration
# of various system replication communication settings.
[system_replication_communication]
# .short_desc
# the network interface the processes shall listen on, if system replication is enabled.
# .full_desc
#
# Possible values are:
# - .global: all interfaces
listeninterface = .global 3
[... cropped ...]
Also, check that the listeninterface parameter is set to .global inside the section
[system_replication_communication], as shown in Example 7-2 and depicted by 3.
Special consideration: You must be careful when you change the value of the
listeninterface parameter that is under the [system_replication_communication]
section because there are two different listeninterface parameters in that file, and the
other one has nothing to do with HSR.
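As an alternative sketch (not taken from this publication), the same parameters can also be set
from the HANA Studio SQL console by using SQL, with values that match Example 7-2:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('system_replication_hostname_resolution', '192.168.1.81') = 'hana002',
      ('system_replication_hostname_resolution', '192.168.1.82') = 'hana003',
      ('system_replication_communication', 'listeninterface') = '.global'
  WITH RECONFIGURE;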
7.2 Implementing SAP HANA System Replication
This section illustrates how to replicate data between two scale-up systems. Both systems
already meet the three requirements that are described in 7.1.1, “SAP HANA System
Replication requirements” on page 107. Complete the following steps:
1. You must take a backup. Considering that we are using an ordinary lab environment that
contains no data inside the DB, we take a file-based backup. To do so, right-click your
HANA instance inside HANA Studio and go through the menu to trigger a backup of the
SYSTEM DB, as shown in Figure 7-2. This action opens a wizard that defaults to taking a
file-based backup if you do not change any of its predefined parameters. Create the
backup and proceed. Then, back up your SID tenant DB by repeating the process, but
instead choose Back Up Tenant Database, as shown in Figure 7-2. Back up both the
source and destination systems, which take part in the replication setup.
2. Get the systemPKI SSFS data and key files from the source system and copy them to the
secondary system. The locations where these files are kept are:
– /hana/shared/<SID>/global/security/rsecssfs/data/SSFS_<SID>.DAT
– /hana/shared/<SID>/global/security/rsecssfs/key/SSFS_<SID>.KEY
Example 7-3 shows how to perform these operations. In our example, we are logged on to
one of the servers (hana002) and copy the files over to the other server (hana003).
Example 7-3 Exchanging the systemPKI SSFS data and key files
hana002:/ # cd /hana/shared/RB1/global/security/rsecssfs
hana002:/hana/shared/RB1/global/security/rsecssfs # ls
data key
hana002:/hana/shared/RB1/global/security/rsecssfs # scp data/SSFS_RB1.DAT \
root@hana003:/hana/shared/RB1/global/security/rsecssfs/data/
Password:
SSFS_RB1.DAT 100% 2960 2.9KB/s
00:00
hana002:/hana/shared/RB1/global/security/rsecssfs # scp key/SSFS_RB1.KEY \
root@hana003:/hana/shared/RB1/global/security/rsecssfs/key
Password:
SSFS_RB1.KEY 100% 187 0.2KB/s
00:00
3. Enable replication on the source system. Avoid calling the systems primary and
secondary because their roles are interchangeable as you perform takeovers. This is why
we describe them as source and destination, which is how SAP refers to them in its
replication wizard.
4. Enable HSR on the source system by right-clicking the source system in the HANA Studio
and go to the menu that is shown in Figure 7-3.
Figure 7-3 Enabling SAP HANA System Replication on the source system
5. Enable System Replication, as shown in Figure 7-4. All other options are disabled
because they are not applicable to a source system that is being set up for replication.
Click Next.
6. Assign a Primary System Logical Name to your source system, as shown in Figure 7-5.
This parameter is also known as the site name, and the HANA Studio HSR status
information under the Landscape →System Replication tab refers to it as site name.
In our example, we decide to name this first system MONTANA. Good choices are names
that relate to the physical location of your system, such as data center locations or building
names.
That is all there is to setting up the source system for HSR. Let us now set up the
destination system. The destination instance must be shut down.
7. After the destination instance is shut down, right-click it in HANA Studio and open the
replication wizard by going to the menu that is shown in Figure 7-3 on page 111. Because
the system is shut down, the options for replication are different, as shown in Figure 7-6.
The only action that you can take is to register the system as the secondary system for
replication, that is, the one that receives the replica.
Select Register secondary system and click Next.
8. Set the Secondary System Logical name and the replication parameters for replication
mode and operation mode. In our example, we use CYRUS as the secondary system logical
name (or site name), as shown by 1 in Figure 7-7. We choose to replicate it synchronously,
as shown by 2. The operation mode that we choose is log replay, as shown by 3. Also, we
make this destination system point to the source one to get the data, as shown by 4, by
using the host name and HANA DB instance number of the source system. Click Finish to
complete the wizard.
9. The destination system starts in replication mode. You can check that the replication is
running by double-clicking the source instance in HANA Studio to open its properties,
and then selecting Landscape → System Replication, as shown in Figure 7-8. Check
that all entries are marked as ACTIVE in the REPLICATION_STATUS column.
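For reference, the following is a hedged command-line sketch of the same source and
destination setup (run as the <SID>adm user, rb1adm in our environment; the instance
number is a placeholder, and the exact hdbnsutil options can vary by HANA revision):

# On the source system (hana002): enable replication and assign the site name
hdbnsutil -sr_enable --name=MONTANA

# On the destination system (hana003), with its HANA instance stopped:
# register it against the source with replication mode sync and
# operation mode logreplay, matching the wizard values above
hdbnsutil -sr_register --remoteHost=hana002 --remoteInstance=<instance number> \
    --replicationMode=sync --operationMode=logreplay --name=CYRUS

# On either system: check the replication state
hdbnsutil -sr_state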
The following sections assume that your HSR is configured and synchronized.
7.3.1 Creating a test table and populating it
To create a table in your HANA DB, complete the following steps:
1. Open the SQL console inside HANA Studio, as shown in Figure 7-9, by right-clicking your
HANA instance and selecting Open SQL Console.
2. After the SQL console opens, use the SQL statements in Example 7-4 to create a table,
populate it with one entry, and then query its contents. You can run one command at a
time, or paste them all, one per line, and run them together. To run an SQL statement in
the SQL console, press the F8 key or click the Execute button at the upper right of the console.
After running the three commands, the result of the SELECT statement displays an output
similar to the one that is shown in Figure 7-10.
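As a minimal sketch, three statements of this kind might look like the following (the table and
column names are hypothetical and are not the ones that are used in Example 7-4):

CREATE TABLE REPLICATION_TEST (ID INTEGER, DESCRIPTION VARCHAR(64));
INSERT INTO REPLICATION_TEST VALUES (1, 'HSR test entry');
SELECT * FROM REPLICATION_TEST;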
To perform a takeover to the destination system, you do not have to change any of the
parameters in the takeover wizard. The host and instance numbers are filled in
automatically. Click Finish to proceed, as shown in Figure 7-11.
Figure 7-11 SAP HANA System Replication: Takeover to the destination HANA instance
The takeover operation brings the destination system to its full active state, and you can now
connect to its DB and perform queries. In a controlled test, a takeover does not shut down the
DB on the source system, so applications can still send operations to it. Take precautions to
prevent applications from continuing to use the DB that is still running on the source system.
We encourage you to open the destination system SQL console, as explained in 7.3.1,
“Creating a test table and populating it” on page 116, and run the SELECT statement from
Example 7-4 on page 116. If you can see the created entry in the test table in your destination
system, this means that the replication is working properly.
Chapter 8. SAP HANA and IBM PowerHA SystemMirror
This chapter covers only scenarios with two systems with HSR.
Enabling DR is a business decision and not a technical one. However, if you want to automate
the HSR part to obtain a certain degree of high availability (HA) on scale-up systems, you can
achieve this HA by adding an operating system (OS) level HA solution. In this chapter, we cover
Red Hat Enterprise Linux with PowerHA SystemMirror as the HA layer to manage HSR on two
HANA servers.
This section guides you through setting up PowerHA SystemMirror on two HANA systems that
are physically located in two different failure domains. Those systems are running Red Hat
Enterprise Linux for SAP Solutions V7.4, PowerHA SystemMirror for Linux V7.2.2.2, and SAP
HANA V2.0.
This publication uses Red Hat Enterprise Linux V7.4. To check that you have a supported
combination of Red Hat Enterprise Linux and HANA, see SAP Note 2235581 and Red Hat
Customer Portal.
We recommend that you always check IBM Knowledge Center for PowerHA SystemMirror for
Linux before planning and setting up the cluster. Also, apply all the appropriate fixes to the OS
and PowerHA SystemMirror before proceeding.
Note: Because HA is a complex topic and software clustering is not trivial, this chapter is
intended to be a starting point that shows how PowerHA SystemMirror and HANA can
interact. It is not the authors' intention that users in the field use this chapter alone without
knowing the HA infrastructure that is part of the whole solution, or without expertise in
PowerHA SystemMirror and Linux.
2. To fix any missing prerequisites, install them by running yum, and then rerun the
prerequisite check from Example 8-1 to confirm that nothing is missing, as shown in
Example 8-2.
Example 8-2 Installing the missing prerequisites of PowerHA SystemMirror and checking them
# yum -y install perl-Sys-Syslog perl-Pod-Parser sg3_utils
# ./installPHA --onlyprereqcheck
checking :Prerequisites packages for PowerHA SystemMirror
Success: All prerequisites of PowerHA SystemMirror installed
installPHA: No installation only prerequisite check was performed .
3. After you successfully pass the prerequisite check on both nodes, continue with the
installation. On each node, run the installPHA command, as shown in
Example 8-3.
...
SNIP
...
The PowerHA SystemMirror software is now installed and you are ready to configure the
resources.
2. After you add the IP addresses and refresh the clcomd service on both nodes, you can
create the cluster. Run the commands on one node only. Although the clmgr command can
be run on any node of the cluster, in this example we run it on the primary node, which
holds the HANA DB in normal operations. Run the clmgr command to create the cluster
with nodes ph13na1 and ph13nb1, as shown in Example 8-5.
Configuring Cluster...
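A hedged sketch of the cluster creation command, using the cluster name and node names
from this chapter, follows (verify the exact clmgr syntax in IBM Knowledge Center):

# clmgr add cluster HDBH13 NODES=ph13na1,ph13nb1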
3. Run the clmgr command to check the information about the cluster, as shown in
Example 8-6.
Note: You can also see that there are some properties that sound interesting, such as
NFS_SERVER, DISK_WWID, and TIE_BREAKER. We describe a few of them in this section. For
more information, see IBM Knowledge Center.
4. Now that the cluster is created, create the resources that this cluster manages. In our
case, we are interested in PowerHA SystemMirror managing HSR and the virtual IP
address (VIPA). We use PowerHA SmartAssist to configure the HANA resource in the
PowerHA SystemMirror cluster. The information that is needed to set it up must be the
same that was used when configuring HSR. To set up the HANA resources with
SmartAssist, run clmgr on one node only, as shown in Example 8-7.
Example 8-7 Starting SmartAssist to manage SAP HANA System Replication on PowerHA
SystemMirror
# clmgr setup smart_assist APPLICATION=SAP_HANA SID=H13 INSTANCE=HDB13
Note: When using PowerHA SmartAssist, use the primary host names, not the aliases
or other names that do not match the primary host name on the nodes.
Primary names:
10.153.164.131 ph13na1.isicc.de.ibm.com ph13na1
10.153.164.136 ph13na2.isicc.de.ibm.com ph13na2
Other names that are defined:
10.153.164.151 hanapr.isicc.de.ibm.com hanapr
10.153.164.156 hanadr.isicc.de.ibm.com hanadr
Because the host names at the OS level are ph13na1 and ph13na2, you must use them
when configuring the nodes on the PowerHA SmartAssist, not hanapr and hanadr.
PowerHA SystemMirror plans to support a scenario where the nodes have separate host
names: the primary host name (for example, ph13na1 or ph13na2) is used to create the
PowerHA SystemMirror cluster, but a different host name (for example, hanapr or hanadr)
is used to install and configure HANA on both nodes. This support is intended to be
delivered in the PowerHA SystemMirror 7.2.3 release.
--------------------
Parameter overview
# Parameter Value
--------------------
0 Finish
X Cancel
When you confirm that the values are correct, select 0 (the number) to continue to deploy
the resources in the cluster. The output is similar to the one that is shown in Example 8-9.
EXPLANATION: The policy is valid and can be activated.
USER ACTION: No action required.
Are you sure you want to activate a new automation policy?
Yes(y)or No(n)?
If there are some existing PowerHA SystemMirror resources (for example, Persistent IP or
disk heartbeat) and the Activate option is used, then those existing resources are deleted
by SmartAssist.
Option 1 is the same as the clmgr add smart_assist command, and option 2 is the same
as the clmgr update smart_assist command.
To learn more about the meanings of the corresponding options of the clmgr command,
see IBM Knowledge Center.
You now have a simple PowerHA SystemMirror cluster that manages HSR.
Note: If you must start over, you can clean up all the resources that were created by
PowerHA SmartAssist by running clmgr (clmgr delete smart_assist with the relevant
values of the SAP ID (SID), instance, and so on).
All communication to decide which node survives is done by using Ethernet. How resilient
this communication network is and how likely it is to fail depend on many factors, including
the OS or virtualization configuration, switches, cabling, ISP, and many other items.
However, for a more robust solution you must add other communication technologies. This
section shows how you can add one shared disk to help decide which site will be the
surviving site to avoid split-brain scenarios.
It is important to know how the shared disk is configured in the storage subsystem. If it is not
resilient at either site or does not have a proper storage quorum, the situation can be worse.
Detailed planning for end-to-end robust HA is needed.
In our case, the LUN that is used for the tiebreaker is provided by an IBM Spectrum Virtualize
Enhanced Stretched Cluster. The LUN is mirrored on both sites by an IBM Storwize® V7000
system and has a quorum on a third site that both sites can reach independently. For details
about this setup, see Implementing the IBM System Storage SAN Volume Controller with IBM
Spectrum Virtualize V8.2.1, SG24-7933.
The shared disk must be visible on both systems. You can make it visible to one node and
give it a physical volume UUID, as shown in Example 8-10.
# pvdisplay /dev/mapper/360050768019085e54800000000000227
"/dev/mapper/360050768019085e54800000000000227" is a new physical volume of
"4.00 GiB"
--- NEW Physical volume ---
PV Name /dev/mapper/360050768019085e54800000000000227
VG Name
PV Size 4.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID FU2ZSI-LR0e-S72J-Ji2N-sTlw-WiW2-fcPNds
Now you can present the LUN to the other node and ensure that the data is the same when
querying with the pvdisplay command, as shown in Example 8-10 on page 126.
When you have the same physical volume UUID on both nodes, you can define it as a
tiebreaker disk and query the data on PowerHA SystemMirror on one node only, as shown in
Example 8-11.
Note: The command that is shown in Example 8-11 creates a disk-based heartbeat
network in addition to the network heartbeat that is enabled by default. There is a different
command to create a tiebreaker.
With PowerHA SystemMirror, you can use a Network File System (NFS) as a tiebreaker,
which is convenient for HANA environments because NFS is typically already used to serve
SAP interfaces. As with the LUN tiebreaker setup, the NFS service must be highly available
itself, and both sites must be able to access the NFS service independently.
Note: If there is an even number of nodes in a cluster, the user must configure a tiebreaker.
When the heartbeat cannot be determined by either the network or the disk, the tiebreaker
ensures that only one node continues; otherwise, both nodes become active. As a best
practice, configure the NFS tiebreaker. For more information, see the YouTube video
PowerHA on Linux NFS Tie Breaker Operations.
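The following is a hedged sketch of how a tiebreaker might be defined by using the
TIE_BREAKER, DISK_WWID, and NFS_SERVER cluster properties that are mentioned earlier in this
chapter (the NFS server name is a placeholder; verify the exact attribute syntax in IBM
Knowledge Center):

# Disk tiebreaker that uses the shared LUN from Example 8-10
clmgr modify cluster HDBH13 TIE_BREAKER=DISK \
    DISK_WWID=360050768019085e54800000000000227

# NFS tiebreaker variant (additional NFS attributes might be required)
clmgr modify cluster HDBH13 TIE_BREAKER=NFS NFS_SERVER=<NFS server>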
WARNING: MANAGE must be specified. Since it was not, a default of 'auto' will be used.
Cluster HDBH13 is running. We will try to bring the resource groups 'online' now, if any
exist.
There are three applications that were created by SmartAssist. You can show the applications
and details about them by running the clmgr command, as shown in Example 8-13.
To list all the resources and their status, run the clRGinfo command, as shown in
Example 8-14.
Note: You can view the detailed output of the resources status by running the following
command:
clRGinfo -e
Example 8-15 Moving the primary SAP HANA System Replication replica to a second node
# clmgr move resource_group SAP_HDB_H13_HDB13_sr_primary_rg NODE=ph13nb1
The command that is shown in Example 8-15 stops HSR in ph13na1, which holds the primary
role at this moment. The command makes the HSR replica the primary on ph13nb1 and
moves the VIPA from ph13na1 to ph13nb1. If ph13na1 remains active, the cluster configures the
HSR replica with ph13nb1 as the primary and ph13na1 as the secondary.
Now, HSR is reversed. For more information about how to check the HSR status, see
Chapter 7, “SAP HANA System Replication for high availability and disaster recovery
scenarios” on page 105.
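For a quick check from the command line, a hedged sketch follows (run as the instance
administration user of SID H13):

# su - h13adm -c "hdbnsutil -sr_state"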
IBM Systems Lab Services has a program that is called Power to Cloud Rewards that
specifically covers HANA and PowerHA SystemMirror. The program has no cost and includes
an onsite workshop with the customer. To learn more about how to qualify for this program,
see the IBM Power to Cloud Rewards Program website.
Even if you cannot qualify for the Power to Cloud Rewards program, you can still obtain the
expertise to design, deploy, and test your HANA and PowerHA SystemMirror solution with IBM
Systems Lab Services. For more information about the details of the offerings and contact
information, see IBM Systems Lab Services.
Chapter 9. SAP HANA and IBM VM Recovery Manager high availability and disaster recovery
A recovery orchestrator is the HA and DR software component (solution) that enables and
manages the recovery of an IT infrastructure if there is an outage. A recovery orchestrator is
easy to deploy and manage, and can perform repeatable recovery without disruptions.
IBM Power Systems offers a rich set of HA and DR solutions. A few more options were added
recently, and this chapter provides an overview of the various solutions and which ones can
be used to manage HA and DR for SAP HANA.
Figure 9-1 shows a cluster DR model that is contrasted with the virtual machine (VM) Restart
DR model (the figure shows the DR solutions, but the models also apply to HA, except that in
the case of HA no replication is involved). Figure 9-1 shows that the entire VM (including the
system disk, rootvg, data disks, and so on) is replicated by using storage replication methods.
These copies of the VMs are used during a disaster to start the VMs on the DR site. The OSes
in these VMs start, and then the workload is started to return to normal operations. This model
is more suited for cloud deployments and can scale to allow for DR management of the entire
data center.
VM Restart Manager-based HA involves a similar concept of restarting the VM or logical
partition (LPAR) on another host within a data center, and it relies on a storage image that is
shared between the hosts to start the VM. Also, in the case of HA, VMs are restarted
automatically, as compared to the manual restart in DR cases.
IBM Power Systems servers now offer both types of HA and DR solutions for the PowerVM
platform, as shown in Figure 9-2.
Figure 9-2 High availability and disaster recovery offerings from Power Systems servers
9.2 Power Systems HA and DR solutions for SAP HANA
This section takes a closer look at the various Power Systems HA or DR solutions that
support SAP HANA HA/DR management.
You can use PowerHA SystemMirror for Linux to manage HA for an SAP HANA System
Replication-based environment. It supports both HANA replication-based hot standby and cold
restart-based HANA deployments. PowerHA provides a wizard to configure HA policies for an
SAP HANA environment.
For more information, see Chapter 8, “SAP HANA and IBM PowerHA SystemMirror” on
page 119 and IBM Knowledge Center.
For more information, see Implementing High Availability and Disaster Recovery Solutions
with SAP HANA on IBM Power Systems, REDP-5443.
Figure 9-3 shows failure of a host and the resulting failover within the host group.
An administrator can enable any of the monitoring options that are available:
Host failure detection: This option detects failures of hosts and moves the VMs to the
remaining hosts in the host group. This is the default detection that is performed for VMs
(it applies to AIX, Linux, and IBM i LPARs on the PowerVM platform).
VM failure detection: An administrator can optionally choose to detect failures of VMs. To
enable this function, the administrator must install and initialize the VM agent component
inside the AIX or Linux VMs (LPARs).
Application failure detection: An administrator can optionally use the application
monitoring (AppMon) framework of the VM agent to register and monitor the health of the
applications inside the VMs.
VM Recovery Manager HA monitors for failures and takes corrective actions based on a few
key components, as shown in Figure 9-4.
The administrator must install three key components before initializing and using VM
Recovery Manager HA:
VM agent: The VM agent enables the VM health monitoring and AppMon framework. The VM
agent is provided for AIX and Linux. The administrator installs this component first and
initializes it (if they plan to use VM or application monitoring). The VM agent can be initialized
and managed by using a single command inside the VM (called ksysvmmgr).
KSYS: Install the KSYS software in an AIX LPAR. The KSYS software installation also
deploys GUI-related agent software.
GUI server: This software can be installed in the KSYS itself or in another AIX LPAR.
After installation, the administrator can start the browser and connect to
http://ksys_hostname:3000. Use the KSYS LPAR login credentials to log in and then follow
the instructions to deploy a host group and enable HA management.
For more information about the installation and configuration of VM Recovery Manager HA,
see IBM Knowledge Center.
Application monitoring framework
The AppMon framework inside VM enables simplified monitoring and management of
applications running in the LPAR, as shown in Figure 9-5.
The VM administrator can register and monitor any application inside the LPAR. To do this
task, the administrator registers the application by running the ksysvmmgr command. As part
of the registration, the administrator provides three methods to manage the application:
Start: This method allows AppMon to start the application.
Stop: When invoked, this method stops the application.
Monitor: AppMon calls this method periodically (every 30 seconds) to monitor the health of
the application. Based on the monitor return status, the application status is marked as
green, yellow, or red, and the appropriate action is taken.
Health monitoring of an application and the resulting actions are shown in Figure 9-6.
Figure 9-6 Application health monitoring (green →yellow →red status changes)
VM Recovery Manager HA provides HA agents for key middleware products. One of the
supplied agents manages cold restart-based SAP HANA HA. The next few sections describe
the installation and HA management of SAP HANA by using this agent.
d. Install all the required RPMs while running ./hdblcm. The required RPMs for SUSE
Linux Enterprise Server and Red Hat Enterprise Linux are as follows:
• SUSE Linux Enterprise Server
libgomp1-7.3.1+r258313-6.1.ppc64le.rpm
libstdc++6-4.2.1-3mdv2008.0.ppc.rpm
glibc-2.28-497.2.ppc64le.rpm
IBM_XL_C_CPP_V13.1.5.1_LINUX_RUNTIME
• Red Hat Enterprise Linux
libxlc-13.1.6.1-171213.ppc64le.rpm
compat-sap-c++-6-6.3.1-1.el7_3.ppc64le.rpm
libtool-ltdl-2.4.6-25.fc29.ppc64le.rpm
2. Select all the default parameters (or defined values if you have any) except for SAP
System ID, SAP Admin/User, and Instance number. You can use the default values for the
SAP System ID, SAP Admin/User, and Instance number, but it is a best practice to provide
meaningful and planned values for them because they are used when the SAP system is
scaled up.
3. When prompted for the passwords for the system admin, SAP admin, and database (DB)
user, provide and confirm the passwords, and then confirm the details by pressing y to
proceed.
4. After waiting for approximately 15 - 20 minutes, you receive a notice about the
installation’s success.
5. To confirm that everything installed correctly, log in again to the host by using the newly
created SAP Admin user, and then use the following commands to verify the installation:
a. Check the SAP HANA status by running the following command:
sapcontrol -nr 02 -function GetSystemInstanceList
Where 02 is the instance number (change to the instance number that you provided
during the installation). You see OK if everything is working correctly.
Note: When running these commands as a non-SAP user, prefix the command with
/usr/sap/hostctrl/exe/ because the command is not found otherwise.
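For example, a non-SAP user (such as root) runs the same check as follows:

/usr/sap/hostctrl/exe/sapcontrol -nr 02 -function GetSystemInstanceList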
7. To start saphana, run one of the following commands:
CMD="${EXE_DIR}/sapcontrol -nr ${INSTANCE_NO} -function StartService ${SID}" (without wait time)
CMD="${EXE_DIR}/sapcontrol -nr ${INSTANCE_NO} -function WaitforServiceStarted 5 0" (with wait time)
Where the terms are defined as follows:
EXE_DIR /usr/sap/hostctrl/exe (default), /home/hana/shared/${SAP_SID}/${INSTANCE_NAME}/exe (installed path), or /home/hana/shared/${SAP_SID}/exe/linuxppc64le/hdb
INSTANCE_NO 01 (configurable)
SID/SAP_SID S01 (configurable)
INSTANCE_NAME HDB01 (the number is the same as INSTANCE_NO)
The agent uses the default SAP HANA agent scripts that are in the /usr/sbin/agents/sap
directory.
Alternatively, a user can use their own scripts by running the ksysvmmgr command:
ksysvmmgr add app sapapp <application_name> monitor_script=<monitor_script_path>
start_script=<start_script_path> stop_script=<stop_script_path>
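For illustration, a hypothetical registration that follows this syntax might look like the
following (the application name and script paths are placeholders):

ksysvmmgr add app sapapp saphana_s01 \
    monitor_script=/usr/local/bin/monitor_hana.sh \
    start_script=/usr/local/bin/start_hana.sh \
    stop_script=/usr/local/bin/stop_hana.sh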
The following processes start:
s01adm 50381 50365 4 01:10 ? 00:04:56 hdbnameserver
s01adm 50531 50365 0 01:11 ? 00:00:42 hdbcompileserver
s01adm 50533 50365 0 01:11 ? 00:00:41 hdbpreprocessor
s01adm 50577 50365 5 01:11 ? 00:05:43 hdbindexserver -port 30103
s01adm 50579 50365 1 01:11 ? 00:01:11 hdbxsengine -port 30107
s01adm 51201 50365 0 01:11 ? 00:00:43 hdbwebdispatcher
Stop script
The SAP HANA stop script uses type=SAPHANA username=S01 database=HDB01 to stop the
SAP HANA instance. It takes about 1 minute to stop and end all SAP processes. In this
example, we use 100 seconds (the default) for the SAP HANA stop script run.
After the stop script finishes, the SAP processes are stopped and no longer appear in the
process list, as shown in the following output:
# ps -aef | grep s01adm
s01adm 3111 1 0 Jun11 ? 00:02:38 /usr/sap/S01/HDB01/exe/sapstartsrv
pf=/usr/sap/S01/SYS/profile/S01_HDB01_bolts021.ausprv.stglabs.ibm.com -D -u
s01adm
s01adm 4216 1 0 02:55 ? 00:00:00 hdbrsutil -f -D -p 30101 -i 1537167311
s01adm 4249 1 0 02:55 ? 00:00:00 hdbrsutil -f -D -p 30103 -i 1537167313
root 4455 48831 0 02:56 pts/0 00:00:00 grep --color=auto s01adm
Monitor script
The monitor script checks the status of the SAP HANA instance. If the process returns a
green online status, then the check is a success; any other returned status is a failure.
The start and stop stabilization time must be more than the time that is required for the
script to run, that is, the start stabilization time must be more than 150 seconds, and the
stop stabilization time must be more than 100 seconds.
The SAP HANA monitor script checks the SAP HANA instance and returns the running
process list and the processes’ status. The processes are:
– hdbdaemon
– hdbcompileserver
– hdbindexserver
– hdbnameserver
– hdbpreprocessor
– hdbxsengine
The application status can be verified by checking the process information by running the
ps -aef command and filtering for the SAP administration user (s01adm), as shown in the
following output:
# ps -aef | grep s01adm
s01adm 3111 1 0 Jun11 ? 00:02:38
/usr/sap/S01/HDB01/exe/sapstartsrv
pf=/usr/sap/S01/SYS/profile/S01_HDB01_bolts021.ausprv.stglabs.ibm.com -D -u
s01adm
root 3741 48831 0 02:54 pts/0 00:00:00 grep --color=auto s01adm
s01adm 50357 1 0 01:10 ? 00:00:00 sapstart
pf=/usr/sap/S01/SYS/profile/S01_HDB01_bolts021.ausprv.stglabs.ibm.com
s01adm 50365 50357 0 01:10 ? 00:00:00
/usr/sap/S01/HDB01/bolts021.ausprv.stglabs.ibm.com/trace/hdb.sapS01_HDB01 -d
-nw -f /usr/sap/S01/HDB01/bolts021.ausprv.stglabs.ibm.com/daemon.ini
pf=/usr/sap/S01/SYS/profile/S01_HDB01_bolts021.ausprv.stglabs.ibm.com
s01adm 50381 50365 4 01:10 ? 00:04:56 hdbnameserver
s01adm 50531 50365 0 01:11 ? 00:00:42 hdbcompileserver
s01adm 50533 50365 0 01:11 ? 00:00:41 hdbpreprocessor
s01adm 50577 50365 5 01:11 ? 00:05:43 hdbindexserver -port 30103
s01adm 50579 50365 1 01:11 ? 00:01:11 hdbxsengine -port 30107
s01adm 51201 50365 0 01:11 ? 00:00:43 hdbwebdispatcher
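As an illustration of the kind of check that the monitor script performs, the following is a
minimal sketch and not the actual agent script that is shipped in the /usr/sbin/agents
directories; the instance number is assumed to be 01, and the interpretation of the sapcontrol
return code is based on the GetProcessList output that is shown later in this chapter:

#!/bin/sh
# Hypothetical health check modeled on the monitor behavior described above.
EXE_DIR=/usr/sap/hostctrl/exe      # host agent sapcontrol binary
INSTANCE_NO=01                     # assumed instance number

# GetProcessList returns rc=3 when all HDB processes are running (GREEN).
${EXE_DIR}/sapcontrol -nr ${INSTANCE_NO} -function GetProcessList > /dev/null 2>&1
rc=$?

if [ "${rc}" -eq 3 ]; then
    echo "SAP HANA instance is running (GREEN)"
    exit 0     # success: the application is marked green
else
    echo "SAP HANA instance is not healthy (sapcontrol rc=${rc})"
    exit 1     # failure: the application is marked yellow or red
fi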
LOG :: /usr/sbin/agents/saphana/startsaphana :: Tue Apr 24 01:47:43 EDT 2018 ::
SAP_HANA is not running. Starting SAP hana
(The start script is called to start SAP HANA.)
LOG :: /usr/sbin/agents/saphana/startsaphana :: Tue Apr 24 01:47:43 EDT 2018 ::
Calling doStart()...
LOG :: /usr/sbin/agents/sap/saphdbctrl :: Tue Apr 24 01:47:43 EDT 2018 ::
saphdb_ci_start start issued.
LOG :: /usr/sbin/agents/sap/sapsrvctrl :: Tue Apr 24 01:47:43 EDT 2018 :: Enter
function Control_sapstartsrv() Start.\n
LOG :: /usr/sbin/agents/sap/sapsrvctrl :: Tue Apr 24 01:47:43 EDT 2018 ::
Control_sapstartsrv sapcontrol StartService rc=0:
StartService
OK .\n
LOG :: /usr/sbin/agents/sap/saphdbctrl :: Tue Apr 24 01:47:43 EDT 2018 ::
saphdb_ci_start: sapsrvctrl -a start -p HDB S01_HDB01 OpState=0.
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:47:43 EDT 2018 ::
Enter function Control_instance() Start_cmd.\n
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:47:43 EDT 2018 ::
Control_instance sapcontrol Start rc=0:
24.04.2018 01:47:43
Start
OK
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:48:55 EDT 2018 ::
Executed '/bin/su - s01adm -c
/home/hana/shared/S01/exe/linuxppc64le/hdb/sapcontrol -host localhost -nr 01
-function WaitforStarted 120 1' returncode: 0
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:48:55 EDT 2018 ::
Start instance returned with a returncode of 0.
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:48:55 EDT 2018 ::
Start completed successfully.
LOG :: /usr/sbin/agents/sap/saphdbctrl :: Tue Apr 24 01:48:55 EDT 2018 :: SAP
HDB Start done. rc:0
LOG :: /usr/sbin/agents/saphana/startsaphana :: Tue Apr 24 01:48:55 EDT 2018 ::
sap hana instance started !!
LOG :: /usr/sbin/agents/saphana/startsaphana :: Tue Apr 24 01:48:55 EDT 2018 ::
sap hana instance started successfully. SAP HANA started successfully.
LOG :: /usr/sbin/agents/saphana/monitorsaphana :: Tue Apr 24 01:48:57 EDT 2018
:: check status of sap hana instance.
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:48:57 EDT 2018 ::
Enter function Control_instance() Check_cmd.\n
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:48:58 EDT 2018 ::
Control_instance sapcontrol GetProcessList rc=3:
24.04.2018 01:48:58
GetProcessList
OK
0 name: hdbdaemon
0 description: HDB Daemon
0 dispstatus: GREEN
(The monitor script is called again to check the status. The status is GREEN.)
0 textstatus: Running
0 starttime: 2018 04 24 01:47:44
0 elapsedtime: 0:01:14
0 pid: 54774
1 name: hdbcompileserver
1 description: HDB Compileserver
1 dispstatus: GREEN
1 textstatus: Running
1 starttime: 2018 04 24 01:47:51
1 elapsedtime: 0:01:07
1 pid: 54973
2 name: hdbindexserver
2 description: HDB Indexserver-S01
2 dispstatus: GREEN
2 textstatus: Running
2 starttime: 2018 04 24 01:47:52
2 elapsedtime: 0:01:06
2 pid: 55013
3 name: hdbnameserver
3 description: HDB Nameserver
3 dispstatus: GREEN
3 textstatus: Running
3 starttime: 2018 04 24 01:47:45
3 elapsedtime: 0:01:13
3 pid: 54839
4 name: hdbpreprocessor
4 description: HDB Preprocessor
4 dispstatus: GREEN
4 textstatus: Running
4 starttime: 2018 04 24 01:47:51
4 elapsedtime: 0:01:07
4 pid: 54975
5 name: hdbwebdispatcher
5 description: HDB Web Dispatcher
5 dispstatus: GREEN
5 textstatus: Running
5 starttime: 2018 04 24 01:48:40
5 elapsedtime: 0:00:18
5 pid: 55588
6 name: hdbxsengine
6 description: HDB XSEngine-S01
6 dispstatus: GREEN
6 textstatus: Running
6 starttime: 2018 04 24 01:47:52
6 elapsedtime: 0:01:06
6 pid: 55015
LOG :: /usr/sbin/agents/sap/sapstartctrl :: Tue Apr 24 01:48:58 EDT 2018 :: The
instance is running.
LOG :: /usr/sbin/agents/sap/saphdbctrl :: Tue Apr 24 01:48:58 EDT 2018 ::
saphdb_ci_status sapstartctrl -a status -p HDB S01_HDB01 OpState=1.
LOG :: /usr/sbin/agents/saphana/monitorsaphana :: Tue Apr 24 01:48:58 EDT 2018
:: SAP_HANA is already running.
LOG :: /usr/sbin/agents/saphana/monitorsaphana :: Tue Apr 24 01:48:58 EDT 2018
:: sap hana is monitorable.
Appendix A
These settings can be forgotten or ignored. To fill this gap, a tool that is called HOH was
developed (in a manner that is not supported by IBM or anyone else), which checks multiple
configuration settings.
This tool was developed with maintenance in mind, so the settings that it checks are not part
of the HOH core, but are defined in the JSON files that come with it. Hence, the tool is easy
to maintain when any of the vendors change their recommendations.
Note: This tool is not an official one, and is not supported by anybody. If you choose to run
it, you run it at your own risk and accept all responsibility for it.
What it checks
At the time of writing, the current HOH version is Version 1.17. The tool can be found on
GitHub.
If you already downloaded HOH in the past and want to update to the latest version, run
git pull inside the directory where it was cloned, as shown in Example A-2.
To run the tool, go to the cloned repository and call it directly by passing one of the storage
types (XFS, IBM Enterprise Storage Server®, or Network File System (NFS)), as shown in
Example A-3.
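A hedged sketch of these update and run steps follows (the repository directory and script
name are hypothetical):

cd hoh          # directory where the repository was cloned (hypothetical name)
git pull        # update an existing clone to the latest version
./hoh.py xfs    # run the checks, passing the storage type: xfs, ess, or nfs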
The purpose of HOH is to supplement the official tools like HWCCT not to
substitute them, always refer to official documentation from IBM, SuSE/RedHat, and
SAP
You should always check your system with latest version of HWCCT as explained on
SAP note:1943937 - Hardware Configuration Check Tool - Central Note
This software comes with absolutely no warranty of any kind. Use it at your own
risk
When you choose to continue at your own risk, the tool generates an output for your system.
Example output from a SUSE Linux Enterprise Server 12 SP2 system is shown in Example A-4.
OK: SUSE Linux Enterprise Server 12 SP2 is a supported OS for this tool
The following individual SAP Notes recommendations are available via sapnote
Consider enabling ALL of them, including 2161991 as only sets NOOP as I/O
scheduler
OK: ppc64-diag installation status is as expected
There are multiple issues to fix in Example A-4 on page 149. Address them and run the tool
again until it reports no errors. Then, you are ready to proceed to the next step.
Note: As a reminder about time synchronization, be sure that it is configured at the
timedatectl level, and not with ntpd or chrony only. Hint: Run timedatectl set-ntp 1.
As a final comment on HOH, this tool is not supported by IBM or anyone else because it is a
collaborative effort. If you have questions, bug reports, requests, and so on, add them to the
tool's GitHub page.
multipaths {
#ROOTVG
multipath {
wwid 3600507640081811fe800000000003e4a
alias ROOTVG
}
#HANA DATA
multipath {
wwid 3600507640081811fe800000000003e7a
alias HANA_DATA_1_1
}
multipath {
wwid 3600507640081811fe800000000003e79
alias HANA_DATA_1_2
}
multipath {
wwid 3600507640081811fe800000000003e78
alias HANA_DATA_1_3
}
multipath {
wwid 3600507640081811fe800000000003e77
alias HANA_DATA_1_4
}
#HANA LOG
multipath {
wwid 3600507640081811fe800000000003e7e
alias HANA_LOG_1_1
}
multipath {
wwid 3600507640081811fe800000000003e7d
alias HANA_LOG_1_2
}
multipath {
wwid 3600507640081811fe800000000003e7c
alias HANA_LOG_1_3
}
multipath {
wwid 3600507640081811fe800000000003e7b
alias HANA_LOG_1_4
}
#HANA SHARED
multipath {
wwid 3600507640081811fe800000000003e7f
alias HANA_SHARED01
}
}
devices {
device {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio "alua"
path_checker "tur"
path_selector "service-time 0"
failback "immediate"
rr_weight "priorities"
no_path_retry "fail"
rr_min_io_rq 32
dev_loss_tmo 600
fast_io_fail_tmo 5
}
}
This is a base example that was tested with both Red Hat Enterprise Linux 7.x and SUSE
Linux Enterprise Server 12/15 series. Always refer to the official documentation of the specific
storage and versions that you are using before using this example in production.
Note: For all multipath.conf files, perform the following tests to verify that the settings
behave as expected and to converge on the correct configuration:
1. Perform a rolling takeover and reintegration of the Virtual I/O Server (VIOS).
Expectation: path recovery. (Simulate a rolling VIOS upgrade. The start and stop
sequence of the VIOS must match the time that is typically needed for VIOS maintenance.)
2. Pull the cable and reattach it.
3. Simulate a rolling maintenance of the storage head nodes and check whether the paths
are recovered in SUSE Linux Enterprise Server.
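After you edit /etc/multipath.conf, the following is a hedged sketch of how to apply and
verify the configuration (the device aliases are the ones from the example above):

# Reload the multipath configuration without rebooting
systemctl reload multipathd

# List the multipath devices and confirm that the aliases and paths are present
multipath -ll | grep -E "ROOTVG|HANA_"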
Appendix C. SAP HANA software stack installation for a scale-out scenario
This appendix also provides the scale-out prerequisites that you must meet when you plan to
use the Storage Connector API for sharing the data and log areas among the cluster nodes.
The HANA binary files are installed in the /hana/shared directory, which is shared among all
the cluster nodes. As such, there is no duplication of the binary files on each node. After
installation, each worker node has an entry inside the /hana/data/<SID> and
/hana/log/<SID> directories, in the form of mntNNNN, characterizing a cluster-based layout of
data and logs.
If you are using a shared storage approach, Elastic Storage Server, or Network File System
(NFS), you do not need any special configuration for installing HANA. If you are using the
storage connector API, then you must start the installer with a setup file, as described in
“Storage Connector API setup” on page 164.
Note: When using shared storage for the HANA data and log areas, Elastic Storage
Server, or NFS, validate that these file systems are mounted on all nodes before installing
HANA. If you use the storage connector for the data and log areas, check that they are
unmounted on all nodes before installing HANA.
Prerequisites
Your nodes must comply with the following prerequisites before you start a HANA scale-out
installation:
The date and time of all the nodes must be synchronized. Use a suitable Network Time
Protocol (NTP) server to comply with this requirement. If you do not have any NTP servers
available, one of the nodes can act as one.
Ensure that all nodes can ping one another by name by using both their short and fully
qualified host names.
A scale-out environment is characterized by a true cluster that is built at the application
layer, that is, the HANA database (DB). To ease the management of the cluster, set up
password-less communication among the cluster nodes.
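For example, a minimal sketch of one way to set up password-less root SSH from the master
node to the other nodes follows (the host names are from our environment; adapt them as
needed):

# Generate a key pair on the master node (only if one does not exist yet)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Distribute the public key to the other cluster nodes
for node in hana006 hana007 hana008; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${node}
done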
All our installations use a four-node cluster with three worker nodes (saphana005, hana006,
and hana007), and one standby node (hana008).
Scale-out graphical installation
To start a graphical installation of HANA, follow the instructions in 6.2.1, “GUI installation” on
page 86 until you get to the window that is shown in Figure 6-5 on page 90. Perform a
Multiple-Host System installation instead, as shown by 1 in Figure C-1. Then, check that the
root user and password are entered correctly, as shown by 2. Keep the installation path as
/hana/shared, and then click Add Host, as shown by 3, to add the other nodes into the cluster.
The node where you are running the wizard becomes the master node.
Every time that you click Add Host, a window similar to Figure C-2 opens. Add one node at a
time by using its host name, which is shown by 1, and select the appropriate node role, which
is shown by 2. There is no need to change the other parameters.
In our scenario, we have two more worker nodes and one standby node. So, we perform the
add node step three times until we have a layout with all of our nodes, as shown by 3 in
Figure C-1 on page 159.
The remainder of the installation process looks the same as a scale-up installation from this
point. You can resume the installation by following the steps from Figure 6-6 on page 91
onward.
Choose an action
-----------------------------------------------
1 | install | Install new system
2 | extract_components | Extract components
3 | Exit (do nothing) |
---------------------------------------------------------------------------------
1 | server | No additional components
2 | all | All components
3 | afl | Install SAP HANA AFL (incl.PAL,BFL,OFL,HIE) version
2.00.010.0000.1491308763
4 | client | Install SAP HANA Database Client version 2.1.37.1490890836
5 | smartda | Install SAP HANA Smart Data Access version 2.00.0.000.0
6 | xs | Install SAP HANA XS Advanced Runtime version 1.0.55.288028
7 | epmmds | Install SAP HANA EPM-MDS version 2.00.010.0000.1491308763
Select roles for host 'hana007':
Enter Certificate Host Name For Host 'saphana005' [saphana005]:
Enter Certificate Host Name For Host 'hana006' [hana006]:
Enter Certificate Host Name For Host 'hana007' [hana007]:
Enter Certificate Host Name For Host 'hana008' [hana008]:
Enter System Administrator (rb1adm) Password: ********
Confirm System Administrator (rb1adm) Password: ********
Enter System Administrator Home Directory [/usr/sap/RB1/home]:
Enter System Administrator Login Shell [/bin/sh]:
Enter System Administrator User ID [1001]:
Enter Database User (SYSTEM) Password: ********
Confirm Database User (SYSTEM) Password: ********
Restart system after machine reboot? [n]:
Additional Hosts
hana008
Role: Database Standby (standby)
High-Availability Group: default
Worker Group: default
Storage Partition: N/A
hana007
Role: Database Worker (worker)
High-Availability Group: default
Worker Group: default
Storage Partition: <<assign automatically>>
hana006
Role: Database Worker (worker)
High-Availability Group: default
Worker Group: default
Storage Partition: <<assign automatically>>
Installing components...
Installing SAP HANA Database...
Preparing package 'Saphostagent Setup'...
Log file written to
'/var/tmp/hdb_RB1_hdblcm_install_2017-07-11_21.57.20/hdblcm.log' on host
'saphana005'.
Example C-2 A global.ini file to be used at installation time for using the logical volume manager
storage connector
# cat /hana/shared/global.ini
[storage]
ha_provider = hdb_ha.fcClientLVM
partition_1_data__lvmname = hanadata01-datalv01
partition_1_log__lvmname = hanalog01-loglv01
partition_2_data__lvmname = hanadata02-datalv02
partition_2_log__lvmname = hanalog02-loglv02
partition_3_data__lvmname = hanadata03-datalv03
partition_3_log__lvmname = hanalog03-loglv03
partition_*_*__prtype = 5
partition_*_*__mountoptions = -t xfs
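With this global.ini file in place, a hedged sketch of starting the installer so that it picks up
the storage connector configuration follows; the --storage_cfg option points to the directory
that contains the file (verify the option against your hdblcm version):

# Run from the directory that contains the extracted HANA installation files
./hdblcm --storage_cfg=/hana/shared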
Postinstallation notes
After installing HANA on the scale-out cluster, you can connect to it by using the HANA Studio
interface. The process for adding the HANA instance is the same as outlined in 6.3,
“Postinstallation notes” on page 101. When you add the instance, use the master node of
your cluster (the one from which you ran the installation) as the node to which to connect.
Note: When you add the system to HANA Studio, you must select Multiple containers
because HANA V2.0 SPS01 uses the multiple containers (multitenant) DB mode.
Otherwise, you see an error message.
After adding the instance in HANA Studio, go to the Landscape → Services tab to
confirm that the services are distributed among all the nodes, as shown in Figure C-3.
Also, review the Landscape → Hosts tab, as shown in Figure C-4. Node hana008 is
displayed as STANDBY for the services for our installation.
As a best practice, perform failover tests by shutting down the HANA service on one of the
worker nodes, or by shutting down the node itself, and observe that the standby node takes
over its role. Open a HANA Studio connection to another node that is still running to check the
cluster status.
Note: A scale-out cluster can handle only as many simultaneous node outages as the
number of standby nodes in the cluster. For example, if you have only one standby node,
you can sustain an outage of a single node. If two nodes fail at the same time, your HANA
DB is brought offline. If you must protect your business against the failure of multiple nodes
at the same time, add as many standby nodes as you need.
Related publications
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
Implementing High Availability and Disaster Recovery Solutions with SAP HANA on IBM
Power Systems, REDP-5443
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V8.2.1, SG24-7933
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
IBM Infrastructure for SAP HANA
https://www.ibm.com/it-infrastructure/power/sap-hana
IBM PowerVM
https://www.ibm.com/us-en/marketplace/ibm-powervm
IBM Service and productivity tools
https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
Red Hat Enterprise Linux evaluation
https://access.redhat.com/products/red-hat-enterprise-linux/evaluation
SAP HANA
https://www.sap.com/products/hana/implementation/sizing.html
SAP HANA System Replication in pacemaker cluster
https://access.redhat.com/articles/3004101