Boe330 en Col17
PARTICIPANT HANDBOOK
INSTRUCTOR-LED TRAINING
Course Version: 17
Course Duration: 5 Day(s)
Material Number: 50157327
SAP Copyrights, Trademarks and Disclaimers
No part of this publication may be reproduced or transmitted in any form or for any
purpose without the express permission of SAP SE or an SAP affiliate company.
SAP and other SAP products and services mentioned herein as well as their
respective logos are trademarks or registered trademarks of SAP SE (or an SAP
affiliate company) in Germany and other countries. Please see https://
www.sap.com/corporate/en/legal/copyright.html for additional trademark
information and notices.
Some software products marketed by SAP SE and its distributors contain proprietary
software components of other software vendors.
National product specifications may vary.
These materials may have been machine translated and may contain grammatical
errors or inaccuracies.
These materials are provided by SAP SE or an SAP affiliate company for
informational purposes only, without representation or warranty of any kind, and SAP
SE or its affiliated companies shall not be liable for errors or omissions with respect
to the materials. The only warranties for SAP SE or SAP affiliate company products
and services are those that are set forth in the express warranty statements
accompanying such products and services, if any. Nothing herein should be
construed as constituting an additional warranty.
In particular, SAP SE or its affiliated companies have no obligation to pursue any
course of business outlined in this document or any related presentation, or to
develop or release any functionality mentioned therein. This document, or any related
presentation, and SAP SE’s or its affiliated companies’ strategy and possible future
developments, products, and/or platform directions and functionality are all subject
to change and may be changed by SAP SE or its affiliated companies at any time for
any reason without notice. The information in this document is not a commitment,
promise, or legal obligation to deliver any material, code, or functionality. All forward-
looking statements are subject to various risks and uncertainties that could cause
actual results to differ materially from expectations. Readers are cautioned not to
place undue reliance on these forward-looking statements, which speak only as of
their dates, and they should not be relied upon in making purchasing decisions.
Typographic Conventions
Demonstration
Procedure
Warning or Caution
Hint
Facilitated Discussion
Unit 10: Disaster Recovery in the SAP Business Intelligence Platform 4.3
TARGET AUDIENCE
This course is intended for the following audiences:
● Systems Architect
Lesson 1
Describing the BI Platform Architecture, System, and System Elements
Lesson 2
Identifying Key Architecture Flows
Lesson 3
Reviewing SAP BusinessObjects BI Platform Security
Lesson 4
SAP BusinessObjects 4.3 and SAP Analytics Cloud Hybrid Solution
Lesson 5
SAP BusinessObjects BI Platform Support Tool
UNIT OBJECTIVES
LESSON OVERVIEW
This lesson describes the SAP BusinessObjects Business Intelligence (BI) platform
architecture, BI systems, and the elements contained within BI systems. It also describes key
benefits and features of the BI Suite.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the SAP strategic focus on BI
● Describe the BI suites
● Describe the BI platform system
● Describe the integration of the BI platform suite
Figure 1: BI Challenges
Strategic Focus on BI
A solid BI strategy is critical to the consolidation of BI assets and effort across any company.
Major barriers to building a successful BI strategy include the following problems:
● IT is not aligned with the business strategy.
● The organization makes decisions about technology or architecture structures without
considering the business problems that the organization is trying to solve.
Figure 2: BI Suite
● Data discovery
● Analysis
● Dashboards
● Reporting
● Visualization
● Predictive capabilities
● Delivery to mobile devices
The Evolution of BI
The use of business data has evolved significantly from simply gathering raw data, to more
complex efforts to use data for analysis and gaining business insights. Initially, business data
could only inform companies about what was happening with their businesses. As reporting
and user tools matured, organizations began to learn why certain activities occurred within
their businesses. More sophisticated analysis tools allowed organizations to determine what
would happen with the future of their businesses. As analytic capabilities in the tools became
more mature, organizations could predict what paths would be best for their business and
optimize reporting, analysis, and prediction to gain valuable insights into the operation of their
companies.
● Instant mobile BI
● Self-service data discovery
● Predictive analysis
Benefits of a BI Strategy to IT
Successful implementation of SAP BI solutions positively impacts business users in the
following ways:
● IT can align itself with business partners and formal business needs.
● The IT organization can create prioritized roadmaps:
- Plan for short-, medium-, and long-term projects
- Align with strategic business goals
- Deliver measurable results
● The IT organization can create business justifications for an enterprise-wide scope and an
end-to-end BI approach, including data management.
Figure 4: BI Platform
The SAP customer base has three core sets of BI client requirements: Agile Visualization,
Dashboards and Applications, and Reporting.
Within each of these requirement areas, SAP has products designed to meet the specific
needs of its large and diverse customer base.
Engaging Experiences
The following are possible with SAP products:
Information Distribution
The following are possible with SAP products:
Solution Areas
The following solution areas are covered:
Discovery
This area is focused on analysts and users within lines of business and departments. This
category allows users to acquire, cleanse, and visualize data with minimal end user
training. The interfaces have limited layout capabilities, but include powerful faceted
exploration and navigation. Solutions in this category are a great fit for in-memory data
sources, such as SAP HANA, allowing users to go anywhere in the data.
Analysis
Workflows are the main focus of solutions in this category, but some layout is available.
They are designed to take advantage of modeled multi-dimensional sources (SAP HANA
and SAP BW) and data in Microsoft SQL Server Analysis Services. Solutions in this
category are designed from the ground up as OLAP clients, and are the best solution for
projects with demanding hierarchical navigation requirements.
Dashboards
The dashboards category focuses on the needs of IT and developers to create and deploy
interactive, visual dashboards. Unlike discovery and analysis, the dashboards are
typically built for others to consume using highly visual layout tools.
Reporting
Both SAP Crystal Reports and Web Intelligence are focused on the mass distribution of
formatted data, with the primary difference being the report design experience. SAP
Crystal Reports uses a desktop-based report designer, while the primary Web Intelligence
design experience is over the web. Both solutions allow for scheduled or on-demand
reporting.
Predictive
The solutions in this category include SAP Analytics Cloud (SAC).
BI Platform Terms
The following terms refer to the types of software running on a BI platform:
Server
This operating-system-level process (on UNIX systems, a daemon) hosts one or more
services. A server runs under a specific operating system account and has its own
process ID (PID).
Examples of a server include Central Management Server (CMS) and Adaptive
Processing Server.
Service
This server subsystem performs a specific function. The service runs within the memory
space of its server under the PID of the parent server.
Examples of a service include the Web Intelligence Scheduling Service, which is a
subsystem that runs within the Adaptive Job Server.
Node
This is a collection of BI platform servers running on the same host and managed by the
same Server Intelligence Agent (SIA). One or more nodes can be on a single host.
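The containment relationship among these three terms (a service runs inside a server, servers belong to a node) can be sketched as a simple data model. This is an illustrative sketch only; the class names and the example PID are invented, not part of any SAP API:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """A subsystem that runs inside the memory space of its parent server."""
    name: str

@dataclass
class Server:
    """An OS-level process with its own PID, hosting one or more services."""
    name: str
    pid: int
    services: list = field(default_factory=list)

@dataclass
class Node:
    """A collection of servers on one host, managed by one SIA."""
    sia_name: str
    servers: list = field(default_factory=list)

# Example from the text: the Web Intelligence Scheduling Service runs
# inside the Adaptive Job Server, which belongs to a node managed by a SIA.
job_server = Server("AdaptiveJobServer", pid=4242,
                    services=[Service("WebIntelligenceSchedulingService")])
node = Node("SIA_NODE1", servers=[job_server])
```

The model makes the key point of the terminology explicit: a service has no PID of its own, because it lives inside its parent server's process.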
Installation Approaches
The BI platform can be installed in various ways:
● On a single computer
● Spread across different computers on an intranet
● Separated over a wide-area network (WAN)
BI Platform Tiers
The BI platform is composed of the following tiers, which are optimized for specific tasks and
operations:
Deployment Approaches
The BI platform can be deployed in various ways:
● Single-server deployment
● Multi-server distribution deployment
● Scaling up and/or scaling out
● High-availability deployment
This figure illustrates a highly available SAP BusinessObjects 4.3 deployment with system/
auditing databases and the storage tier as a network share.
Network latency
This is the time a packet takes to make a round-trip between two networked hosts.
Network bandwidth
This measures the amount of data that can be sent from point A to point B in a given
amount of time.
Network latency is the time a packet takes to make a round trip between two networked
hosts: the packet must arrive at its destination and return to the originating client.
Network bandwidth measures the amount of data that can be sent from point A to point B in a
given amount of time. Effectively, bandwidth is the real amount of data moved from point to
point, not the theoretical speed listed "on the box." Real measurements are taken by sending
data of a known size and measuring the time it takes for all of it to arrive. Relying only
on NIC and router specifications is likely to result in imprecise, higher-than-effective
measurements of bandwidth, because those specifications indicate only the maximum bandwidth
the appliance might achieve. Measuring bandwidth across shared connections and network
infrastructure will generally yield a substantially smaller number.
● ttcp
● iperf
● bwping
● netperf
Note:
While ping is generally available, network engineer support is typically required to
test bandwidth. Running tools like netperf without permission is likely to raise
alarms.
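The measurement principle described above, timing the transfer of a known amount of data rather than trusting hardware specifications, reduces to two small formulas. The sketch below is for illustration only and is not a substitute for tools such as iperf; the function names are invented:

```python
def effective_bandwidth(bytes_sent: int, seconds: float) -> float:
    """Effective bandwidth = actual data moved / time taken (bytes/sec)."""
    return bytes_sent / seconds

def round_trip_latency(send_time: float, receive_time: float) -> float:
    """Latency = round-trip time of a single packet (seconds)."""
    return receive_time - send_time

# 100 MB transferred in 10 seconds is 10 MB/s effective bandwidth,
# regardless of the theoretical NIC speed listed "on the box".
rate = effective_bandwidth(100_000_000, 10.0)   # 10,000,000 bytes/sec
```

In practice, a tool such as iperf performs exactly this calculation after streaming a known payload between two hosts, which is why its numbers come in below the advertised link speed on shared infrastructure.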
● HTTP(S) requests
● Reporting Database requests
● CMS database requests, and system and API calls
Note:
Each network connection type is impacted differently by network latency and
bandwidth limitations. The nature of network request types will determine the
deployment options. By understanding the request types, you will be better
prepared to choose among deployment options.
HTTP(S) Requests
These requests are usually long, cover a large distance, and are few per user:
● A single request results in a number of processes in the back end, such as logon, report
open, or refresh requests.
● An HTML response can be a frameset, which pulls in additional HTML pages.
● The HTML pages themselves can pull in images, JavaScript files, Query Panel, and other
objects.
● Images, JavaScript files, and other objects that have been cached are not downloaded
again, but a timestamp check is performed to determine whether the files are unchanged.
Clearing the browser cache forces a download.
● Using many simultaneous connections reduces latency but increases bandwidth concerns.
● Each browser allows a different number of simultaneous connections.
The Nature of CMS Database Requests and System and API Calls
The following are considerations when discussing the nature of CMS database requests and
system and API calls:
● Many of these requests and calls occur, and they occur frequently.
● They are short in duration—usually counted in milliseconds and less than one second.
● One HTTP request, such as logon, open report, refresh report, and others, can generate
multiple system and API calls.
● However, the exception is a request to retrieve documents from FRS, which is more
bandwidth-bound than latency-bound.
● The network would be impacted more by multiple and frequent requests than an HTTP(S)
request over distance.
● You might obtain a slight, barely noticeable gain on web requests, but it would be
insignificant compared to the costs incurred.
● CMS services ping other CMSs and BI platform services continuously.
Agile Visualization
Agile Visualization Functions and Components
The following are functions and components of agile visualization:
● Using SAP BusinessObjects Lumira Designer, users can quickly massage, transform, and
personalize data without scripting.
● Users can access the tools everywhere using mobile, web-based, and desktop access, both
online and offline.
● Users receive the best SAP BW support with Analysis.
Note:
SAP BusinessObjects Lumira Discovery will be deprecated in 2024. Its
replacement will be SAP Analytics Cloud (SAC). Please plan accordingly.
SAP BusinessObjects Lumira Discovery helps you understand your organization’s data,
personalize it, and create beautiful content:
SAP BusinessObjects Lumira Discovery provides agile visualizations that support real-time
understanding of data, both big and small. Users can combine their own desktop data with
other information, then share the outcomes with colleagues. Flexible, self-service cloud
solutions like SAP BusinessObjects Lumira help your organization leave rigid spreadsheets
behind, generate insights on the go, and visually present a data-driven story.
SAP BusinessObjects Lumira Discovery fully integrates with the SAP core enterprise BI
platform, the SAP BusinessObjects Business Intelligence suite, to ensure true data
governance and facilitate collective insight.
● Depth of SAP Crystal Reports SDK and ability to embed SAP Crystal Reports.
● Ability to access the tools everywhere using mobile, web-based, and desktop access, both
online and offline.
● Quick construction of ad-hoc queries and reports, without knowledge of SQL or underlying
data structures.
● Market-leading self-service reporting solution for rapid report creation.
● Support for a multi-source semantic layer to bring together different sources of
information.
● Ability to access the tools everywhere using mobile, web-based, and desktop access, both
online and offline.
● Universes
● Query generation
● Calculator
● Local cache (a microcube)
● Query panel
● Database connectivity parameters
The semantic layer frees the business user from the complexity of the data structures and
technical names. It enables business users to access, interact, and analyze their data
regardless of the underlying data sources and schemas.
● A universe allows business users to analyze and report on corporate data in a non-
technical language. It is a collection of the following metadata objects:
- Dimensions
- Measures
- Hierarchies
- Attributes
- Pre-defined calculations
- Functions
- Queries
● There are two types of universes: multi-source-enabled relational universes and
dimensional universes.
The information design tool is an SAP BusinessObjects metadata design environment that
enables a designer to extract, define, and manipulate metadata from relational and OLAP
sources to create and deploy SAP BusinessObjects universes. The objects in the universe
form the metadata object layer (the business layer), which is built on a relational database
schema or an OLAP cube. The objects map directly to the database structures via SQL or
MDX expressions. A universe includes connections identifying the data sources so queries
can be run on the data. The role of the universe is to provide the business user with
semantically understandable business objects, such as Customer, Country, Quarter,
Revenue, or Margin. The user is then free to analyze data and create reports using the
business objects that are relevant to their needs, without requiring knowledge of the
underlying data sources and structures.
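The role described above, mapping business names to database expressions so that a user never writes SQL, can be illustrated with a toy query generator. The object names, table names, and mapping below are invented for illustration; a real universe is far richer and also carries connections, hierarchies, and security:

```python
# A toy "business layer": business-object names mapped to SQL expressions.
UNIVERSE = {
    "Customer": "c.customer_name",
    "Country":  "c.country",
    "Revenue":  "SUM(o.amount)",
}

def generate_sql(objects, table="customers c JOIN orders o ON o.cid = c.id"):
    """Translate a list of business-object names into a SQL statement,
    grouping by the dimensions whenever a measure (aggregate) is present."""
    exprs = [UNIVERSE[name] for name in objects]
    dims = [e for e in exprs if not e.startswith("SUM")]
    sql = f"SELECT {', '.join(exprs)} FROM {table}"
    if dims and len(dims) < len(exprs):
        sql += f" GROUP BY {', '.join(dims)}"
    return sql

query = generate_sql(["Country", "Revenue"])
```

The user asks for "Country" and "Revenue"; the generated statement selects `c.country` and `SUM(o.amount)` with the appropriate GROUP BY, exactly the kind of translation the semantic layer performs behind the query panel.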
Features of Universes
● Access to all the major relational sources with multi-source enabled relational universes.
● Support for SAP NetWeaver Business Warehouse 7.0.1 and 7.3.
- With multi-source enabled relational universes.
- With direct access to BEx Queries (no universe).
● Support of OLAP sources: Microsoft Analysis Services 2005 and 2008 with dimensional
universes.
● General access capabilities via open database connectivity (ODBC).
Benefits of Universes
● The SAP BusinessObjects BI 4.3 Client Installer can be used to deploy all of the client-side
components for the BI platform. This does not include SAP Crystal Reports or SAP Crystal
Reports Dashboard Design.
● Platform Services
● Connection Services
● Data Federator Services
● SAP Crystal Report Services
● Web Intelligence Services
● Mobile Services
● Lifecycle Management
● Subversion
● Tomcat 8.0/9.0
SAP BusinessObjects Analysis for Microsoft Office
This plug-in integrates into Microsoft Excel and Microsoft PowerPoint to be used as an
OLAP tool. With access to OLAP data sources, users can combine information from
different systems within a single workspace.
SAP BusinessObjects Analysis for online analytical processing (OLAP)
This OLAP tool (formerly called Voyager) works with multidimensional data. With access
to OLAP data sources, users can combine information from different systems within a
single workspace.
SAP Crystal Reports for Enterprise
This Java-based report design tool is used to create and integrate powerful reports in the
BI platform.
Data Federation Administration Tool
This tool (formerly known as Data Federator) enables multi-source universes by
distributing queries across data sources and lets you federate data through a single data
foundation. Administrators use this tool to optimize data federation queries and fine-tune
the data federation query engine for the best possible performance.
Information Design Tool
This SAP BusinessObjects metadata design environment lets a designer extract, define,
and manipulate metadata from relational and OLAP sources to create and deploy UNX
universes.
Promotion Management Tool (PMT)
The Promotion Management Tool (PMT) provides centralized views to monitor the
progress of the entire lifecycle process. The PMT is used to move content from one BI
platform to another of the same version.
Repository Diagnostic Tool
This tool scans, diagnoses, and repairs inconsistencies that might occur between a CMS
system database and an FRS file store. It can also report the status of repairs and what
actions have been completed.
Translation Management Tool
This tool defines multilingual universes and manages translations of universes and their
Web Intelligence documents and prompts.
Monitoring Database
This embedded Java Derby database stores system configuration and component
information for SAP Supportability in the SAP BusinessObjects BI platform.
Universes (UNX)
UNX universes are identified by the .unx file extension. These universes are based on the
new semantic layer in the SAP BusinessObjects BI platform and are built using the new
Information Design Tool.
Adaptive Processing Server
This generic server hosts services responsible for processing requests from a variety of
sources.
SAP Crystal Reports Processing Server
The server responds to page requests from SAP Crystal Reports for Enterprise by
processing reports and generating encapsulated page format (EPF) files. The server
retrieves data for the report from the latest instance or directly from the data source.
After it generates the report, it converts the data to one or more EPF files, which are then
sent to the SAP Crystal Reports Cache Server. The EPF format supports page-on-
demand access, so only the requested page is returned, not the entire report. System
performance is improved and unnecessary network traffic is reduced for large reports.
Dashboard Analytics Server
This server is used by BI workspaces to create and manage corporate and personal BI
workspace module content.
Web interfaces
● Central Management Console (CMC)
● BI Launch Pad
Semantic Layer
● UNX
● Multi-source universes
Dashboards
Advanced Analysis – Web
The Central Management Console (CMC) is a web-based tool that offers a single interface
through which you can perform almost every day-to-day administrative task, including user
management, content management, and server management.
LESSON SUMMARY
You should now be able to:
● Describe the SAP strategic focus on BI
● Describe the BI suites
● Describe the BI platform system
● Describe the integration of the BI platform suite
LESSON OVERVIEW
This lesson describes the architectural tiers of the BI platform and identifies key process
flows through the tiers; for example, validating users, scheduling jobs, and on-demand
viewing of Web Intelligence documents.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe key architecture flows
● Client
● Web
● Intelligence (Management)
● Storage
● Processing
● Data
The figure illustrates the location of components among the conceptual tiers.
2. The Web application server determines that the request is a logon request. The Web
application server sends the user name, password, and authentication type to the Central
Management Server (CMS) for authentication.
3. The CMS validates the user name and password against the appropriate database. In this
case, Enterprise authentication is used, so user credentials are authenticated against the
CMS system database.
4. Upon successful validation, the CMS creates a session in memory for the user.
5. The CMS sends a response to the Web application server to let it know that the validation
was successful.
6. The Web application server generates a logon token in memory for the user session. For
the rest of this session, the Web application server uses the logon token to validate the
user against the CMS. The Web application server also generates the next Web page to
send to the Web client.
7. The Web application server sends the next Web page to the Web server.
8. The Web server sends the Web page to the Web client.
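The token-based validation in these steps can be sketched as a minimal in-memory model. All names, the token format, and the user store below are invented for illustration; real CMS authentication supports multiple authentication types and stores credentials in the CMS system database:

```python
import secrets

# Toy CMS: a user store standing in for the CMS system database,
# plus the sessions the CMS keeps in memory (step 4).
USERS = {"alice": "s3cret"}
SESSIONS = {}   # session id -> user name

def cms_validate(user: str, password: str):
    """Steps 3-4: validate credentials; on success, create a session."""
    if USERS.get(user) != password:
        return None
    session_id = secrets.token_hex(8)
    SESSIONS[session_id] = user
    return session_id

def app_server_logon(user: str, password: str) -> str:
    """Steps 2, 5-6: forward credentials to the CMS, then generate the
    logon token the web tier reuses for the rest of the session."""
    session_id = cms_validate(user, password)
    if session_id is None:
        raise PermissionError("invalid credentials")
    return f"token-{session_id}"

token = app_server_logon("alice", "s3cret")
```

The essential point the flow makes is visible in the sketch: after the one credential check, subsequent requests carry only the logon token, so the password never travels again.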
2. The Web application server interprets the request and determines that the request is a
schedule request. The Web application server sends the schedule information, including
time and destination, to the specified CMS.
3. The CMS checks the CMS system database to ensure that the user has the rights required
to schedule the object. If the user has sufficient rights, the CMS adds a new record to the
CMS system database. The CMS also adds the instance to its list of pending schedules.
4. The CMS sends a response to the Web application server to let it know that the schedule
operation was successful.
5. The Web application server generates the next HTML page and sends it through the Web
server to the Web client.
2. The SIA looks in its cache to locate a CMS. If the SIA is configured to start a local CMS, and
the CMS is not running, the SIA starts the CMS and connects. If the SIA is configured to
use a running CMS (local or remote), it attempts to connect to the first CMS in its cache. If
the CMS is not currently available, it attempts to connect to the next CMS in its cache. If
no cached CMS is available, the SIA waits for one to become available. The CMS then
confirms the SIA’s identity to ensure that it is valid.
3. After the SIA has successfully connected to a CMS, it requests a list of servers to manage.
A SIA does not store information about the servers it manages. The configuration
information that dictates which server is managed by an SIA is stored in the CMS system
database and is retrieved from the CMS by the SIA when it starts.
4. The CMS queries the CMS system database for a list of servers managed by the SIA. The
configuration for each server is also retrieved.
5. The CMS returns the list of servers to manage, and their configuration, to the SIA.
6. For each server configured to start automatically, the SIA starts it with the appropriate
configuration and monitors its state. Each server started by the SIA is configured to use
the same CMS used by the SIA. Any servers that are not configured to start automatically
with the SIA will not start.
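The CMS fallback behavior in step 2, trying each cached CMS in order and waiting if none responds, can be sketched as follows. The function names and the availability callback are hypothetical, introduced only to illustrate the selection logic:

```python
def connect_to_cms(cached_cms_list, is_available):
    """Try each CMS in the SIA's cache in order. Return the first CMS that
    is reachable, or None, meaning the SIA must wait for one to come up."""
    for cms in cached_cms_list:
        if is_available(cms):
            return cms
    return None

# Example: the first cached CMS is down, so the SIA connects to the second.
up = {"cms_primary": False, "cms_backup": True}
chosen = connect_to_cms(["cms_primary", "cms_backup"], lambda c: up[c])
```

A real SIA additionally verifies its identity with the CMS it reaches and may start a local CMS first, as the step describes; the sketch covers only the ordered-fallback part.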
2. When the scheduled job time arrives, the CMS locates an available Web Intelligence
Scheduling Service running on an Adaptive Job Server. The CMS sends the schedule
request and all information about the request to the Web Intelligence Scheduling Service.
3. The Web Intelligence Scheduling Service locates an available Web Intelligence Processing
Server, based on the Maximum Jobs Allowed value configured on each Web Intelligence
Processing Server.
4. The Web Intelligence Processing Server determines the location of the Input File
Repository Server (FRS) that houses the document and the universe metalayer file on
which the document is based. The Web Intelligence Processing Server then requests the
document from the Input FRS. The Input FRS locates the Web Intelligence document as
well as the universe file on which the document is based, and then streams them to the
Web Intelligence Processing Server. (This step also requires communication with the CMS
and the CMS system database to locate the required server and objects).
5. The Web Intelligence document is placed in a temporary directory on the Web Intelligence
Processing Server. The Web Intelligence Processing Server opens the document in
memory. The QT.dll generates the SQL from the universe on which the document is
based. The Connection Server libraries included in the Web Intelligence Processing Server
are used to connect to the data source, which is a relational database in this example. The
required data passes through QT.dll back to the Report Engine in the Web Intelligence
Processing Server, where the document is processed. A new successful instance is
created.
6. The Web Intelligence Processing Server uploads the document instance to the Output File
Repository Server. This step also requires communication with the CMS and the CMS
system database to locate the required server and objects.
7. The Web Intelligence Processing Server notifies the Web Intelligence Scheduling Service
(on the Adaptive Job Server) that document creation is completed. If the document is
scheduled to go to a destination, such as file system, FTP, SMTP, or Inbox, the Adaptive
Job Server retrieves the processed document from the Output File Repository Server and
delivers it to the specified destinations. That is not the case in this example.
8. The Web Intelligence Scheduling Service updates the Central Management Server with
the job status.
9. The CMS updates the job status in its memory, and then writes the instance information
to the CMS system database.
1. The user sends the view request from the BI launch pad Web client through the Web
server to the Web application server, where the BI launch pad Web application is running.
2. The Web application server recognizes the request as a request to view a Web Intelligence
document. The Web application server checks the CMS to ensure that the user has
sufficient rights to view the document.
3. The CMS checks the CMS system database to determine if the user has the appropriate
rights to view the document.
4. The CMS sends a response to the Web application server to confirm that the user has
sufficient rights to view the document.
5. The Web application server sends a request to the Web Intelligence Processing Server,
requesting the document.
6. The Web Intelligence Processing Server requests the document, and the universe file on
which the requested document is built, from the Input FRS. The universe file contains
metalayer information, including row-level and column-level security. The Input FRS sends
a copy of the document and universe to the Web Intelligence Processing Server. This step
also requires communication with the CMS and the CMS system database to locate the
required server and objects.
7. The Web Intelligence Report Engine runs on the Web Intelligence Processing Server. The
Report Engine opens the document in memory and launches QT.dll and a Connection
Server in process. QT.dll generates, validates, and regenerates the SQL and connects to
the database to run the query. The Connection Server uses the SQL to get the data from
the database to the Report Engine, where the document is processed.
8. The Web Intelligence Processing Server sends the viewable document page that was
requested to the Web application server.
9. The Web application server forwards the document page to the Web server.
10. The Web server sends the requested page to be rendered in the Web client.
Official Product Tutorials — SAP BusinessObjects Business Intelligence Platform 4.x: In the
SAP Community Network, search for the title.
LESSON SUMMARY
You should now be able to:
● Describe key architecture flows
LESSON OVERVIEW
In this lesson, we will review SAP BusinessObjects BI platform security concepts.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Administer security
Guest
This user account belongs to the Everyone group. This account is disabled by default and
is not assigned a password by the system. If you assign a password to this account,
single sign-on to the BI launch pad will be broken.
SMAdmin
This user account is a read-only account used by SAP Solution Manager to access BI
platform components.
Licensing
The BI platform supports four license types.
License Types
● Concurrent user
● Named user
Each license type grants and restricts access to particular tasks and applications. Depending
on which license you have, you may be unable to access some applications, create content, or
add documents to the repository.
Note:
Choose License Key in the CMC for more information on your licensing scheme.
Caution:
BI Viewer users cannot access the CMC.
User Settings
New users and groups are created in the CMC. When you create a new user account in the
CMC, you first must specify the user’s properties, before you configure group memberships
for the user. Groups are collections of users who share the same account privileges. For
instance, you might create groups that are based on department, role, or location. Groups
enable you to change the rights for users in one place (a group) instead of modifying the
rights for each user account individually. You can also assign object rights to a group or
groups.
User Rights and Permissions
After a user account has been created, you can modify the account properties. The properties
that can be modified include:
● Account Name
The account name is the unique identifier for a user account and is the user name entered
when logging into the SAP BusinessObjects Business Intelligence platform.
● Full Name
This optional field is used to capture the user’s full name. We recommend that you use this
field, particularly when managing many users.
● Email
This optional field is used to add the user’s email address and is used for reference only.
For example, if the user forgets their password sometime in the future, you can retrieve
their email address from this field to send them their password.
● Description
This optional field is used to add information about the user, such as their position,
department, or geographic location.
● Enterprise Password Settings
User password settings allow you to change the password and password settings for the
user.
Global password settings can be configured in the Authentication area of the Central
Management Console.
● Connection Type
This option specifies how the user connects to the SAP BusinessObjects Business
Intelligence platform based on the license agreement.
● Account is Disabled
This check box allows the Administrator to deactivate the user account, instead of
permanently deleting the account. This option is useful when administering users who will
be temporarily denied system access, such as employees taking parental leave.
Select the Account is disabled check box to disable the Guest account and make it
unavailable for use.
● Assign Alias
If a user has multiple accounts within SAP BusinessObjects Business Intelligence platform,
use this feature to link the accounts. This results in the user having multiple SAP
BusinessObjects Business Intelligence platform login credentials that map to one SAP
BusinessObjects Business Intelligence platform account.
You can also use the New Alias button to create a new alias.
Group Hierarchy
Groups are collections of users who share the same account privileges, so you may create
groups based on department, role, or location. Groups enable you to change the rights for
many users in one place (the group) instead of modifying each user account individually,
and you can also assign object rights to a group or groups.
In the Users and Groups area, you can create groups that give a number of people access to
a report or folder. You can also view the default group accounts summarized in the
following table.
Note:
To view groups in the CMC, in the tree panel, choose Group List. Alternatively, to
display a hierarchical list of all available groups, choose Group Hierarchy.
Administrators: Members of this group can perform all tasks in all of the SAP
BusinessObjects Business Intelligence platform applications (CMC, CCM, Publishing
Wizard, and BI launch pad). By default, the Administrators group contains only the
Administrator user.
Everyone: Each user is a member of the Everyone group.
QaaWS Group Designer: Members of this group have access to Query as a Web Service.
Report Conversion Tool Users: Members of this group have access to the Report
Conversion Tool application.
Translators: Members of this group have access to the Translation Manager application.
Universe Designer Users: Users who belong to this group are granted access to the
Universe Designer folder and the Connections folder. They can control who has access
rights to the Designer application. Add users to this group as needed. By default, no user
belongs to this group.
Group Settings
After a group is created, you can modify its membership to include other groups; groups
can contain other groups as subgroups. Group names must be unique. You can modify the
following group properties:
● Title
● Description
● Translations
● User Security
● Member of
● Profile Values
● Account Manager
In addition to enabling access control to your BI platform content, rights enable you to delegate user and
group management to different departments. Rights also provide your IT department with
administrative access to servers and server groups.
You can set rights on objects (for example, folders and documents) and principals (the users
and groups that access the objects). For example, to give a manager access to a particular
folder, add the manager to the access control list (the list of principals who have access to an
object) for the folder. You cannot give the manager access by configuring the manager's
rights settings in the Users and Groups area. The User Security settings for the manager in
the Users and Group area are used to grant other principals (such as delegated
administrators) access to the manager as an object in the system. In this way, principals
themselves are objects that other principals with greater rights can manage.
Rights on objects can be Granted, Denied, or Not Specified. If a right is Not Specified, the right
is denied. In addition, if the access control settings result in a right being both Granted and
Denied to a User or Group, the right is denied.
An important exception to this rule occurs when a right is explicitly set on a child object that
contradicts the rights inherited from the parent object. In this case, the right set on the child
object overrides the inherited rights. This exception also applies to users who are members of
groups. If a user is explicitly granted a right that is denied to the user's group, the right set on
the user overrides the inherited right from the group.
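The resolution rules described above can be illustrated with a short sketch. This is a teaching simplification, not the actual platform API; the function name, value names, and call signature are invented for illustration:

```python
# Sketch of the effective-rights rules: a right is Granted, Denied, or
# Not Specified; Not Specified resolves to denied, a Granted/Denied
# conflict resolves to denied, and an explicit setting on a child object
# (or on a user) overrides what is inherited from the parent folder (or
# from the user's groups).
GRANTED, DENIED, NOT_SPECIFIED = "granted", "denied", "not specified"

def effective_right(explicit, inherited):
    """explicit: the setting made directly on the child object or user;
    inherited: list of settings inherited from parent folders / groups."""
    if explicit != NOT_SPECIFIED:
        return explicit == GRANTED          # explicit setting always wins
    if GRANTED in inherited and DENIED not in inherited:
        return True                         # granted with no conflict
    return False                            # denied, conflicting, or unspecified

# A user explicitly granted a right that is denied to the user's group keeps it:
assert effective_right(GRANTED, [DENIED]) is True
# Granted via one group but denied via another resolves to denied:
assert effective_right(NOT_SPECIFIED, [GRANTED, DENIED]) is False
# Not specified anywhere resolves to denied:
assert effective_right(NOT_SPECIFIED, []) is False
```

The asserts at the end restate the three rules from the text: child/user overrides win, conflicts deny, and unspecified denies.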
Rights Terminology
Access levels
Access levels are groups of rights that users frequently need. They allow administrators
to set common security levels quickly and uniformly rather than requiring that individual
rights be set one by one. The BI platform comes with several predefined access levels.
These predefined access levels are based on a model of increasing rights: Beginning with
View and ending with Full Control, each access level builds on the rights granted by the
previous level.
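The cumulative model can be pictured as nested sets of rights. The level names below follow the predefined access levels mentioned above; the individual right names inside each set are invented for illustration:

```python
# Hypothetical sketch: each predefined access level builds on the rights
# granted by the previous level (right names are illustrative only).
VIEW = {"view"}
SCHEDULE = VIEW | {"schedule"}
VIEW_ON_DEMAND = SCHEDULE | {"refresh_on_demand"}
FULL_CONTROL = VIEW_ON_DEMAND | {"add", "edit", "delete", "modify_security"}

# Each level is a superset of the previous one, ending with Full Control.
assert VIEW <= SCHEDULE <= VIEW_ON_DEMAND <= FULL_CONTROL
```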
Inheritance
The BI platform recognizes two types of inheritance: group inheritance and folder
inheritance. Group inheritance allows principals to inherit rights as the result of group
membership. Folder inheritance allows principals to inherit any rights that they have
been granted on an object's parent folder.
Top-level folder security
Top-level folder security is the default security set for each specific object type (for
example Universes, Web Intelligence Application, Groups, and Folders). Each object type
has its own top-level folder (root folder) from which all the subobjects inherit rights.
If there are any access levels common to certain object types that apply throughout the
whole system, set them at the top-level folder specific to each object type. For example, if
the Sales group requires the View access level to all folders, you can set this access at the
root level for Folders.
Folder-level security
Folder-level security enables you to set access-level rights for a folder and the objects
contained within that folder. While folders inherit security from the top-level folder (root
folder), subfolders inherit the security of their parent folder. Rights set explicitly at the
folder level override inherited rights.
Object-level security
Objects in BI platform inherit security from their parent folder. Rights set explicitly at the
object level override inherited rights.
Type-specific Rights
Type-specific rights are rights that affect specific object types only, such as SAP Crystal
reports, folders, or access levels. Type-specific rights consist of the following:
● General rights for the object type
These rights are identical to general global rights (for example, the right to add, delete, or
edit an object), but you set them on specific object types to override the general global
rights settings.
● Specific rights for the object type
These rights are available for specific object types only. For example, the right to export a
report's data appears for SAP Crystal reports but not for Microsoft Word documents.
The diagram Type-specific rights example illustrates how type-specific rights work. Right 3
represents the right to edit an object. Group A is denied Edit rights on the top-level folder and
granted Edit rights for SAP Crystal reports in the folder and subfolder. These Edit rights are
specific to SAP Crystal reports and override the rights settings on a general global level. As a
result, members of Group A have Edit rights for SAP Crystal reports but not the XLF file in the
subfolder.
Type-specific rights are useful because they let you limit the rights of principals based on
object type. Consider a situation in which an administrator wants employees to be able to add
objects to a folder but not create subfolders. The administrator grants Add rights at the
general global level for the folder, and then denies Add rights for the folder object type.
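As a sketch of this precedence (again a simplification, not the real API; the object-type and right names are invented), a type-specific setting, when present, overrides the general global setting:

```python
# Illustrative resolver: a type-specific rights setting, if one exists for
# the object's type, takes precedence over the general global setting.
def resolve(right, obj_type, general, type_specific):
    """general: {right: setting}; type_specific: {obj_type: {right: setting}}."""
    setting = type_specific.get(obj_type, {}).get(right)
    if setting is None:
        setting = general.get(right, "not specified")
    return setting == "granted"

# Mirrors the Group A example: Edit denied at the general global level,
# but granted specifically for SAP Crystal reports.
general = {"edit": "denied"}
type_specific = {"crystal_report": {"edit": "granted"}}

assert resolve("edit", "crystal_report", general, type_specific) is True
assert resolve("edit", "xlf_file", general, type_specific) is False
```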
Rights are divided into collections based on the object types to which they apply.
Rights Collections
General
These rights affect all objects.
Content
These rights are divided according to particular object types. Examples of content object
types include SAP Crystal reports and Adobe Acrobat PDFs.
Application
These rights are divided according to which BI platform application they affect. Examples
of applications include the CMC and BI launch pad.
System
These rights are divided according to which core system component they affect.
Examples of core system components include Calendars, Events, and Users and Groups.
Type-specific rights are in the Content, Application, and System collections. In each
collection, type-specific rights are further divided into categories based on object type.
LESSON SUMMARY
You should now be able to:
● Administer security
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Understand the SAP BusinessObjects 4.3 strategy and direction
● Understand how SAP BusinessObjects and SAP Analytics Cloud (SAC) can be configured
as a blended hybrid (on-premise/cloud) solution
Figure 24: SAP Analytics Cloud and SAP BusinessObjects working together as part of an Analytics Hybrid
Solution
SAP will continue to provide maintenance and support for SAP BusinessObjects 4.3 on-premise
until at least 2027. At that point, SAP will re-evaluate continued support and
maintenance. Information about this will be disseminated publicly as this timeline
approaches.
Figure 26: Using Universes as a data source for SAP Analytics Cloud (SAC)
You can also use existing Web Intelligence documents in the BI Repository as a data source
for SAC. An SAC user creates an SAC model derived from an existing Web Intelligence
document resident in the BI Repository.
Figure 27: Consume a Web Intelligence document data model in SAP Analytics Cloud (SAC)
LESSON SUMMARY
You should now be able to:
● Understand the SAP BusinessObjects 4.3 strategy and direction
● Understand how SAP BusinessObjects and SAP Analytics Cloud (SAC) can be configured
as a blended hybrid (on-premise/cloud) solution
LESSON OVERVIEW
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe SAP BusinessObjects BI Platform Support Tool
SAP Host Agent (installed on each BI Platform node and Tomcat node): Hardware metrics
and configurations are collected using the SAP Host Agent (Hardware Analysis and Patch
History Analysis). E2E tracing has the option to use the SAP Host Agent when gathering log
files once the trace is complete.
Tomcat JMX Configuration (each Tomcat node in your landscape): Metrics are collected
from Tomcat via JMX (Web Application Server Analysis).
In the BI Platform Support Tool 2.1, some of the functionality now requires the SAP Host
Agent to collect the analysis data. For example, the SAP Host Agent is required for
the Hardware Analysis Summary report and the Patch History Analysis report. It is also used
to compress and gather the log files from each node during an E2E trace. The SAP Host Agent
is a service with a very lightweight performance footprint that provides a secure
communication channel between the BI Platform Support Tool client and your BI and Web
Application Server nodes.
● If you have never installed the SAP Host Agent in your environment, to get the most
functionality from the Landscape Analysis Report and E2E Trace Wizard, you must
complete this step.
● If you have already installed the SAP Host Agent to use with the BI Platform CMC
Monitoring application, then you do not need to install the SAP Host Agent again on the BI
nodes. However, if possible, install the SAP Host Agent on your Apache Tomcat server.
● If you already have the Solution Manager Diagnostic Agent 7.4 on your SAP BI platform
nodes and your Web Application Server nodes, it is not necessary to install the SAP Host
Agent. It is already included as part of the Diagnostic Agent.
● If you are running Diagnostic Agent 7.3 or lower, it is recommended that you update the
SAP Host Agent to the latest 7.2 version (older SAP Host Agents that ship with older
Diagnostic Agents may not be able to collect the required host metrics). The upgrade is a
safe and quick process; refer to the Manual Upgrade of SAP Host Agent page (https://
help.sap.com/saphelp_nw73ehp1/helpdata/en/ab/f882afa4984d818fa2f8d530ad0e83/
content.htm).
● The SAP Host Agent is not required on the client running the BI Platform Support Tool.
● The SAP HOST AGENT 7.21 Patch 45 or higher and SAPCAR 7.21 archives are required.
They can be downloaded from the SAP Software Download Center (https://
support.sap.com/swdc).
LESSON SUMMARY
You should now be able to:
● Describe SAP BusinessObjects BI Platform Support Tool
Learning Assessment
1. What are some of the ways that implementing a BI strategy can help the IT department?
2. SAP BusinessObjects Cloud for business intelligence may be hosted with Amazon Web
Services.
Determine whether this statement is true or false.
X True
X False
3. SAP HANA-based Analytics are reported by schedule, and therefore not real-time.
Determine whether this statement is true or false.
X True
X False
4. Use SAP HANA Rapid Deployment Solutions when you want to do reporting analysis on an
ECC system.
Determine whether this statement is true or false.
X True
X False
5. SAP offers many BI tools. At first glance, an organization might think the varied choice of
tools would be too confusing for users or too complicated for IT to develop and support.
Why should an organization not pick just one SAP BI tool that fits most of its business
scenarios and use that tool for all of its scenarios?
X A Client Tools
X C Lifecycle Management
X D Configuration Services
7. Which of the following is a third party component that can be installed on the platform?
Choose the correct answer.
X B Linux Translator
X C Atlassian JIRA
X D Subversion
X True
X False
X True
X False
10. Match the tiers with the components that function within the tiers.
Match the item in the first column to the corresponding item in the second column.
11. During the process to set a schedule for a Web Intelligence document, the Web
Application Server directly communicates with the BI Launch Pad.
Determine whether this statement is true or false.
X True
X False
X True
X False
X A Superuser
X B Guest
X C BI Viewer
X D Administrator
X E Named user
X A Department
X B Login
X C Location
X D Role
X E Status
17.
Choose the correct answers.
Learning Assessment - Answers
1. What are some of the ways that implementing a BI strategy can help the IT department?
The IT organization can align itself with business partners and formal business needs. The
organization can plan for short, medium, and long term projects. IT can align its efforts
with strategic business goals and deliver measurable results. The company can create
business justifications for an enterprise-wide scope and an end-to-end BI approach,
including data management.
2. SAP BusinessObjects Cloud for business intelligence may be hosted with Amazon Web
Services.
Determine whether this statement is true or false.
X True
X False
3. SAP HANA-based Analytics are reported by schedule, and therefore not real-time.
Determine whether this statement is true or false.
X True
X False
4. Use SAP HANA Rapid Deployment Solutions when you want to do reporting analysis on an
ECC system.
Determine whether this statement is true or false.
X True
X False
5. SAP offers many BI tools. At first glance, an organization might think the varied choice of
tools would be too confusing for users or too complicated for IT to develop and support.
Why should an organization not pick just one SAP BI tool that fits most of its business
scenarios and use that tool for all of its scenarios?
SAP BI offerings meet four sets of client requirements: (1) Reporting, (2) Monitoring, (3)
Analyzing, and (4) Discovering. For each of these requirement areas, SAP has designed
products to meet the specific needs of its large and diverse customer base. When an
organization is deciding which SAP BusinessObjects BI solution to choose for each of
these requirement areas, it should determine who will create the information, who will use
the information, what the business requirements are, what the data sources are, and how
the data sources will connect to the tools.
X A Client Tools
X C Lifecycle Management
X D Configuration Services
7. Which of the following is a third party component that can be installed on the platform?
Choose the correct answer.
X B Linux Translator
X C Atlassian JIRA
X D Subversion
X True
X False
X True
X False
10. Match the tiers with the components that function within the tiers.
Match the item in the first column to the corresponding item in the second column.
11. During the process to set a schedule for a Web Intelligence document, the Web
Application Server directly communicates with the BI Launch Pad.
Determine whether this statement is true or false.
X True
X False
X True
X False
X A Superuser
X B Guest
X C BI Viewer
X D Administrator
X E Named user
X A Department
X B Login
X C Location
X D Role
X E Status
Access levels are groups of rights that users frequently need. They allow administrators to
set common security levels quickly and uniformly rather than requiring that individual
rights be set one by one.
17.
Choose the correct answers.
Lesson 1
Assessing your Organization's Environment
UNIT OBJECTIVES
LESSON OVERVIEW
In this lesson, we consider the importance of accurately assessing the needs of your
organization prior to deploying. We will discuss technical requirements such as hardware,
operating systems, databases, and Web application servers.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Assess your organization's environment
2. Review the key concepts you need to consider for your deployment, including operating
system, database, application server considerations, security, performance and
scalability, and high availability.
Note:
It is important to note that each deployment is unique. The flexibility of the SAP
BusinessObjects Business Intelligence platform service-based architecture allows
you to tailor the deployment to serve your organization’s requirements as
precisely as possible.
Operating Systems
The resources and conventions used in your existing network environment affect how you
deploy the SAP BusinessObjects Business Intelligence platform.
Different deployment options are available to you, depending on the operating systems, web
application servers, database servers, and authentication method you plan to use. Other
conventions used in your current environment may also affect how you deploy the SAP
BusinessObjects Business Intelligence platform, such as security, performance monitoring,
and design for high availability.
The SAP BusinessObjects Business Intelligence platform runs on Microsoft Windows and Unix
(including Linux) operating systems.
An administrator account must be used to install the SAP BusinessObjects Business
Intelligence platform on Windows operating systems.
Note:
Review the latest SAP BusinessObjects Business Intelligence Platform Product
Availability Matrix (PAM) available at http://help.sap.com
Web Servers
Although Web application servers come with built-in Web server functionality, the SAP
BusinessObjects Business Intelligence platform also supports the separation of Web and Web
application servers into a de-paired configuration. In a de-paired configuration, the Web
server will serve static and cached content to offset a portion of the requests sent to the Web
application server.
Note:
Multi-Homed Environment
The SAP BusinessObjects Business Intelligence platform supports multi-homed
environments, in which a server has two or more network addresses. This allows servers to be
configured to receive requests from one network and transmit requests to another.
Security
Your organization’s security policies affect how you deploy the SAP BusinessObjects
Business Intelligence platform on your network. Do you plan to use the system’s built-in
Product Coexistence
Several SAP BusinessObjects Business Intelligence platform products can exist on the same
host.
Caution:
All SAP BusinessObjects Business Intelligence platform products installed at
your organization must have the same patch level. For example, if your
organization upgrades SAP BusinessObjects Business Intelligence platform
server products to a service pack or fix pack, all add-ons, Client Tools, and stand-
alone products must also be upgraded. Similarly, if one product is downgraded,
all other products should also be downgraded.
Deployment Checklist
This section provides a checklist of the steps to perform when planning a deployment of
the SAP BusinessObjects Business Intelligence platform. The checklist outlines the major
tasks to be completed during the planning phase of your deployment.
LESSON SUMMARY
You should now be able to:
● Assess your organization's environment
Lesson 1
Identifying Factors to Consider when Deploying a BI Platform 4.3 System
Lesson 2
Identifying Factors that Influence the Deployment Process
Lesson 3
Identifying Factors to Consider when Installing SAP BusinessObjects Platform 4.3
Lesson 4
Identifying Factors to Consider when Integrating SAP Analytics Cloud
Lesson 5
Identifying Factors to Consider when Configuring a Deployment
UNIT OBJECTIVES
LESSON OVERVIEW
In this lesson, we will review important factors to consider when planning an SAP
BusinessObjects Platform 4.3 deployment. We will consider virtualization strategies and
sizing considerations.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify factors to consider when designing and sizing a deployment
● Virtualization
● Best practices for deployment architecture
● Sizing and configuration
● Fail-over methods
● Load balancing
● Clustering
● Backup strategy
● File storage
Everything must be planned up front, before beginning the implementation, so that you do not
encounter any problematic surprises.
Virtualization Concepts
Considerations for Virtualization
When planning a deployment, consider answering the following virtualization questions:
● Are you getting all of the central processing unit (CPU) power for the CPU licenses that you
paid for?
● Are other VMs on the same machine jamming the Input/Output (I/O) paths from the host?
● Is memory being overcommitted without your knowledge?
It is recommended that you become familiar with VMware concepts and terminology, so
that you can better negotiate with your IT infrastructure providers. See Additional
Information about Designing and Sizing a Deployment for more information.
● The plan should include monitoring solutions, including Wily Introscope and Solution
Manager.
● It should accommodate database clients and middleware downloads, especially 32-bit
and 64-bit DSN administration.
● Other components should be included, such as Chrome Browser and Java 8.
● It should take into account database availability and location (CMS, Audit, Trending),
threads, and deadlocks.
● Ensure you have planned enough memory and number of machines from the start.
● Changes made after tuning and configuring your system create much work.
● To plan for correct sizing, use the sizing estimator.
After you tune up and configure your system, you might retrospectively realize that you need
more memory or additional machines in a cluster. These changes create massive amounts of
rework. You must make sure that you have the correct amount of memory and number of
machines from the beginning, by designing your deployment options and architecture tiers.
To ensure that you have correctly designed the architecture, read the sizing guide and use the
sizing estimator. See Additional Information about Designing and Sizing a Deployment.
You must conduct the sizing calculation before you begin the configuration. Design the
architecture of your deployment options (tiers) before you begin the installation.
Performance Best Practices for VMware vSphere® 5.1: On the VMware website, search for
Performance Best Practices.
vSphere Resource Management — ESXi 5.1, vCenter Server 5.1: On the VMware website,
search for vSphere Resource Management — ESXi 5.1.
Official Product Tutorials — SAP BI Suite: On the SAP Community Network, search for
Tutorials — BI Suite.
Sizing and Deploying SAP BI 4 and SAP Lumira: On the SAP Community Network, search
for Primary Sizing and Deployment Resources for SAP BI 4.
LESSON SUMMARY
You should now be able to:
● Identify factors to consider when designing and sizing a deployment
LESSON OVERVIEW
In this lesson, we will review important factors that influence the deployment process. We will
consider hardware, network topology, and infrastructure design.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify factors that influence the deployment process
● Web tier
● Management and Intelligence tier
● Processing tier
● File Storage area
● Database tier
There are a few critical pieces of this architecture to consider when planning a deployment:
1. Web tier servers, including static content (preferably hosted on a dedicated Web server)
and application servers, processing dynamic Java content.
3. Adaptive Processing Servers (APS) that host approximately 22 individual services in the
BI landscape and that run in a Java architecture.
Deployment Workflow
Planning is essential to a well designed system landscape. Do not begin running installers
before the entire landscape has been completely planned. If the planning phase is thorough,
the implementation and deployment phases are time-consuming but simple. The
post-installation configuration phase takes a large portion of the deployment time. This
phase is complex, but can be performed in a methodical, step-by-step process.
Your enterprise needs to make any proposed system future-proof. Be sure to size to allow for
growth. Work with your vendors to obtain the hardware you need that allows you to easily
scale up when necessary.
How you design the architecture of your solution is a pivotal decision. The solution design
cannot be changed after you have started building, so be sure that it is fully functional before
moving on from the design phase. When planning your design, begin by answering some
questions to help you select a design strategy:
● Are you going to split out the architecture tiers vertically? Answer this question with “yes”
because it improves the user experience.
● Are you going to cluster nodes horizontally? This form of node clustering improves balance
and fault tolerance, so it is a preferable method.
Keep in mind that your existing IT infrastructure (and its limitations) could negatively impact
the high ambitions you have for this system. Be sure to determine if you will be hampered by a
substandard network or storage system, and design your solution to avoid or overcome these
kinds of impediments.
SAP BusinessObjects Business Intelligence platform 4.1: Installation, Upgrade,
Deployment: On the SAP Help Portal, select Analytics → Business Intelligence → Business
Intelligence Platform (Enterprise) → Platform 4.1 → Installation, Upgrade, Deployment.
LESSON SUMMARY
You should now be able to:
● Identify factors that influence the deployment process
LESSON OVERVIEW
In this lesson, we will review important factors to consider when installing SAP
BusinessObjects Platform 4.3.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify factors to consider when installing a deployment
● Remember the credentials and parameters for the installer, including passwords and the
cluster key.
● Make sure Secure Socket Layer (SSL) is off, and disable antivirus software, SMD Agent,
and similar software.
● Back up the whole system before attempting an upgrade.
● In the event of problems, restore the system from the backup, rather than by uninstalling a
patch.
● Failover
● Load balancing
● Clustering
● Backup strategy
● File storage
BI server nodes are clustered horizontally to perform fail-over and load balancing.
1. Copy the file in the default directory into the custom directory.
2. Add lines only for properties that differ from the default properties.
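The two steps above can be sketched as follows. This is a hedged illustration only: it assumes a simple key=value properties format, and the helper name is invented; the actual file names and locations depend on your deployment.

```python
# Hypothetical helper: given the text of the default properties file,
# return only the key=value lines whose values differ from the defaults.
# These are the only lines that need to go into the custom copy.
def override_lines(default_text, desired):
    defaults = {}
    for line in default_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            defaults[key.strip()] = value.strip()
    return [f"{k}={v}" for k, v in desired.items() if defaults.get(k) != v]

# Only the changed property ends up in the custom file:
assert override_lines("workdir=/tmp\ntimeout=30\n",
                      {"workdir": "/tmp", "timeout": "60"}) == ["timeout=60"]
```

Keeping the custom file limited to differing properties means later patches can change the defaults without being silently overridden by stale copies.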
Portal Integration
Considerations When Planning the Integration with Portals
BI 4.x can be integrated with MS SharePoint and SAP Enterprise Portal. Decide on the
portal integration and authentication mechanism that must be in place to avoid conflicts.
The binaries for portal integration are available in the SAP Mobile Platform. They are installed
and configured with help from the respective portal administrators. When BI 4.x is installed, it
is already enabled for connection to MS SharePoint or SAP Enterprise Portal. Therefore, after
the iViews are installed and configured on SAP Enterprise Portal, BI content can be accessed
from there and similarly from MS SharePoint.
Installation guides and preparation steps for different types of deployments: On the SAP
Web site, search for Business Intelligence — SAP Help Portal Page.
LESSON SUMMARY
You should now be able to:
● Identify factors to consider when installing a deployment
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify factors to consider when integrating with SAP Analytics Cloud
Such a hybrid system architecture provides a number of benefits in the areas of:
● Data Preparation and Data Wrangling
● Data Modeling
● Data Discovery
● Report Design
● Scheduling and Broadcasting
● Predictive Scenarios
In contrast to classic on-premise systems, SAP Analytics Cloud offers all central business
intelligence tasks within a single software-as-a-service (SaaS) platform. These tasks include:
● Business Intelligence/Analytics
● Planning
● Predictive Analytics
● Analytics Applications
This single platform eliminates the need to combine various front-end tools within an
additional platform.
All of the tools required to visualize, analyze, and create reports on data are available on a
single platform: SAP Analytics Cloud. The data used by SAC can come from existing data
sources, such as SAP BusinessObjects universes and SAP BW queries, or even directly
from a data source. The interfaces required for this integration flow into the architecture of
the system and usually take the form of a pure data connection.
The implementation of such an integrated approach for your company allows your
organization to take advantage of the strengths of both systems, such as:
● The agility of a self-service strategy
● Optimized business performance
Technical Considerations
When planning such an integration, an organization must not only invest in future
intelligence systems but also protect investments already made in existing, on-premise
business intelligence systems. Cooperation and communication between existing systems
and future-oriented systems requires consideration of data transformation and the
inevitable system expansions, which call for innovations and technologies suited to a
hybrid business intelligence landscape. Integrating the new technologies and intelligent
enterprise systems, as part of a new business technology platform, must be included in the
planning of the architecture of new system landscapes.
The technical implementation, which includes the aspects of landscape architecture, extends
to the following areas:
● Data connection: Use of different data sources with different properties
● Data types: Live or import connections
● Re-usability of already created data structures such as SAP BusinessObjects universes or
SAP BW queries
● Single sign-on: Compatibility of both systems
● System security
● Report requirements
● Presentation of content
● Platform services
● Collaboration
As data security and data storage are of great importance, considering the correct data
connection is a central point of the architecture. There are two types of connections in SAP
Analytics Cloud:
● Import connection: Enables a broader selection of data sources and in some cases also
requires a lower version of the source system than a live connection. Please check the
suitability of a data source in advance. In this situation, the selected data is extracted from
the data source once or at intervals and is stored on the cloud server.
● Live connection: Does not save any data permanently in the cloud but only uses data at the
time of execution/presentation of a report.
The result of a successful architecture plan that takes into account all aspects of the SAP
Business Intelligence systems ideally leads to an architecture that allows you to benefit
from the entire SAP BI portfolio. An evaluation that addresses current needs, but is also
optimized for future changes, increases the business value of your data collection through
optimized data evaluation and presentation. However, tracking the entire development of
the SAP BI portfolio and keeping pace with every change in technology does not always
make sense. Consider instead implementing the best of both worlds with the hybrid
scenario as an option: your business functions and areas can be available both on-premise
and via cloud systems. A sensible approach leads to a scalable architecture for an SAP
Business Intelligence portfolio, the scope and use of which can be adapted to your needs
at any time.
LESSON SUMMARY
You should now be able to:
● Identify factors to consider when integrating with SAP Analytics Cloud
LESSON OVERVIEW
In this lesson, we will review factors to consider when configuring an SAP BusinessObjects
BI Platform 4.3 deployment. We will consider optimal configuration settings that improve
system performance and scalability.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify factors to consider when configuring a deployment
● BI 4.3 ships with a Tomcat Web application server component that processes static
content.
● A dedicated Web server component allows for optimization of JavaScript and images.
BI 4.3 ships with a Tomcat Web application server component, whose built-in HTTP plug-in
also allows it to process static content. However, a dedicated Web server component, such as
Apache or IBM HTTP Server, allows significant benefits in the optimization of JavaScript and
images. Key points for Web tier optimization are:
● If you are not using a Web server, start planning to implement one. The benefits far
outweigh the costs, and the process is documented extensively on the SAP Community
Network (SCN).
● For Linux customers, Apache 2.4 offers an event-based threading model that dynamically
scales up and down to meet the user load. This performs well with robust applications like
BI.
● Adding an Apache Web server and using Wdeploy in split mode can improve navigation by
25% or more. This can even be added to the same machine that is running the Tomcat
server.
● BI 4.3 is leveraging an SAP JVM for the embedded application server for the first time.
However, it is currently using Java 8.
● Java 8 delivers significant performance improvements, especially around startup time and
garbage collection on multi-core CPUs, which you will use anyway.
● The APS is a pluggable service model that is Java based and hosts over 20 services.
● The APS is certified only for use with SAP JVM. Therefore, you currently cannot use G1 or
tiered compilation.
● The System Configuration wizard simplifies the optimization process.
The APS can host over 20 different services. It is an SAP pluggable service container for
interactive processes, such as charting, connectivity to SAP Business Warehouse (SAP BW),
and promotion management.
The APS is a Java-based process like the Web Application Server (WAS), but unlike the WAS,
it is not certified for use with any JVM other than SAP JVM. Therefore, you cannot switch to
a different JVM and enable the G1 collector. Instead, use the documented best practices from
SAP to adjust and configure these processes.
BI 4.3 provides the System Configuration wizard as a great first step for identifying the
right system configuration template, splitting APS processes into separate bundles, and
stopping unnecessary processes.
● The default monolithic APS is allocated significantly fewer resources than complex
product workflows require.
● Splitting APS processes manually is labor intensive.
● It is recommended to use the System Configuration wizard in 4.3, if it is available.
● Allocating sufficient heap space to the APS is the single biggest performance factor.
● For response-time-critical applications, the Parallel Old garbage collector is
recommended.
● SAP JVM is new to SAP BusinessObjects customers.
● Performance tests help you to determine whether to switch to
-XX:+UseConcMarkSweepGC.
● Version 4.3 uses the Parallel Old garbage collector.
Allocating sufficient heap space to the APS is the single biggest performance factor in a high
performing BI environment. Starving an APS for memory can cause out-of-memory errors or
significant performance degradation, due to the swapping of memory to disk.
If you have granted the APS memory up to maximum heap, you have made a significant
commitment. Take resizing out of the equation and right-size the JVM.
SAP JVM documentation states that concurrent mark sweep garbage collector is also
available for use with SAP JVM. For response-time-critical applications, the Parallel Old
garbage collector is recommended. For more information, refer to Additional Information
about Configuring a Deployment.
Starving the APS is the single worst performance problem in BI 4.x. Your system will either
crash with an out-of-memory error, or it will swap to disk, which causes performance to
suffer.
SAP JVM has been in use by SAP NetWeaver applications for many years, but it is relatively
new to SAP BusinessObjects customers.
Setting -Xms and -Xmx to the same value increases predictability by removing the most
important sizing decision from the virtual machine. On the other hand, the virtual machine
cannot compensate if you make a poor choice. By default, they are not set to the same value.
You can use performance tests to determine whether there are gains to be had from
switching to -XX:+UseConcMarkSweepGC. Key points of this switch are:
● The BI 4 Sizing Companion and System Configuration wizard give you a great starting
point for your deployment.
● Garbage Collection (GC) Logging, while extensive, does not cause a performance impact
because of buffering. However, it does create a lot of files on the file system.
● Version 4.3 uses the Parallel Old garbage collector, which is a big step up from the default
garbage collector that was used in 4.0. To enable this in 4.0, refer to Additional
Information about Configuring a Deployment.
There is a lot of documentation on sizing and configuring the SAP JVM. See Additional
Information about Configuring a Deployment.
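The heap-sizing advice above can be sketched as a set of Java options. This is a hedged illustration only: the 4g heap value and the variable name APS_JAVA_OPTS are placeholders, not SAP defaults, so derive real values from the BI 4 Sizing Companion and the System Configuration wizard.

```shell
# Hypothetical Java options for an APS instance (values are placeholders).
# Setting -Xms equal to -Xmx takes heap resizing out of the equation,
# as recommended above.
APS_JAVA_OPTS="-Xms4g -Xmx4g"

# Only after performance tests justify it, the concurrent mark sweep
# collector could be trialed by appending:
#   APS_JAVA_OPTS="$APS_JAVA_OPTS -XX:+UseConcMarkSweepGC"

echo "$APS_JAVA_OPTS"
```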
CMS Tuning
● A bottleneck that occurs in the CMS will cascade throughout the system and cause
performance problems.
● The CMS communicates with the database to update metadata and verify security.
● The CMS saves object metadata in cache.
● Queuing at the CMS level should not occur and can be prevented by an administrator’s
actions.
● CMS performance is affected by the size of its cache memory and by the number of
objects it holds in the cache.
The Central Management Server (CMS) is the brains of the entire BI system. A bottleneck at
the CMS level (or CMS database level) can impact every operation in the entire environment.
The CMS communicates with the database to update metadata and verify security. It uses a
pool of connections, which is configured in the CMC. The CMS must have enough database
connections to run queries in parallel. In older releases, this was set via the command line, but
now the setting can be configured in the CMC. The default setting is 14, but it can be set as
high as 50 per CMS.
Before changing this value, anyone who is using a system database such as Oracle or SQL
Server should speak with their database administrator to ensure that sufficient connections
are available for the CMS cluster.
The other property that impacts CMS performance is the number of objects that are held in
memory (cache). In prior versions of the product, this value was configured at 10,000, which
quickly resulted in the CMS swapping objects in and out of cache. In newer releases, such as
BI 4.3, the value defaults to 100,000. A higher value means that there is higher memory
consumption in the CMS.
You can check the metrics view of the CMS to identify how many objects are in the CMS
cache at any given time. This view will also show how many objects there are in total in the
system database. If you have few objects, it is unlikely that you will need to increase the
value for the CMS cache, but if you have a million (or more) objects in the CMS system
database, it is worth increasing the value. The setting can be configured on the command
line, for example: -maxobjectsincache 250000.
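As a sketch of that sizing decision, the rule of thumb could be expressed as follows. The object counts below are hypothetical; read the real figure from the CMS metrics view.

```shell
# Decide on a -maxobjectsincache value from the total object count in the
# CMS system database. The counts and the 250000 target are illustrative.
total_objects=1200000   # hypothetical figure from the CMS metrics view
default_cache=100000    # BI 4.3 default

if [ "$total_objects" -gt 1000000 ]; then
  echo "-maxobjectsincache 250000"
else
  echo "-maxobjectsincache $default_cache"
fi
```

Remember that a higher value also means higher memory consumption in the CMS.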
● You can study the system topology, network components, servers, and connections.
● You can follow the detailed, step-by-step setup and deployment instructions for each
software component.
● It details the multivariate considerations of the deployment, including hardware, operating
system, and software, all the way down to the smaller components of versions, patches,
user accounts, and network domains.
● It notes deployment failures and solutions to these failures.
For more information about the BI Pattern Books, refer to Additional Information about
Configuring a Deployment.
● Best Practices for SAPBO BI 4.0 Adaptive Processing Servers: On the SAP Community
Network, search for the title.
● Configuration and Setup of SAP JVM: On the SAP Help Portal, search for the title.
● Improve BI 4.0 P&R by using Java's Parallel Garbage Collector: On YouTube, search for
the title.
LESSON SUMMARY
You should now be able to:
● Identify factors to consider when configuring a deployment
Learning Assessment
X True
X False
3. How does SAP recommend clustering nodes in your solution, and why?
Choose the correct answer.
X A Nodes should be clustered horizontally, because this form of node clustering helps
improve the user experience.
X C Nodes should be clustered vertically, because this form of node clustering helps
improve the user experience.
4. How does SAP recommend splitting the tiers in your solution, and why?
Choose the correct answer.
X A Tiers should be split horizontally, because this configuration helps improve the
user experience.
X B Tiers should be split horizontally, because this configuration improves balance and
fault tolerance.
X C Tiers should be split vertically, because this configuration helps improve the user
experience.
X D Tiers should be split vertically, because this configuration improves balance and
fault tolerance.
X A To modify an installation file, the administrator makes changes directly in the file
located in the default installation directory.
X C The administrator should back up the entire system before attempting an upgrade.
6. Why would you increase the CMS connection pool and maximum number of objects in
CMS cache?
Choose the correct answers.
Lesson 1: Designing an SAP Business Intelligence Deployment
Lesson 2: Designing a Scalable System
Lesson 3: Preparing a Sizing of a BI Platform Deployment
UNIT OBJECTIVES
LESSON OVERVIEW
This lesson describes the role of system resources in the BI platform and the aspects of
system resources you should consider when you design a BI deployment.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe factors that influence BI platform deployment design
● Stresses on I/O are even more critical and harder to measure than the effects on CPU or
RAM.
● Aggregating millions of rows is very different from streaming transactions.
● Spiky loads make estimation even harder when considering a constant load versus peak
times.
● I/O concerns underscore the importance of understanding the workload before you start.
BI 4.x Architecture
BI 4.x is entirely different from 3.1. Note the following information:
● Scaling up has limits, but modern hardware is too powerful to host only single processes.
● Five instances of Web Intelligence on a machine might make sense, but watch out for
bottlenecks such as Disk I/O.
● If you schedule SAP Crystal Reports to run mostly at night, Crystal Reports job servers and
Processing servers might co-exist with Web Intelligence.
● CMS:
- Each instance of a CMS can handle 400 to 500 heavy users.
- Add another CMS for every 500 active concurrent users.
- The best practice is to maintain only one CMS on a physical system, which enables fault
tolerance and effective load balancing.
● FRS:
- FRS performance is dependent on disk I/O.
- Input FRS should be kept close to processing servers as a best practice.
- Setting Instance Limits is essential for optimal output FRS performance.
● File repository servers need to be hosted on an NFS that is physically near the BI Platform
(but not on it).
● Likewise, the CMS and audit databases should be physically close, but not on the BI
Platform.
● This configuration reduces the amount of parallel I/O on a single host by distributing the
handling of requests to other processors.
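The CMS guideline above (one additional CMS for every 500 active concurrent users, with one CMS per physical host) can be sketched as a quick estimate. The user count below is a made-up placeholder:

```shell
# Rough CMS count estimate from the ~500 active concurrent users per CMS
# rule of thumb described above. 1800 is an illustrative user count.
active_users=1800
per_cms=500

# Ceiling division: round up so peak load is covered.
cms_count=$(( (active_users + per_cms - 1) / per_cms ))
echo "$cms_count"
```

With 1800 active concurrent users this yields 4 CMS instances, one per physical host for fault tolerance and load balancing.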
Sizing and Deploying SAP BI 4 and SAP Lumira: On the SAP Community Network, search for
the title.
LESSON SUMMARY
You should now be able to:
● Describe factors that influence BI platform deployment design
LESSON OVERVIEW
This lesson describes the factors that influence scalability and capacity in a BI system and
discusses the design tradeoffs to consider when you design a BI system.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify factors that influence scalability and capacity
Scalability Definition
Scalability is the capacity of a system to address additional system load, by adding resources
without fundamentally altering the implementation architecture or implementation design.
Scalability Objectives
When building a scalable system, the main objective is to achieve a linear relationship
between the amount of resources added and the resulting increase in performance, while
maintaining the speed of the transactions after additional users are added and more requests
need to be processed.
Depending on your situation, you can run all services on one machine or you can run them on
separate machines. For example, you can run the Central Management Server and the Event
Server on one machine, while you run the Adaptive Processing Server on a separate machine.
The same service can also run in multiple instances on a single machine.
This section focuses on the different aspects of your system’s capacity, discusses the
relevant components, and provides a number of ways in which you can modify your
configuration accordingly.
information stored in the CMS database. When you cluster two CMS machines, you instruct
the new CMS to share in the task of maintaining and querying the CMS database.
Clustering helps improve system capacity in the following ways:
Note:
The Web Intelligence processing server is added because it is responsible for
scheduling the physical processing of a Web Intelligence document.
If the majority of your reports are scheduled to run on a regular basis, there are several
strategies you can adopt to maximize your system’s processing capacity:
● Install all Adaptive Job servers and Adaptive Processing servers in close proximity to (but
not on the same machine as) the database server against which the reports run. Ensure
that the File Repository servers are readily accessible to the installed services (so you can
send a read report objects request from the Input File Repository server, and send a
request to write report instances to the Output File Repository server, quickly). Depending
on your network configuration, these strategies may improve the processing speed of your
scheduled documents, as there is less distance for data to travel over your corporate
network.
● Verify the efficiency of your reports. When designing reports in Crystal Reports, there are a
number of ways in which you can improve the performance of the report itself. You can do
this by modifying record selection formulas, using the database server’s resources to
group data, incorporating parameter fields, and so on.
● Use event-based scheduling to create dependencies between large or complex reports.
For instance, if you run several very complex reports on a nightly basis, you can use
Schedule events to ensure that the reports are processed sequentially. This is a useful way
of minimizing the processing load that your database server is subject to at any time.
● If some reports are much larger or more complex than others, consider distributing the
processing load through the use of server groups. For instance, you might create two
server groups, each containing one or more Adaptive Job servers. When you schedule
recurrent reports, you can specify that it be processed by a particular server group to
ensure that especially large reports are executed by more powerful servers.
● Increase the hardware resources that are available to your scheduling services. If the
server is currently running on a machine along with other SAP BusinessObjects BI platform
components, consider moving the server to a dedicated machine. If the new machine has
multiple CPUs, you can install multiple Adaptive Job and Adaptive Processing servers on
the same machine (typically no more than one service per CPU).
performs. If the Web server is limiting web response speeds, consider increasing the Web
server’s hardware, setting up a “Web farm” (multiple Web servers responding to Web
requests to a single IP address), or both.
● If Web response speeds are slowed only by report viewing activities, you can increase
scheduled reporting capacity and on-demand viewing capacity.
● Take into account the number of users who regularly access your system. If you are
running a large deployment, ensure that you have set up a CMS cluster.
If you find that a single application server (for example, the Tomcat Java Web Application
Server) inadequately serves the number of scripting requests made by users who access your
system on a regular basis, consider the following options:
● Increase the hardware resources that are available to the application server. If the
application server is currently running on the Web server or on a single machine with other
SAP BusinessObjects BI platform components, consider moving the application server to a
dedicated machine.
● Consider setting up two (or more) Java application servers. Consult the documentation for
your Java Web Application server for information on load-balancing, clustering, and
scalability.
● Before you expand your SAP BusinessObjects BI platform, consider some of the following
common aspects of your deployment:
- Assess the Web server’s ability to serve the number of users who regularly connect to
the platform.
- For a large deployment, ensure that you have set up a CMS cluster.
- Determine if an application server is not adequately serving user requests.
- Assess the ability of your system to overcome failure during outages.
Note:
If your deployment requires a higher level of failover and fault tolerance, exceeds
three or more servers, or serves a large number of users who would be severely
affected by any system outage, it is strongly advised that you distribute the load of
the application tier over multiple application servers balanced through a hardware
load balancer. Each application server will connect to any CMS service running in
the cluster, so the failure of one application server would only affect the
application tier, not the processing tier.
Note:
For proxy load balancing, just as with hardware-based load balancing, the use of
sticky sessions is required.
If you are developing your own custom desktops or administrative tools with the SAP
BusinessObjects BI platform SDKs, be sure to review the libraries and APIs. You can, for
instance, incorporate complete security and scheduling options into your own Web
applications. You can also modify server settings from within your own code to further
integrate SAP BusinessObjects BI platform with your existing intranet tools and overall
reporting environment. In addition, be sure to check the developer documentation available
on your SAP BusinessObjects BI platform product CD for performance tips and other
scalability considerations. The query optimization section in particular provides some
preliminary steps to ensure that custom applications make efficient use of the query
language.
Web farms are not supported by the SAP BusinessObjects BI platform.
Primary Sizing and Deployment Resources for SAP BI 4: On the SAP Community Network,
search for Sizing and Deploying SAP BI 4 and SAP Lumira.
LESSON SUMMARY
You should now be able to:
● Identify factors that influence scalability and capacity
LESSON OVERVIEW
This lesson describes the details of sizing a BI platform deployment. It discusses sizing
considerations for each type of server at each tier in the BI platform.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Define the sizing process
● Analyze the sizing capacity of system servers
● Analyze the sizing requirements of system components
● Describe factors that influence scaling
● Identify the uses of Configuration Intelligence
● Identify the uses of the BI 4 Sizing Guide
● Identify the uses of BI Virtualization
● Identify backend BW systems sizing guidelines
● Perform an initial sizing of a deployment
● Sizings for versions 3.1 and earlier were conducted using CPU units. As of BI 4.0, CPU units
are deprecated and are no longer used for sizing BI 4.0 and later deployments.
● Using SAPS to provide sizing estimates includes the following benefits:
- Each hardware partner generates the SAPS rating for the systems that they sell. SAPS
ratings, therefore, are always current; there is built-in anti-obsolescence.
- SAPS is the unit of measurement used in sizing and is independent of operating
systems.
- Using SAPS for sizing is now consistent with how SAP provides sizing information for
other products like SAP Business Suite.
The SAP SD Standard Application Benchmark Results provide more information about
correctly converting the SAPS into core CPU. Refer to Additional Information about the Sizing
Process to access the information online.
The conversion of SAPS to Core CPU is totally dependent on the hardware used. Your
hardware vendor can also provide the right conversion ratio for their hardware.
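That conversion can be sketched as follows. The SAPS-per-core ratio is entirely hardware dependent; 2000 below is a made-up figure, so obtain the real ratio from your hardware vendor or the SAP SD Standard Application Benchmark Results.

```shell
# Convert a SAPS requirement from the Sizing Estimator into a core
# estimate. Both numbers are illustrative placeholders.
required_saps=24000
saps_per_core=2000   # hypothetical rating for one core of the target hardware

cores=$(( (required_saps + saps_per_core - 1) / saps_per_core ))
echo "$cores cores"
```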
1. Determine the Active Concurrent Users for each of the BI 4.x Client tools. Examples of
BI 4.x Client tools include Analysis, edition for OLAP, BI Workspace, Crystal Reports
2013, Crystal Reports Enterprise, Dashboards (Dashboard Design), and Web
Intelligence.
2. Work with SAP certified hardware partners using SAPS to secure the required
resources, following the detailed description and guidelines for BI 4.x Services.
Note:
An Active Concurrent User is a user who is actively executing a workflow. In the
benchmarks used to create this standard, all users are active concurrent. No
user is idle, that is, sitting at the BI Launch Pad home page.
Deployment Mapping
Initial Sizing and Mapping
The following are considerations when carrying out the initial sizing and mapping:
● Use the BI 4 Sizing Estimator and Companion Guide to create an initial sizing calculation
using the Sizing Estimator.
● Map out the deployment, noting the following information:
- CMS database is the key to overall performance and scalability of BI Platform.
- Use a dedicated CMS database on its own hardware to ensure performance.
- Monitor individual server tiers to identify bottlenecks.
Scaling Approach
Considerations When Scaling
When scaling out your system, consider the following approach:
● Tomcat 9: Change the JVM heap size in the Java Options box. Add the line
'-XX:MaxPermSize=256m'. Fill in the Maximum memory pool field with the host machine's
memory (for example, 1024).
● SAP NetWeaver Java Application Server: Use the default settings for SAP NetWeaver.
● Lock
Locks are used to control concurrency in a database. When a process locks a row or table,
no other process can access it until the lock is released. Lock granularity is the level at
which a lock is requested (that is, row or table level).
● Deadlock
A deadlock occurs when two or more processes conflict in such a way that each must wait
for the other to release its lock to complete.
● Lock wait
A lock wait occurs when a process must wait for another process to release a lock on a row
or table.
● Lock escalation
A lock escalation occurs when the number of locks on rows or tables in the database
equals the maximum number of locks specified in the database configuration settings.
Note:
For additional information, refer to Additional Information about Analyzing the
Sizing Capacity of Server.
It is important to avoid:
● Disk access being unable to keep up with the level of I/O that the database server
requests.
● Long disk read or disk write queues.
● Unnecessary SQL statement compilations or long SQL statement compilation times.
No lock escalations are achieved by:
● Setting the lock granularity of your database to row level.
Hint:
Keep in mind that Quick Sizer is a tool for initial sizings only. Post going-live
sizings are out-of-scope, such as upgrade or resizing (except delta sizing of a
new SAP solution), configuration, landscaping, and customer coding.
Note:
For more information about the principles of sizing and the theoretical
background of SAP sizing initiative, see the background sizing document at:
service.sap.com/sizing → Sizing Guidelines → General Sizing.
● It processes the CMC requests to start, stop, monitor, and manage all servers on the node.
● It also monitors potential problems and automatically restarts servers that have shut down
unexpectedly.
● The SIA ensures optimal performance by continually monitoring server status information,
which is stored in the CMS database.
● When you change server settings or add a new server in the CMC, the CMS notifies the
SIA, and the SIA performs the task.
● Only RAM needs to be considered to maintain the execution of the SIA. The
recommended RAM is 350 MB.
General Considerations
The CMS manages many areas. The state of various activities persists in the CMS repository.
This is what enables multiple CMSs to run in a cluster, as well as providing for fault tolerance.
The CMS depends heavily on the repository for the proper functioning of BI 4.0. Therefore, it
is imperative that the DBMS, where the repository resides, is tuned and running on a machine
with sufficient capacity (memory, CPU, fast disk, and network bandwidth) and reliability.
Stated another way, if access to the repository database is slow or the reliability is in question,
then the BI 4.0 deployment will not deliver maximum performance and reliability. This also
means network throughput between any node running a CMS and the repository database
must not endure the effects of other heavy traffic (such as backups running during production
hours). This cannot be overstated.
Note:
The recommended maximum number of CMS services in a cluster is 4. The best
balance of CMS performance and network traffic occurs with this number.
A cluster that has two or more CMS members on different subnets is technically possible and
has been QA tested. This configuration is supported by SAP, strictly provided that the
additional subnet does not create significant additional network latency.
The most important factor to ensure efficient CMS clustering performance is to eliminate
excessive latency between CMS services and the CMS database. For example, CMS1 and the
CMS system database are located in the same data center in New York. CMS2 is a member of
the same cluster as CMS1, but it is located in China and must communicate with the CMS
database in New York. Excessive network latency of CMS2 in China to the CMS database in
New York would be problematic.
Make sure that all CMS cluster members have uniform communication speeds to the system
database.
Note:
For best performance, run each CMS cluster member on a machine that has the
same type of CPU, same operating system, same amount of memory and so on.
The machines should be as similar as possible to ensure uniform performance.
Processor Requirements
The number of cores (and, by association, SAPS) required to support CMS services is highly
dependent on the type of CMS activity. For example, large updates to the CMS system
database (that is, adding or deleting many users; viewing or querying many objects) will use
significant CPU time. For increased CMS throughput and response times, allocate additional
CPU resources.
Memory Requirements
For best performance, run each CMS cluster member on a machine that has the same
amount of memory. Memory usage is controlled (roughly) by the number of objects stored in
the object cache. You can accomplish this by using the -MaxObjectsInCache command in the
properties of the CMS. You can set the desired number of InfoObjects to be cached. It is
recommended that you set it to 500000 to 1000000.
Note:
The official syntax would be -MaxObjectsInCache 500000.
This specifies the maximum number of objects that the CMS stores in its memory cache.
Increasing the number of objects reduces the number of database calls required and greatly
improves CMS performance. The memory restrictions that existed in earlier versions with
respect to the number of objects in cache have been removed.
Note:
For parameters that affect the caching of user data, make sure that the interval
length is no longer than the shortest interval for refresh of any referenced data
source. The oldest on-demand data settings will ensure that data is kept in cache
no longer than the set interval, no matter how often the cached data is referenced.
Make sure that any properties that apply to both the SAP Crystal Reports Cache server and
SAP Crystal Reports Processing server are set to the same respective value.
Disk Requirements
For the Cache server service, it is recommended that sufficient hard drive disk space be
available for the generation of cache files, as well as in the temp directory for the creation of
temporary files.
These files are published to the system by administrators or end users. The Publishing
Wizard, the Central Management Console, or an SAP BusinessObjects designer component,
such as SAP Crystal Reports or the Web Intelligence Java or HTML Report Panels, is used to
facilitate publishing.
The Output FRS maintains all the instances that have been produced from reports (SAP
Crystal Reports or Web Intelligence), programs, and object packages that have been
scheduled.
● The FRSs require higher I/O resources (faster disks and network) and fewer CPU
resources.
● The first available FRS is active. All other FRSs are passive, unless the active FRS becomes
unavailable.
● FRSs have little impact on system memory.
● Enough disk space must be available to store files.
Repository Location
You could have multiple FRS (input) and FRS (output) services on one or several different
machines to support a high-availability environment. However, the FRS services will behave in
an active or passive fashion, where the first available FRS will be active and all other FRS
services will remain passive, unless the active FRS becomes unavailable. The Input and
Output repositories do not have to reside on the same machine. The location of the FRS
repositories is managed through the CMC in the Servers section under the Properties tab.
Maximum Idle Time (Minutes): This setting limits the length of time that the server waits
before it closes an inactive connection. Default value is 20.
Note: Setting a value too low can cause a user request to be closed prematurely. Setting a
value too high can cause excessive use of system resources such as processing and disk
space.
Processor Requirements
The FRSs require higher I/O resources (faster disk, network) and fewer CPU resources. When
estimating resource requirements in the SAP BusinessObjects BI platform, the FRSs are not
considered.
Memory Requirements
The FRSs have little impact on system memory.
Disk Requirements
Enough disk space must be available to store files. We recommend a fast disk for both the
Input and Output FRS file systems, especially for high-volume publications. Typically, the
Output FRS requires more disk space than the Input FRS. The Output FRS maintains all the
instances (with saved data) that have been produced from reports (Crystal or Web
Intelligence), programs, and object packages that have been scheduled. As a result, it
requires proportionately more disk space. For both the Input and Output FRS, the amount of
space required varies from system to system; however, knowing the average file size and
multiplying it by the number of projected instances assists in estimating total disk needs.
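The estimation rule above (average file size multiplied by the number of projected instances) can be sketched as a quick calculation. This is an illustrative helper, not an SAP tool; the function name, the headroom multiplier, and the sample figures are assumptions:

```python
def estimate_frs_disk_gb(avg_file_size_mb: float, projected_instances: int,
                         headroom: float = 1.2) -> float:
    """Estimate disk needs for an FRS file store.

    avg_file_size_mb    -- average size of a report/instance file, in MB
    projected_instances -- number of files or instances expected to be kept
    headroom            -- safety multiplier for growth (assumed value)
    """
    return avg_file_size_mb * projected_instances * headroom / 1024  # GB

# Assumed example: 5 MB average instance size, 40,000 retained instances
print(round(estimate_frs_disk_gb(5, 40_000), 1))  # 234.4 (GB)
```

The same calculation would be run separately for the Input and Output FRS, since the Output FRS typically holds many more files.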
Processor Requirements
The Web Intelligence Job Service has a comparable function to the SAP Crystal Reports Job
Service in that it is responsible for handling Scheduled Jobs. However, the Web Intelligence
Job Service does not actually process reports. It only acts as a scheduling manager or router,
sending jobs to be processed by the Web Intelligence Processing Server.
Disk Requirements
For the Web Intelligence Job Server service, sufficient hard drive disk space should be
available in the temp directory for the creation of temporary files during report processing.
The data from the database server is stored in these files until it can be saved and
compressed in the report.
● The following are requirements for the AJS SAP Crystal Reports job service:
- Memory Requirements
Depending on the design of a report and the number of records retrieved from the
database, memory requirements can vary. When a report is executing and loaded into
memory, the report is decompressed and can expand up to as much as 40 times the
original report file size (with saved data and retrieved records).
- Disk Requirements
For an SAP Crystal Reports job service, sufficient hard drive disk space should be
available in the temp directory for the creation of temporary files during report
processing. The data from the database server is stored in these files until it can be
saved and compressed in the report. Hard-drive access speed to the temp directory
could have an impact on the speed at which a report processes.
● An AJS can host one or more services. The following services are some that might be
included:
- Destination Delivery Scheduling Service
- Publication Scheduling Service
Note:
The DSL Bridge Service that runs in the APS is used by any BI 4.3 client tool that
accesses BW data via the transient universe. It is important to note that the APS
and AJS are not the same as the Java components of SAP Crystal Reports
Enterprise and the other products (such as the SAP Crystal Reports Processing
Server).
-Xmx1024M: This command line flag will increase the server heap size, enabling the server to
handle more requests. A 1 GB heap size is recommended unless otherwise noted (check the
specific sections below for exceptions). Ensure that you have sufficient free memory at peak
before increasing this value. Note that certain clients, like Analysis edition for OLAP, may
recommend a 4 GB heap size for an APS running the MDAS service.
For APS instances running publication services, SAP recommends having 1 APS instance for
every 3 concurrent publications.
For small deployments (running a dozen or fewer users) like development systems, small
demo systems or sandbox systems where performance is not critical, you can run multiple
types of services in a single APS.
For medium to large deployments, it is a general recommendation that an APS instance hosts
only one service for those services that are critical to performance, and that you will need to
scale out as capacity is added. For example, you would not have one APS instance hosting the
CVOM and DSL Session Service — you would have at least one for each. The memory
requirements of some of the services hosted by the APS can be large. By separating services
across APS instances (and nodes), you gain better manageability for memory and scalability
considerations.
● The following are requirements for the Web Intelligence Processing Server:
- Processor Requirements
The Web Intelligence Processing Server will expand to as many CPUs as needed by
scaling-up the number of Web Intelligence Server services. It can also process several
documents in parallel, so only one Web Intelligence report server is required per
machine as a starting point.
- Memory Requirements
Depending on the design of a report, the types of actions being performed (view,
modify, refresh) and the memory requirements will vary. A refresh request demands
the greatest amount of memory for a Web Intelligence document, because the
database is queried and the entire data set will be transferred to the Web Intelligence
server.
When using several large documents, it is not necessary to increase the number of Web
Intelligence Processing Server instances to more than one per machine, because in BI
4.3 the platform and all services are native 64 bit. You can create more than one Web
Intelligence Processing Server instance per machine for tuning reasons not related to
memory.
Load balancing between the Web Intelligence Processing servers is handled by the CMS
services automatically.
The SAP Crystal Reports Enterprise Processing Server runs in a Java component. This means
additional tuning can be applied to the SAP Crystal Reports Enterprise Processing Server.
Both processing servers are primarily responsible for responding to page requests, by
processing SAP Crystal Reports and generating Encapsulated Page Format (EPF) pages. A
single .epf file represents one page of an SAP Crystal report. The processing server retrieves
data for the report from the latest instance or directly from the database (depending on the
user request and user security level).
Specifically, both processing servers respond to page requests made by the cache server.
The processing server and cache server interact closely, so cached EPF pages are reused as
frequently as possible, and new pages are generated as soon as they are required. The BI
Launchpad portal takes advantage of this behavior by ensuring that most report-viewing
requests are made to the cache server and processing server. However, if a user's default
viewer is the interactive DHTML viewer, the Report Application Server processes the report.
The SAP Crystal Reports 2020 Processing Server creates processing server subprocesses.
Each subprocess loads CRPE and then initiates threads or print jobs as needed. With the SAP
Crystal Reports 2020 Processing Server, if an individual print job were to fail for any reason,
only those threads contained in the processing server sub-process would be affected. All
other subprocesses within the processing server service would be unaffected. Additionally,
individual subprocesses end after a set number of requests, and a new subprocess starts if
required, so as to maximize resource management.
Idle job timeout (Minutes): Specifies the length of time that the CR Processing Server waits
between requests for a given job. Default value is 60 min.
Browse Data Size (records): Specifies the number of distinct records returned from the
database when browsing through a particular field value. Default value is 100 records.
● Performance factors that are not specific to Analysis edition for OLAP:
- Network bandwidth and quality
- Other enabled services that would have a small effect on performance (such as
monitoring or auditing)
● Performance factors that are specific to Analysis edition for OLAP:
- Hardware settings (HDD RAID settings, memory tuning)
Testing has shown a practical limit of 45 users per machine. This is due to a number of
factors, including the memory needed to support 45 users (for the MDAS instances alone this
is 12 GB). Supporting 45 users implies three MDAS instances (3 MDAS instances * 15 users
recommended max/MDAS instance). Keep in mind it is better to scale out than up for BI 4.3.
You have more flexibility in terms of allocating services with more nodes, not less.
So, if you wanted to have 3 MDAS instances running, that means you have 3 APS instances
that are only running the MDAS service.
An alternate way to configure for 45 users would be to run one APS running the MDAS service
supporting 45 sessions. Testing shows this performs better, which is likely due to the
elimination of load balancing. However, as we recommend 4 GB heap size per 15 users, that
scale would mean an APS with a heap size of 12 GB.
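The arithmetic behind these figures can be made explicit. A minimal sketch using the values stated above (15 users per MDAS instance, 4 GB of APS heap per 15 users); the function name is illustrative:

```python
import math

USERS_PER_MDAS = 15   # recommended maximum users per MDAS instance (from the text)
HEAP_GB_PER_MDAS = 4  # recommended APS heap per MDAS instance (from the text)

def mdas_plan(concurrent_users: int) -> tuple[int, int]:
    """Return (MDAS instances needed, total APS heap in GB)."""
    instances = math.ceil(concurrent_users / USERS_PER_MDAS)
    return instances, instances * HEAP_GB_PER_MDAS

print(mdas_plan(45))  # (3, 12) -- matches the 3 instances / 12 GB figures above
```

Whether those three instances run in three scaled-out APSs or one large APS is the trade-off discussed in the surrounding text.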
Connection Server
Connection Server Considerations
Event Server
Event Server Considerations
The Event Server under normal enterprise usage is not a processing or memory intensive
server and as such will not be weighted in the sizing process. If event server functionality is
required, allocate this service into the system, but do not estimate any additional cores for
this service.
Additional Servers
Search Service
- Memory: 500MB
- Disk space: same size as FRS
● The client auditing proxy service manages the logging of events of all clients.
- Load: N/A
- Memory: 500 MB
- Disk space: N/A
● To make the publishing service available, host the publishing post processing service,
publishing service, and the destination delivery service.
● The program scheduling service manages scheduled executable objects. Memory and disk
space consumption varies depending on individual executable runtime requirements.
BI Workspaces
The tables below assume that all BI 4.3 services are deployed on the same machine and that
the only load on the machine is from BI workspace scenarios. Note that the scenarios run at
the suite level differ slightly from the workflows that form the basis of these observations.
The table presents the BI workspace parameters.
CPU: Most of the BI workspace code is now Java code being run by the Java Application
Server (Tomcat). It is best to have one Java AS instance per SAP BusinessObjects Enterprise
instance in a clustered installation.
Memory: It is critical that there is enough RAM available to the server instance so as not to
require paging. If the Java AS is configured with a heap size of 2 GB, the operating system
requires 1 GB, and other processes require 2 GB, then 6 GB of RAM is installed on the server
to preserve a buffer for memory usage. Our load tests show that about 500 MB of memory is
recycled per hour during load. If possible, a heap size of 2 GB is allocated to the Java AS, and
another 2 GB is reserved for use by the C++ processes, such as the CMS.
Sizing/Scale: N/A
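The memory budgeting described for the Java AS (heap plus operating system plus other processes, plus a buffer against paging) is a simple sum. A sketch of that arithmetic; the 1 GB buffer default is an assumption derived from the example figures (2 + 1 + 2 = 5 GB of commitments on a 6 GB server leaves 1 GB spare):

```python
def required_ram_gb(java_heap_gb: float, os_gb: float,
                    other_processes_gb: float, buffer_gb: float = 1.0) -> float:
    """Sum the memory commitments and add a buffer to avoid paging."""
    return java_heap_gb + os_gb + other_processes_gb + buffer_gb

# Example from the table: 2 GB heap + 1 GB OS + 2 GB other processes + 1 GB buffer
print(required_ram_gb(2, 1, 2))  # 6.0
```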
Processing Architecture
Consideration When Analyzing the Processing Architecture
● When analyzing the processing architecture, you should ask the following questions:
- Do you have enough CPU power beyond SAPS?
- Are you set to properly scale your systems out?
- Are your processes distributed across nodes?
It is very important to distribute all the processes across the nodes.
Scheduling and publishing are I/O intensive. A machine with fast I/O for disk, and networking
to a SAN, is important for the location of the FRS folders.
WEBI processing server is an I/O intensive process. Carefully consider I/O while sizing for the
WEBI processing server.
Note:
SAP strongly recommends that BI virtual machines have reservations for the
memory and CPUs assigned to them.
BI is a bursty application workload. It can be very I/O intensive when heavily loaded by users.
When experiencing maximum load as defined by your sizing, all parts of the system can go
from moderately loaded to very busy momentarily. If the BI hardware is being shared with
other workloads in a virtualized environment, the response to bursts in activity can result in
slow-downs between dependent BI services and ultimately end-user response time.
It is recommended that you build your system to target 65% utilization so that bursts in
activity can be handled. Performance of both physical and virtual systems degrades when
system utilization is greater than 80%.
If you are unable to secure memory and CPU reservations for your virtualized deployment, it
is recommended that additional scale-outs be done to provide adequate BI processing
resources for peak periods. The amount of additional scaling out will depend on your IT
infrastructure and performance policies.
The goal of sizing BI 4.3 is to have a deployment that can handle the workload needed and
provide a good response time for end-users. If you deploy to a shared virtualized
environment, extra care is needed to ensure your sizing exercise results in a system which
has a service level and responsiveness that can be maintained.
One of the only things you can control is your own architecture.
The best and only thing you can really do is benchmark a series of loads and continue to test
over time. The testing will not show root causes, but it will help determine if there is a problem
in the first place.
● Use the following best practices for sizing virtual environments using VMware:
- Use strict CPU reservations for each BI VM on every host (for example, vCPU to pCPU
relationship).
- Use strict “memory reservations,” for the full amount, for each BI VM on every host (for
example, RAM or heap pre-allocation). This is especially true for Java applications.
- Do not use shares, limits, affinity, or other artificial mechanisms to divide VM resources
on the host.
- To avoid excessive swapping, size VMs large enough (for example, CPU and RAM) in
anticipation of very I/O intensive BI workloads.
- Use more, smaller VMs rather than a few very large (greater than 16 CPU) nodes.
The Adaptive Processing Server (APS) is a generic server that hosts multiple services
responsible for processing various requests. The APS is part of the processing tier. By default,
the BI 4.x platform contains one APS per host system.
SAP BI 4.x: Not a technical upgrade from SAP BusinessObjects Enterprise 3.1 or XI R2
BI 4.3 is designed to take advantage of modern hardware and RAM. It uses 64-bit addressing,
unlike SAP BusinessObjects Enterprise 3.1, which was designed to squeeze the whole suite
within a 32-bit architecture.
BI 4.3 is architecturally different from 3.1. Whereas SAP BusinessObjects Enterprise 3.1 was a
collection of applications with their own connectivity stacks, BI 4 components share a new
common semantic layer for data connectivity.
BI 4.3 is bigger than SAP BusinessObjects Enterprise 3.1, because it includes new services
and applications. It is designed for modern infrastructure and does not run on the same
hardware. As a first-class system, it is a highly integrated SAP client for BI. New components
on the system include analysis functions, new monitoring capabilities, native SAP BW
connectivity, and others.
Note:
The BI Sizing Guide and BI Platform Installation Guide contain detailed technical
information about specific services critical to configure and size correctly. See
Additional Information about Using Configuration Intelligence.
XL Template Usage
The following are XL template usage considerations when upgrading:
Deployment Templates
The following are deployment template considerations when sizing:
Note:
The DeploymentTemplates.pdf file describes the deployment templates in detail
(see Additional Information about Configuration Intelligence). The templates do
not specify the number of users that can be supported, because the number is
dependent on load. You should perform system sizing to determine the number of
users you will need to support and the system resources required for that support,
including RAM, CPU, and other resources.
Sizing Methodology
Factors Affecting Sizing
Sizing a BI 4.3 deployment requires a reasonable degree of planning, so that calculations and
predictions can be made about the needs of the system. The number of users and the needs
of those users can be used to predict system loads. The types of data sources used have an
effect on the load and needs of the system. The needs of the users include the BI services that
they need to use. Some users use a lot of services, some use just a few. Some reports are
scheduled to be processed at night and viewed during the day. Some reports need to be
refreshed when viewed, which causes more load as the number of users increases.
After the user requirements are defined, the system can be defined so that it will support the
amount of processing required. The final step in sizing is to apply it to the hardware
landscape. Deployment hardware can range from many small machines to one large machine.
The sizing exercise includes the allocation of BI services to the nodes in the system, taking
into account the CPU, memory, disk, and network capabilities of nodes to be used in the
construction of the system landscape.
Note:
You must determine the SAPS rating of the machines that will make up your
deployment. You cannot assume any SAPS rating from this or any other
document.
Determining this data accurately is the most important part of the sizing effort, because all of
the sizing calculations derive from this information.
Users: You need to determine how many information consumers, business users, and expert
users of each type of BI tool there are. An average user workflow is also important to know. If
you expect users to open five BI documents and refresh them all at the same time, that’s five
times the load of one user. The entire load needs to be accounted for because the system
needs to handle that load.
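The "five documents means five times the load" rule generalizes to a simple multiplication over user groups. An illustrative sketch; the group names, counts, and load-unit model are assumptions for the example only:

```python
def total_load_units(groups: dict[str, tuple[int, float]]) -> float:
    """Sum load across user groups.

    groups maps a group name to (number of users, documents opened and
    refreshed concurrently per user); each refresh counts as one load unit.
    """
    return sum(users * docs for users, docs in groups.values())

# Assumed example: 100 consumers refreshing 1 document each,
# 10 expert users refreshing 5 documents each
print(total_load_units({"consumers": (100, 1), "experts": (10, 5)}))  # 150
```

The system must be sized for the total load units at peak, not merely the head count of users.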
It is very important to know if the user workflow is going to include refreshing reports and how
frequently those reports will be refreshed. You will need to determine if the reports will be
scheduled to run at night, and you will need to know if users are only viewing reports or if they
are refreshing reports frequently. This information is an important part of load prediction and
the sizing estimation.
Data Sources: You need to determine which data sources will be used: direct-access SQL
databases, UNV Universes, UNX Universes, SAP BW, BW on SAP HANA, or HANA (Direct
Connectivity). It can be a mix of sources. It’s important to know which types of data sources
will be used for the majority of BI processing so that peak load can be predicted. It is also
important to consider that some customers expect their mix of data sources to change over
time.
Document Size: The relative size of each document is important to know. Will most
documents be small and cause little impact on the system or will they be mostly large and
require a lot of processing? This should be determined for each BI tool expected to be used.
The values provided by the Resource Usage Estimator are based on internal tests. They are
only estimates and should not be used as simple predictions of performance or deployment
guidance for your project. The Resource Usage Estimator and the Sizing Guide must be used
together to provide effective sizing guidance.
Initialize the Resource Usage Estimator with the number of users of each type for each type of
BI tool. Set the report size sliders to reflect the sizes and types of documents that will be
processed by the system.
● Consider the following points when you use the Resource Usage Estimator:
- BI 4 Resource Usage Estimator is an estimator, not an analyzer.
- It is only a starting point for your calculations.
- Outputs are not numbers you can blindly deploy with.
- This tool is not a replacement for expert advice.
● Resource Usage Estimator provides SAPS (CPU) and Memory for each of the following BI
4 Tiers:
- Web and Application Tiers (Tomcat or any other Web Application Server)
- Intelligence Database Tier (CMS database)
This only provides what the BI platform will consume. For the database itself, hardware
resource estimation has to be added separately.
- Intelligence Tier
- Processing Tier
● BI 4 Resource Usage Estimator does not include sizing for Explorer, Mobile, or Live Office.
Note:
The RAM calculation is for deployment on one large machine. Scale-out to smaller
16GB or similar machines might be required, depending on the customer’s
hardware availability.
Overview of BI Virtualization
BI 4.3 Functionality
SAP BusinessObjects Business Intelligence suite (SAP BusinessObjects BI 4.3) is a flexible
and scalable solution that provides the full spectrum of BI functionality, including reporting
and analysis, data exploration, dashboards, and self-service BI. It also gives IT departments a
flexible means to share BI content throughout the entire organization, empowering business
users to make effective, informed decisions using self-service access to information.
Deployable on physical, virtual, and cloud environments, SAP BusinessObjects BI solutions
are flexible enough to best fit the unique needs of each organization while providing a
complete enterprise BI solution. Customers can decide faster, perform better, and achieve
superior results throughout all areas of their business.
The solution is a multi-tier, server-based product that comprises a number of logical servers.
Each report format has its own server modules that are controlled through a single
management interface. These servers run as separate processes and they can all be installed
on one machine or distributed across multiple machines, with multiple instances of individual
servers able to run on each host. This allows for a very flexible and scalable architecture, with
servers dedicated to specific processes and tasks, and of virtually any size, depending on how
many concurrent users the environment needs to support.
Customer Responsibilities
Many customers with physical deployments who experience performance problems typically
have not allocated enough dedicated hardware resources or have designed a substandard BI
system architecture. Having either of these problems in a virtual deployment can have more
performance implications and the effects will be far more noticeable.
In the vast majority of cases, performance problems blamed on virtualization are usually a
result of poor design decisions a customer has made and not virtualization or the SAP
BusinessObjects BI suite.
BI Virtualization Features
Considerations for BI Virtualization
● All instructions for a physical environment are 100% applicable to virtualized BI, including
the following:
- There is no evidence that deviating from physical tuning within the VM is a good idea.
- You must properly configure the VM itself and the hypervisor.
- You must ensure that you use memory and CPU reservations, otherwise your sizing
might not be valid.
- You should critically evaluate or disregard non-BI virtualization guidance.
- Many SAP Notes and white papers are inapplicable, because they are based on SAP
NetWeaver.
- Some best practices for Java in virtualized environments can be detrimental to BI.
For more information, refer to Additional Information about Using BI Virtualization.
Design and ensure a good system architecture before performing optimizations at any level.
● Virtualization gives you the freedom to make a better architecture than typically possible
in a physical deployment.
● Optimization cannot compensate for poor system design. Poor designs will suffer the
same bottlenecks on physical systems as well.
Follow guidance specifically for SAP BusinessObjects BI 4.3 and disregard previous studies.
● Disregard virtualization guidance for SAP software unless it is specifically for SAP
BusinessObjects BI 4.3.
● Disregard previous SAP and non-SAP virtualization studies for BI as they might be based
on previous versions.
Review the configuration of each level of your hardware and software stack.
● Ensure configurations at all levels have sensible values.
● Use defaults for each stack element, unless following guidance by the specific vendor for
that element.
● Always use the latest versions of all components whenever possible to take advantage of
performance improvements.
Use strict CPU (processing power) reservations for each BI VM on every host.
● VMs without CPU reservations may share CPU time with other processor-intensive guests
and negatively impact performance.
● All standard BI sizing and performance expectations are invalid if CPU reservations are
not strictly enforced.
Use strict memory reservations for each BI VM on every host.
● VMs without memory reservations may be forced to page to disk if the host experiences
memory pressure.
● All standard BI sizing and performance expectations are invalid if memory reservations
are not strictly enforced.
Do not use shares, limits, affinity, or other artificial mechanisms to divide VM resources on
the host.
● Any mechanism that divides or shares resources allows the host to provide less than the
promised resources required for optimal performance.
● All standard BI sizing and performance expectations are invalid if resource sharing
features are employed.
Size VMs large enough for BI workloads.
● Size your VMs the same way you would on physical nodes. If your IT department does not
allow for large enough VMs, seek exceptions or expect the system to struggle.
● Ensure your VMs at least meet the hardware minimum requirements as stated in the SAP
BusinessObjects BI 4 Sizing Guide [Ref. 2].
Use more, smaller VMs rather than a few very large (>16 CPU) nodes.
● Split very large VMs into smaller ones to better monitor and manage resource usage.
● Individual VMs still have inherent bottlenecks regardless of how powerful or well
provisioned the host is.
Always deploy VMtools in every guest.
● VMtools provide the mechanism for the host and guest to work efficiently together.
● Without VMtools, the system might still run, but all standard BI sizing and performance
expectations should be considered invalid.
Use strict resource reservations and other recommendations from this guide to ensure your
BI sizing is valid.
● Minimize virtualization overhead to avoid invalidating your BI sizing.
● Conservatively factor in some overhead to provide a performance buffer.
Use hypervisor-level monitoring tools.
● Guest-level monitoring tools may not be hypervisor aware and may not report the true
system statistics.
● Work with your IT department to get access to hypervisor-level tools for monitoring, or
work with them to do a performance analysis using tools like vCenter or "esxtop."
Ensure your landscape reflects your SAP software license and virtualization rights.
● Maintaining license compliance can affect how you deploy in a virtual environment.
● Ensure you review your specific virtualization rights. If you have CPU-based licensing,
ensure you are calculating the host's cores correctly and getting full value from each one
through reservations.
Evaluating Selected Java Best Practices for SAP BusinessObjects Business Intelligence 4 on
vSphere: On the VMware website, search for the title.
Sizing and Deploying SAP BI 4 and SAP Lumira: On the SAP Community Network, search for
the title.
Lock
Locks are used to control concurrency in a database. When a process locks a row or
table, no other process can access it until the lock is released. The level at which a lock is
requested, at the row or the table level, is referred to as the lock granularity.
Deadlock
A deadlock occurs when two or more processes conflict in such a way that each must
wait for the other to release its lock in order to proceed.
Lock wait
A lock wait occurs when a process must wait for another process to release a lock on a
row or a table.
Lock escalation
A lock escalation occurs when the number of locks on rows or tables in the database
equals the maximum number of locks specified in the database configuration settings.
● Meeting the following criteria helps to avoid the most common performance bottlenecks:
- The database system’s cache hit rates are over 90%.
- The optimizer statistics are not older than 24 hours.
- Lock granularity is row locking.
- There are no lock escalations.
- There are no log write waits. The average disk write queue length to the disk drives that
contain the database log files is smaller than five.
The following table examines the most common performance bottlenecks in more detail.
90% or higher cache hit rate
● Indicators: A relatively low number of physical disk reads and writes; a relatively low
number of SQL compiler executions.
● Symptoms of a bottleneck: Disk access is unable to keep up with the level of I/O that the
database server requests; there are long disk read or disk write queues; there are
unnecessary SQL statement compilations or long SQL statement compilation times.
● Solutions: Providing sufficient physical memory; configuring sufficient cache sizes.
No log write waits
● Indicators: Reduced transaction processing time due to no I/O waits.
● Symptoms of a bottleneck: Lock waits or deadlocks.
● Solutions: Providing an I/O subsystem with sufficient throughput.
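The health criteria listed above can be turned into a simple programmatic check. A minimal sketch; the parameter names and sample values are assumptions, and real values would come from your database's own monitoring views or tools:

```python
def check_db_health(cache_hit_rate: float, stats_age_hours: float,
                    row_locking: bool, lock_escalations: int,
                    log_write_queue_len: float) -> list[str]:
    """Return a list of violated criteria (an empty list means healthy)."""
    problems = []
    if cache_hit_rate < 0.90:
        problems.append("cache hit rate below 90%")
    if stats_age_hours > 24:
        problems.append("optimizer statistics older than 24 hours")
    if not row_locking:
        problems.append("lock granularity is not row locking")
    if lock_escalations > 0:
        problems.append("lock escalations occurred")
    if log_write_queue_len >= 5:
        problems.append("disk write queue for log files is 5 or more")
    return problems

# Assumed sample readings: 95% hit rate, 12-hour-old statistics, row locking,
# no escalations, average write queue length of 1
print(check_db_health(0.95, 12, True, 0, 1.0))  # []
```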
Note:
The sizing estimate below should only be used as a starting point, as it is based on
the workflows outlined in the table. We strongly recommend volume testing to
validate your sizing estimate, based on the expected usage in your deployment.
Table 21: Sizing Recommendations for SAP Lumira, Server for BI Platform
Minimum Memory Required (RAM in GB): 32 | 48 | 64
Recommended Maximum Active Concurrent Users:
● At a maximum cell size (no. of rows * no. of columns) of 60,000,000: 10 | 12 | 15
● At a maximum cell size of 700,000: 15 | 25 | 35
1. Your hardware should meet the minimum hardware specifications outlined in the SAP
Lumira server for BI platform product availability matrix (PAM).
2. To avoid the user load impacting other BI platform workflows, it is recommended that SAP
Lumira, server for BI platform runs on dedicated hardware resources, according to the
configurations outlined in the table above. There is no hard cut-off limit on the data
volume that the in-memory data engine can support; the engine will utilize the available
memory resources.
3. The sizing recommendations are specific to the hardware configurations outlined in the
table. For larger deployments, involving more than 350 active concurrent users, we
recommend that you add more nodes to your deployment, rather than adding additional
Lumira servers to an existing node, to avoid potential memory resource allocation
conflicts. If your deployment requires 350 active concurrent users, you should consider
deploying 10 nodes, each with 64 GB RAM and 24 CPU cores.
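The node recommendation above follows from dividing the expected concurrent users by the per-node maximum in Table 21. A sketch of that division, assuming the 64 GB RAM column (35 users per node at the smaller cell size); the function name is illustrative:

```python
import math

def lumira_nodes_needed(concurrent_users: int, users_per_node: int = 35) -> int:
    """Nodes needed, assuming the 64 GB RAM / 35-user column of Table 21."""
    return math.ceil(concurrent_users / users_per_node)

print(lumira_nodes_needed(350))  # 10 -- matches the recommendation above
```

For larger cell sizes, the per-node user maximum drops (down to 15 at 60,000,000 cells), so the same user count would require more nodes.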
The Lumira server processes Lumira content on BI platform. This server hosts the in-memory
data engine, which is used as an offline data processing engine for Lumira. It is a separate
product from HANA, but borrows many concepts such as in-memory, column store,
parallelization, and compression. Like other documents in the BI platform, Lumira
documents are also saved in the input/output File Repository Servers. The data is only
loaded into the in-memory data engine when a user opens a Lumira document.
Refer to the following links for more information about SAP Lumira, server for BI platform:
● http://scn.sap.com/docs/DOC-63551: contains the latest information regarding Lumira,
server for BI platform, functionalities and support statements.
● http://scn.sap.com/docs/DOC-26507: refer to the architecture process flow section for a
step-by-step explanation of how Lumira user loads are processed within a BI platform
deployment.
● Quick Sizer is the online sizing tool of SAP. The tool helps prospects, customers, and
hardware vendors to do the following:
- It speeds up the hardware-planning phase by asking structured questions relating to
usage of business processes that might affect sizing.
- It facilitates the tendering procedure by including links to hardware vendors so that
they can make an offer based on the information provided in the tool.
- It provides content for GoingLive Check to determine if the hardware is sufficient to
smoothly run SAP software.
Note:
Keep in mind that Quick Sizer is a tool for initial sizings only. It is not used for
post-deployment sizings, such as upgrade or re-sizing (except delta-sizing of a
new SAP solution), configuration, landscaping, or customer coding.
● Performance and Scalability: Quick Sizer tool. On the SAP Service Marketplace, choose
Products → Performance and Scalability → Sizing → Quick Sizer.
● Sizing and Deploying SAP BI 4 and SAP Lumira. On the SAP Community Network, search
for the title.
● Background: Sizing SAP Business Suite. On the SAP Service Marketplace, choose
Products → Performance and Scalability → Sizing → Sizing Guidelines → Presentations
and Articles → Background: Sizing SAP Business Suite.
LESSON SUMMARY
You should now be able to:
● Define the sizing process
● Analyze the sizing capacity of system servers
● Analyze the sizing requirements of system components
● Describe factors that influence scaling
● Identify the uses of Configuration Intelligence
● Identify the uses of the BI 4 Sizing Guide
● Identify the uses of BI Virtualization
● Identify backend BW systems sizing guidelines
● Perform an initial sizing of a deployment
Learning Assessment
1. You can put 10 Webi Processing servers along with a CMS and an FRS in a single VM that
has 12 CPU and 48G memory without impacting performance.
Determine whether this statement is true or false.
X True
X False
2. The best practice is to have only two CMSs clustered for a BI 4.3 environment, because
only two CMSs can be clustered in a single environment.
Determine whether this statement is true or false.
X True
X False
X True
X False
X B Active users
X D Power users
X B Two Web Intelligence report servers are required per machine as a starting point.
X D The input FRS maintains all the instances that have been produced from reports.
X E One Server Intelligence Agent (SIA) is required to run on each machine regardless
of CPU configuration.
X True
X False
7. For memory usage information, planners should look at average usage, not maximum
usage.
Determine whether this statement is true or false.
X True
X False
8. Performance of physical and virtual systems degrades when the system utilization is
greater than 80%.
Determine whether this statement is true or false.
X True
X False
9. A likely issue to encounter when running BI in the cloud is that cloud providers do not
provide visibility into performance.
Determine whether this statement is true or false.
X True
X False
10. Which is a best practice for sizing virtual environments with VMware?
Choose the correct answer.
11. Which of the following is not an SAP BusinessObjects Business Intelligence conceptual
tier?
Choose the correct answer.
X A Client tier
X C Processing tier
X D Storage tier
X A Virtualization gives you the freedom to make a better architecture than is typically
possible in a physical deployment.
X B Because SAP has a different virtualization strategy for each product, it has a
separate support policy for each product.
X C SAP helps solve issues with virtual environments, but the system must be
configured properly and operating optimally.
X E Without VMtools, the system might still run, but all standard BI sizing and
performance expectations should be considered invalid.
X F Previous SAP and non-SAP virtualization studies for BI can be helpful, even if they
are based on previous versions.
X A A lock escalation occurs when the number of locks on rows or tables in the
database equals the minimum number of locks specified in the database
configuration settings.
X B Quick Sizer is not used for post-deployment sizings, except delta-sizing of a new
SAP solution.
Lesson 1
Planning a Deployment Solution
Lesson 2
Configuring the Web Tier for High Availability
Lesson 3
Configuring the Management Tier for High Availability
Lesson 4
Configuring the Storage Tier for High Availability
Lesson 5
Configuring the Processing Tier for High Availability
Lesson 6
BI Platform Pattern Books and Best Practices for Deployment
Lesson 7
Administering Server Groups
UNIT OBJECTIVES
LESSON OVERVIEW
This lesson discusses factors to consider when you are planning for a highly available, fault-
tolerant BI deployment solution.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the factors that influence a deployment solution plan
● Develop a deployment strategy for high availability
Challenges in BI Solutions
With the rapid changes in today’s market, business, and technology landscape, organizations
need business intelligence (BI) platforms. BI provides real-time performance on big data,
insight with trust across business and social data, and instantly available mobile BI. BI
solutions need to be globally ready, easily scalable, and flexibly deployable.
Organizational Challenges
A key challenge that organizations of any size in any industry must face is how to manage and
analyze the soaring quantity of data and harness that information to improve the business. IT
is challenged by the high costs associated with the purchase and maintenance of hardware
needed to accommodate large data volumes, while business users need quick, often real-time
access to analyze this information in order to respond to ever-changing market conditions.
● External Layer
● Web Server Layer
● Application Server Layer
● SAP BusinessObjects BI Platform
● Data Layer
Corporate data stores (data warehouses, data marts, and so on) are the key component of the
Data layer in a horizontal architecture.
The BusinessObjects Platform supports the usage of networking storage solutions like
Storage Area Network (SAN) or Network Attached Storage (NAS).
● Large-scale SAP BusinessObjects BI 4 platform deployments over SAP Sybase ASE and
SAP Sybase IQ databases optimal sizing and configuration for up to 10,000 concurrent
users. On the SAP Community Network, search for the title.
● Sizing Estimator: http://www.sap.com/bisizing
● When evaluating for risk management while designing for high availability, consider the
following questions:
- How much risk is your customer prepared to undertake versus the cost of mitigating
that risk?
- Is it important to your customer if their BI system is down for an hour, a day, or even a
week?
- How much downtime can your customer tolerate before the loss of their BI system has
a serious impact on their organization?
- Assuming your customer would like to at least partially mitigate the risk, how much are
they willing to spend?
- What level of continuity does your customer need?
- What would your customer do if they lost an entire data center?
The Landscape
When designing your system with high availability or disaster recovery in mind, it is important
to consider each of the components displayed in the figure, High Availability System Design
Landscape. Each of these components is discussed in greater detail in the following sections,
but the three main tiers of a BI system—the web server, web application server and BI
platform tiers—should be instantly recognizable. The external components that are relied
upon are also illustrated in the figure. The green arrows represent the paths where failover is
typically taken care of by the BI platform system and the gold arrows indicate where external
systems must be relied upon to provide failover capability.
Deployment Options
Single-Server Deployments
Single-server deployments are not highly available; therefore, they are not “Enterprise”
architectures. They are basic deployments from which we can build bigger environments.
Multiple-Server Deployments
Figure 51: Horizontal and Vertical Scaling of Application Tier: Multiple Business Intelligence Servers
Figure 52: Horizontal and Vertical Scaling of Application Tier: Hardware Redirector
Figure 53: Horizontal and Vertical Scaling of Application Tier: Clustered Application Servers
Figure 54: Horizontal and Vertical Scaling of Application Tier: Web Tier and BI Platform Cluster
The figure, Multiple Tier Availability, illustrates a tier-by-tier multi-machine configuration that
showcases high availability and load balancing of the SAP BusinessObjects platform.
In the Intelligence Tier, in the event of a failure, the workload of the failed CMS is picked up by
an available CMS within the cluster.
Ensuring availability of resources that the CMS cluster depends on (such as the system
database and third-party authentication servers) is critical to the availability of the entire 4.3
deployment.
In the event that an active FRS fails, the passive FRS becomes the active server. When the
previously active FRS is operational again, it is registered with the CMS as a passive FRS.
Ensuring availability of each FRS root folder is critical. There are many possible solutions.
In the Processing Tier, redundant servers are running on separate machines to support high
availability. If one server fails, redundant servers on different machines are still available.
The web application WAR file that is deployed during installation of the BI platform can be
broken into two smaller WAR files: one containing the static content and the other containing
the dynamic JSPs and servlets.
● Customers want to offload the SSL work from the Web App Servers. However, if you have
one web server, then you have a single point of failure. You can eliminate the single point of
failure by introducing multiple web servers.
Using multiple web servers introduces a new problem: users now have multiple web
servers and multiple URLs to choose from.
DNS round robin is one solution. It is not very popular, but it is very cheap to implement.
Web server software clustering is another solution. But by far the best and most popular
method, especially for enterprise organizations, is to introduce a hardware load balancer.
These hardware appliances present a single URL to the end users. Traffic is then
forwarded to the web server farm behind it. Hardware load balancers tend to be very
reliable.
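The distribution behavior shared by DNS round robin and a hardware load balancer can be reduced to one idea: successive requests are handed to each web server in turn. A minimal sketch, with hypothetical hostnames:

```python
from itertools import cycle

# Rotate requests across a pool of web servers, round-robin style.
# The hostnames are placeholders, not part of any real deployment.
web_servers = ["web1.example.com", "web2.example.com", "web3.example.com"]
next_server = cycle(web_servers)

def route_request() -> str:
    """Pick the next web server in rotation for an incoming request."""
    return next(next_server)

print([route_request() for _ in range(4)])
# ['web1.example.com', 'web2.example.com', 'web3.example.com', 'web1.example.com']
```

A real hardware load balancer adds health checks and session persistence on top of this rotation, which is exactly why it is the preferred enterprise option over plain DNS round robin.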
The system you specify is really the host name (and optionally the port) of a host running a
CMS (it is best to use the fully qualified domain name of the host).
Interestingly, due to the way that SAP BusinessObjects Enterprise failover works, the host
name you specify may not actually be the CMS to which you end up connecting. Clustering is
inherent to the system at every point.
CMS Clustering
The CMS service controls the entire SAP BusinessObjects Enterprise tier and all the
components that reside on the platform. Each CMS service talks to the others to maintain a
dynamic list of all the CMS processes available within the cluster. It passes this list down to
each platform service as they connect to the CMS. If a CMS service dies, all the other services
already have the names of all the other CMS services, and they will automatically try to re-
establish a connection with one of those.
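The reconnection behavior described above can be sketched as follows. The `connect` callable and the hostnames are hypothetical stand-ins for the real RPC layer; the point is only the try-the-next-CMS loop:

```python
# Sketch of CMS failover: each platform service holds the list of CMS
# hosts it received from the cluster and, on failure, tries the next one.
class CmsConnectionError(Exception):
    pass

def connect_with_failover(cms_hosts, connect):
    """Try each known CMS in order; return the first live connection."""
    last_error = None
    for host in cms_hosts:
        try:
            return connect(host)
        except CmsConnectionError as err:
            last_error = err  # this CMS is down; try the next one
    raise CmsConnectionError("no CMS in the cluster is reachable") from last_error

# Example: the first CMS is down, so the service re-establishes a
# connection with the second one.
def fake_connect(host):
    if host == "cms1:6400":
        raise CmsConnectionError("cms1 is down")
    return f"connected to {host}"

print(connect_with_failover(["cms1:6400", "cms2:6400"], fake_connect))
# connected to cms2:6400
```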
Creating a cluster is easy: install another CMS on another server and point it to the same
system repository database. It becomes part of the cluster by using the shared repository.
If you lose one of the servers within the cluster where the files are stored, the backup FRS
pairs will still be able to access the files through the cluster.
LESSON SUMMARY
You should now be able to:
● Describe the factors that influence a deployment solution plan
● Develop a deployment strategy for high availability
LESSON OVERVIEW
This lesson describes the configuration of each tier in the BI platform, how to deploy a split
environment, and how to configure clusters in a BI platform deployment.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Configure the web tier for high availability.
Simple DMZs
Demilitarized Zones (DMZs) are the cordoned-off areas of the network that are separated
from the internet by an outward-facing firewall and separated from the rest of the internal
network by another firewall.
Real-Life Scenario
It is becoming more common to extend this model. In the figure, DMZ Realistic Scenario,
there are two DMZs. The web server is in the first DMZ, the application server in the second
DMZ, and database servers and so forth are in the internal network.
Web servers have weak points, and it is good practice to not allow web servers in the first
DMZ. Perhaps an even more widely accepted standard is to not run any code inside the first
DMZ. The scenario shown in the figure, DMZ Safer Scenario is more commonly used,
especially in combination with Network Address Translation (NAT). Implementing good rules
about what traffic can enter and leave the network can make it a lot harder for malicious users
to find an easy way in and back out.
Because a firewall can be placed between the web and application servers, a scenario with
three DMZs is also possible. In the figure, DMZ Three-Zone Scenario, even more obstacles are
placed in the way of the attacker. The web server remains as vulnerable as in the figure, DMZ
Safer Scenario, but the application server has another firewall protecting it.
Configuration Example
In this example, the BIG-IP system is configured to optimize and direct traffic to both the
Apache web server and the Tomcat application server. The figure, Simple Configuration:
Redundant BIG-IP Local Traffic Manager, shows a simple, logical configuration with a
redundant pair of BIG-IP LTM devices in front of a group of Apache web servers and Tomcat
application servers.
Additional Information
The following links provide step-by-step instructions for the Apache web server setup:
http://wiki.scn.sap.com/wiki/display/BOBJ/Apache
http://wiki.scn.sap.com/wiki/display/BOBJ/Setting+up+Apache+2.4
Standalone Mode
Deploying web applications’ static and dynamic resources, which are bundled in the .war file
on the web application server, is known as standalone mode.
This makes deployment simpler, but performance may suffer because every type of
transaction, even those requiring static content only, must pass through the Web application
server.
The scenario shown in the figure, Distributed Deployment: Different Machines, could enhance
performance because the Web server can serve up static pages without having to
communicate with the Web application server.
LESSON SUMMARY
You should now be able to:
● Configure the web tier for high availability.
LESSON OVERVIEW
This lesson provides information on CMS clustering, web application deployment, and
declustering a deployment.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Configure the management tier for high availability
CMS Clustering
CMS Clustering offers benefits for organizations with large or mission-critical
implementations of the BI platform.
● A cluster consists of two or more CMS servers working together against a common CMS
system database.
● Organizations with large or mission-critical implementations usually run several CMS
machines together in a cluster.
● If a machine that is running one CMS fails, a machine running another CMS will continue to
service BI platform requests.
● This configuration provides high availability support, ensuring that BI platform users can
still access information when equipment fails.
Clustering Requirements
Requirements for Clustering CMS Machines
Ensure the following prerequisites and requirements are met when clustering CMS machines:
● Before clustering CMS machines, ensure that each CMS is installed on a system that
meets detailed requirements for the operating system and database components.
● The database server that hosts the system database must be able to process small
queries very quickly.
● Run each CMS cluster member on a machine that has the same amount of memory and
the same type of CPU as the other machines in the cluster.
● Configure each machine in the cluster in the same manner.
● If your cluster has more than eight CMS cluster members, ensure that the command line
for each CMS includes the out-of-band threads option.
● To enable auditing, configure each CMS to use the same auditing database and to connect
to it in the same manner.
Before clustering CMS machines, you must make sure that each CMS is installed on a system
that meets the detailed requirements (including version levels and patch levels) for the
operating system, database server, database access method, database driver, and database
client as outlined in the Product Availability Matrix.
For the best performance, the database server that you choose to host the system database
must be able to process small queries very quickly. The CMS communicates frequently with
the system database and sends it many small queries. If the database server is unable to
process these requests in a timely manner, BI platform performance will be greatly affected.
For best performance, run each CMS cluster member on a machine that has the same
amount of memory and the same type of CPU as the other machines in the cluster.
Configure each machine similarly using the following guidelines:
● Install the same operating system including the same version of operating system service
packs and patches.
● Install the same version of the BI platform (including patches, if applicable).
● Ensure that each CMS connects to the CMS system database in the same manner and
uses the same native or ODBC drivers. Make sure that the drivers are the same on each
machine and are a supported version.
● Ensure that each CMS uses the same database client to connect to its system database
and that it is a supported version.
● Check that each CMS uses the same database user account and password to connect to
the CMS system database. This account must have create, delete, and update rights on
the system database.
● Ensure that the nodes on which each CMS is located are running under the same operating
system account. On Windows, the default is the LocalSystem account.
● Verify that the current date and time are set correctly on each CMS machine (including
settings for daylight savings time).
● Ensure that all machines in a cluster (including the machines that host the CMS) are set to
the same system time. For best results, synchronize the machines to a time server (such
as time.nist.gov) or use a central monitoring solution.
● Ensure that the same WAR files are installed on all web application servers in the cluster.
For more information on WAR file deployment, see the SAP BusinessObjects Business
Intelligence Platform Installation Guide.
● Ensure that each CMS in a cluster is on the same Local Area Network.
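The consistency requirements above lend themselves to a simple pre-flight check: every cluster member should report identical values for the settings that must match. The field names below are hypothetical; in practice you would gather these values from each node yourself:

```python
# Sketch of a cluster-consistency check. ASSUMPTION: the dict keys are
# illustrative names for the values the guidelines above say must match.
REQUIRED_MATCHING_FIELDS = ("os_version", "bi_version", "db_driver", "service_account")

def find_mismatches(members: list) -> list:
    """Return the fields on which the cluster members disagree."""
    return [
        field
        for field in REQUIRED_MATCHING_FIELDS
        if len({m[field] for m in members}) > 1
    ]

nodes = [
    {"os_version": "RHEL 8.6", "bi_version": "4.3 SP02",
     "db_driver": "ODBC 17", "service_account": "svc_bi"},
    {"os_version": "RHEL 8.6", "bi_version": "4.3 SP03",
     "db_driver": "ODBC 17", "service_account": "svc_bi"},
]
print(find_mismatches(nodes))  # ['bi_version'] -> patch levels differ
```

Any non-empty result means the cluster violates the "configure each machine in the same manner" requirement and should be corrected before going live.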
Out-of-Band threads (-oobthreads) are used by clustering pings and clustering notifications.
Since both operations are quick (notifications are asynchronous), the BI platform no longer
requires multiple oobthreads, and only one -oobthread is created.
If your cluster has more than eight CMS cluster members, ensure that the command line for
each CMS includes the -oobthreads <numCMS> option, where <numCMS> is the number of
CMS servers in the cluster. This option ensures that the cluster can handle heavy loads. For
information about configuring server command lines, see the server command lines appendix
in the SAP BusinessObjects Business Intelligence Platform Administrator Guide.
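The more-than-eight-members rule can be expressed as a small helper. The base command-line string is illustrative only, not a literal CMS invocation; see the server command lines appendix for the real syntax:

```python
# Sketch of the -oobthreads rule: clusters with more than eight CMS
# members should pass -oobthreads <numCMS> on each CMS command line.
def cms_command_line(base: str, num_cms: int) -> str:
    """Append -oobthreads when the cluster exceeds eight CMS members."""
    if num_cms > 8:
        return f"{base} -oobthreads {num_cms}"
    return base

print(cms_command_line("cms.exe -name mycms", 10))
# cms.exe -name mycms -oobthreads 10
print(cms_command_line("cms.exe -name mycms", 4))
# cms.exe -name mycms
```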
If you want to enable auditing, each CMS must be configured to use the same auditing
database and to connect to it in the same manner. The requirements for the auditing
database are the same as those for the system database in terms of database servers, clients,
access methods, drivers, and user IDs.
Hint:
By default, a cluster name reflects the machine host-name of the first CMS that
you install.
LESSON SUMMARY
You should now be able to:
● Configure the management tier for high availability
LESSON OVERVIEW
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Configure the File Repository System (FRS)
FRS Overview
Overview of the File Repository System (FRS)
FileStore refers to the disk directories where the actual report files reside. The
BusinessObjects FRSs are responsible for listing files on the server, querying for the size of a
file, querying for the size of the entire file repository, adding files to the repository, and
removing files from the repository. FRSs are also responsible for the creation of file system
objects, such as exported reports and imported files in non-native formats.
In every BusinessObjects Enterprise implementation there is an Input and an Output FRS.
Both manage their respective directories and handle all aspects of file management.
The Input FRS manages all of the report objects and program objects that have been
published to the repository. It can store .RPT, .CAR, .EXE, .BAT, .JS, .XLS, .DOC, .PPT, .RTF,
.TXT, .PDF, and .WID files. In the case of .RPT files, they are stored only as report definition
files, which do not contain any data. The Report Properties page of the CMC shows you the
location of the Input report files.
The Output FRS manages all of the report instances (saved-data copies of reports) generated
by the Report Job Server or the Web Intelligence Report Server, the program instances
generated by the Program Job Server, and the instances generated by the LOV Job Server. It
can store the following files: .RPT, .CSV, .XLS, .DOC, .RTF, .TXT, .PDF, .WID. For .RPT
and .WID, files are stored as reports or documents with saved data.
Because the Output FRS stores the report instances, deleting instances removes only the
instances, not the actual reports. The report structure is still stored in the Input FRS.
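The two storable-type lists above can be collected into a quick lookup. The extension sets are transcribed directly from the text:

```python
from pathlib import Path

# File types storable in each FRS, per the lists in the text above.
INPUT_FRS_TYPES = {".rpt", ".car", ".exe", ".bat", ".js", ".xls", ".doc",
                   ".ppt", ".rtf", ".txt", ".pdf", ".wid"}
OUTPUT_FRS_TYPES = {".rpt", ".csv", ".xls", ".doc", ".rtf", ".txt", ".pdf", ".wid"}

def storable_in(path: str, frs_types: set) -> bool:
    """Check whether a file's extension is storable in the given FRS."""
    return Path(path).suffix.lower() in frs_types

print(storable_in("sales.ppt", INPUT_FRS_TYPES))   # True
print(storable_in("sales.ppt", OUTPUT_FRS_TYPES))  # False
```

The asymmetry is the point: published objects like .PPT belong to the Input FRS, while .CSV exports only ever appear as Output FRS instances.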
Configurations to Avoid
SAP recommends that you do not install Host Bus Adaptor (HBA) cards in each system for
the following reasons:
● An HBA is a hardware card with fiber channel links that connect directly to the SAN.
- With the HBA installed, each computer senses that it has a local disk installed within the
computer itself.
- Each computer therefore attempts to point its HBA card to the same logical unit number
(LUN) (that is, a patch of disk) on the SAN.
- Because two programs can write to different parts of the same file at the same time, and
each computer allows the modification, files can become corrupted.
- In theory, multiple FRS processes, running as active or passive, should prevent
corruption, but it still occurs.
● Initially, each FRS points to a local directory on the server where it was installed.
● This configuration must be changed so file storage is done in a common area.
● In the event of failure, back-up FRSs can access files in the common area.
● A NAS head is an appliance detected by servers across the network as a file share.
● It manages file locking, because it knows that there is a SAN behind the NAS head.
● It is multi-faceted, appearing as a Windows server message block (SMB) share for
Windows clients or an NFS share for Unix clients.
● SAP BusinessObjects BI platform servers do not need HBA cards because they access the
share across the network using interface cards.
● The disk behind the clustered servers can be either a mirrored disk or a SAN.
● The Microsoft server is referenced by the uniform naming convention (UNC) name.
● Only one member of the cluster has a connection to the SAN at any time.
● The clustering takes care of the fail-over when something fails.
● Two FRS pairs can point to the same UNC path.
● HBA cards are used in each server, and a special SAN-aware software driver is installed.
● The driver resides above the HBA, and they are in constant communication with each
other, managing file locking and preventing file corruption.
● Create a shared, common clustered folder on another host across the network. For
Windows, create a network SMB share; for Unix, create an NFS or CIFS share.
● Point the FRS pairs to the new folder.
● If you lose one of the servers within the cluster where the files are stored, the backup FRS
pairs can still access the files through the cluster.
● The failover process is not instantaneous; it takes several seconds to switch over.
However, it is inexpensive, safe, and reliable. It can also be scripted.
LESSON SUMMARY
You should now be able to:
● Configure the File Repository System (FRS)
LESSON OVERVIEW
In this lesson, we will review concepts of high availability across the SAP BusinessObjects
Platform processing tier.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Configure the processing tier for high availability
● Choose a deployment template
Note:
Selecting a deployment template in the wizard or manually creating additional
APSs does not replace system sizing. For more information about BI sizing, go
to http://www.sap.com/bisizing.
Report Servers
SAP Crystal Reports 2020 Report Application Server (RAS)
Keep the following points in mind when choosing a deployment template:
● The default small deployment is suitable only for a demo environment on limited hardware.
● You can choose a predefined deployment template to better match your hardware and use
cases.
● Although the template helps with the initial configuration, it does not replace system sizing
and configuration efforts.
The default installation of the BI platform configures a small deployment that is suitable for a
demo environment on limited system hardware. To better match your hardware and intended
use case (for example, preparing a test system or production system), choose one of the
predefined deployment templates from the Capacity view. These templates are intended to
help get the BI platform system up and running quickly, and to shorten the initial deployment
time.
Although choosing an appropriate deployment template helps with the initial configuration
and provides a good starting point, it is not a replacement for system sizing and configuration,
which must still be performed. For best performance, size your system by referring to a sizing
guide: http://www.sap.com/bisizing
Note:
The DeploymentTemplates.pdf file describes the deployment templates in
detail. The templates do not specify the number of users that can be supported
because the number of users that can be supported is dependent on load.
Perform the system sizing to determine the number of users that the system must
support, and therefore the amount of RAM, CPU requirements, and all other
system requirements necessary to determine the size of the deployment you
need.
XS Deployment Template
The XS template is an extra small deployment. This deployment requires 6–8 GB of RAM, 8
GB of Host RAM, and uses one APS. This deployment is typically used as a demo system.
S Deployment Template
The S template is a small deployment template. This deployment requires 12–16 GB of RAM,
16 GB of Host RAM, and uses four APSs. This deployment is typically used as a development
system.
M Deployment Template
The M template is a medium deployment template. This deployment requires 15–25 GB of
RAM, 16–32 GB of Host RAM, and uses 7 APSs. It is typically used as a test production system
or small production system.
L Deployment Template
The L template is a large deployment template. This deployment requires 30–45 GB of RAM,
32–48 GB of Host RAM, and uses 9 APSs. It is typically used as a production system.
XL Deployment Template
The XL template is an extra large deployment template. This deployment requires 40–60 GB
of RAM, 48–64+ GB of Host RAM, and uses 11 APSs. It is typically used as a large production
system.
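The five templates above are easier to compare side by side. The figures are transcribed from the text (the XL host-RAM range of "48–64+ GB" is capped at 64 here for simplicity), and the fit helper is an illustration, not an SAP sizing tool:

```python
# Deployment templates from the text, keyed by size. RAM values in GB.
DEPLOYMENT_TEMPLATES = {
    "XS": {"ram_gb": (6, 8),   "host_ram_gb": (8, 8),   "aps_count": 1,
           "typical_use": "demo system"},
    "S":  {"ram_gb": (12, 16), "host_ram_gb": (16, 16), "aps_count": 4,
           "typical_use": "development system"},
    "M":  {"ram_gb": (15, 25), "host_ram_gb": (16, 32), "aps_count": 7,
           "typical_use": "test or small production system"},
    "L":  {"ram_gb": (30, 45), "host_ram_gb": (32, 48), "aps_count": 9,
           "typical_use": "production system"},
    "XL": {"ram_gb": (40, 60), "host_ram_gb": (48, 64), "aps_count": 11,
           "typical_use": "large production system"},
}

def largest_template_for(available_host_ram_gb: int) -> str:
    """Pick the largest template whose minimum host RAM fits the machine."""
    fitting = [name for name, spec in DEPLOYMENT_TEMPLATES.items()
               if spec["host_ram_gb"][0] <= available_host_ram_gb]
    return fitting[-1] if fitting else "none"  # dict preserves XS..XL order

print(largest_template_for(32))  # 'L' (needs at least 32 GB of host RAM)
```

Remember that the templates say nothing about user counts; only a sizing exercise determines whether a template that fits your RAM also fits your load.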
LESSON SUMMARY
You should now be able to:
● Configure the processing tier for high availability
● Choose a deployment template
LESSON OVERVIEW
In this lesson, we will discuss best practices for deploying SAP BusinessObjects Platform
using Pattern Books.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Select a pattern for a BI landscape
Pattern Overview
SAP BusinessObjects BI Platform Deployment in the Real World
In addition to installation efforts, deployment of the BI platform involves the following efforts
to fully leverage the platform:
Note:
Setup instructions vary for different operating systems, application server
vendors, and database vendors.
Pattern Prerequisites
Best Practices for BI Deployment: Part I
When designing the architecture, consider the following best practices for BI deployment:
● Enable high availability for CMS database and FRS file share.
- Use existing technologies outside of BI platform.
● Separate environments for development, testing, and production.
● Have at least one pre-production environment configured similarly to the production
environment.
- Allow install and configuration cookbook for production environment.
- Allow contents to be tested in a production-like environment.
- Allow production issues to be reproduced for troubleshooting in a non-production
environment.
● All environments should be on the same BI platform release.
● All environments should have the same operating system version, patches, and upgrades.
● Plan for content promotion.
Note:
Refer to SAP BusinessObjects Statement of Support for VMWare.
Note:
Refer to Optimize Application Delivery for Global Deployment whitepaper.
Pre-Installation Checklist
Architecture Design for BI Reporting Pre-Installation Check List: Part I
In addition to reviewing the product platform support guide, ensure you have performed the
following before installation:
● Verify that the port number to be used for CMS is free and available.
● Verify connectivity to the CMS using the NetBIOS name of the CMS.
● Verify that the Web Server has connectivity with the Web Application Server.
● On Windows Platform, perform the following:
- The installing user must have local administrator rights.
- Do not set up a Windows Server as a domain controller.
- Ensure Windows servers have performance options configured to give background
services priority.
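The port and connectivity checks above can be scripted before installation. The following sketch (the host name and the default CMS port 6400 are illustrative assumptions, not values from your landscape) tests whether a TCP port is free on the local machine and whether a CMS host name resolves:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on host:port, so the CMS can bind it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        # connect_ex returns 0 when something already accepts connections there
        return s.connect_ex((host, port)) != 0

def host_resolves(name):
    """Return True if the CMS host name resolves to an IP address."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Example (illustrative values): 6400 is the default CMS port.
# print(port_is_free(6400), host_resolves("my-cms-host"))
```

These checks complement, but do not replace, the product platform support guide.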
Installation Verification
Installation Verification for Architecture Design
After BI platform installation, verify that its components were installed successfully.
● To launch the SAP BusinessObjects BI platform Java BI Launchpad, open a web browser
and navigate to the following URL:
http://<servername>:<portnumber>/BI
● To launch the SAP BusinessObjects BI platform Central Management Console, open a web
browser and navigate to the following URL:
http://<servername>:<portnumber>/CmcApp
● To launch the SAP BusinessObjects BI platform Explorer, open a web browser and
navigate to the following URL:
http://<servername>:<portnumber>/explorer
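The three verification URLs follow one pattern, so a small helper (a convenience sketch, not part of the product) can build them for a given server and port before you open them in a browser:

```python
def bi_urls(servername, portnumber):
    """Build the standard BI platform web application URLs used for
    post-installation verification."""
    base = f"http://{servername}:{portnumber}"
    return {
        "BI Launchpad": f"{base}/BI",
        "CMC": f"{base}/CmcApp",
        "Explorer": f"{base}/explorer",
    }

# Example with illustrative values:
# for name, url in bi_urls("biserver", 8080).items():
#     print(name, url)
```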
LESSON SUMMARY
You should now be able to:
● Select a pattern for a BI landscape
LESSON OVERVIEW
This lesson explains how to modify server group membership, clone servers, and configure
server groups. It also provides information on how to set up the BI platform servers, including
troubleshooting and debugging.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Administer server groups
Server Groups
Server groups provide a way of organizing your SAP BusinessObjects BI platform servers to
make them easier to manage.
● When you manage a group of servers, you need to view only a subset of all the servers on
your system.
● Server groups are a powerful way of customizing SAP BusinessObjects BI platform to
optimize your system for users in different locations or for objects of different types.
If you group your servers by region, you can easily set up default processing settings,
recurrent schedules, and schedule destinations that are appropriate to users who work in a
particular regional office. You can associate an object with a single server group, so the object
is always processed by the same servers. Also, you can associate scheduled objects with a
particular server group to ensure that scheduled objects are sent to the correct printers, file
servers, and other resources. As a result, server groups prove to be especially useful when
maintaining systems that span multiple locations and multiple time zones.
If you group your servers by type, you can configure objects to be processed by servers that
have been optimized for those objects. For example, processing servers need to
communicate frequently with the database containing data for published reports. Placing
processing servers close to the database server that they need to access improves system
performance and minimizes network traffic. Therefore, if you had a number of reports that
ran against a DB2 database, you might want to create a group of Processing Servers that
process reports only against the DB2 database server.
If you then configure the appropriate reports to always use this Processing Server group for
viewing, you optimize system performance for viewing these reports. After creating server
groups, configure objects to use specific server groups for scheduling or for viewing and
modifying reports. Use the navigation tree in the Servers management area of the CMC to
view server groups. The Server Groups List option displays a list of server groups in the details
pane, and the Server Groups option allows you to view the servers in the group.
2. Choose Manage → New → Create Server Group. The Create Server Group dialog box
appears.
3. In the Name field, enter a name for the new group of servers.
4. In the Description field, enter additional information about the group. Choose OK.
5. In the Servers management area, in the navigation tree, choose Server Groups and select
the new server group.
7. Select the servers that you want to add to this group, and then choose the > arrow. Use
CTRL+click to select multiple servers.
8. Click OK.
You return to the Servers management area, which now lists all the servers that you added to
the group. You can now change the status, view server metrics, and change the properties of
the servers in the group.
Server Subgroups
Subgroups of servers provide you with a way of further organizing your servers.
For example, if you group servers by region and by country, then each regional group
becomes a subgroup of a country group. To organize servers in this way, first create a group
for each region, and add the appropriate servers to each regional group. Then, create a group
for each country, and add each regional group to the corresponding country group.
2. To select the parent group to add subgroups to, in the navigation tree, choose Server
Groups and choose the server group you want.
This group is the parent group.
4. In the navigation tree, choose Server Groups, choose the server groups that you want to
add to this group, and then choose the arrow (>).
5. Choose OK.
You return to the Servers management area, which now lists the server groups that you
added to the parent group.
2. Choose the group that you want to add as a member to another group.
4. In the Available server groups list, choose the other groups that you want to add the group
to, and then choose the arrow (>).
Use CTRL+click to select multiple servers.
5. Choose OK.
● You can modify a server’s group membership to add the server to (or remove it from) any
group or subgroup that you have already created on the system.
● For example, if you create server groups for a number of regions, you might want to use a
single Central Management Server (CMS) for multiple regions.
● Instead of having to add the CMS individually to each regional server group, you can
choose the server’s Member of link to add it to all the regions at once.
4. In the Properties dialog box, in the navigation list, choose Existing Server Groups.
In the Details panel, the groups you can add to the server appear in the Available server
groups list. Any server groups that the server currently belongs to appear in the Member
of Server Groups list.
5. To change the server's group membership, use the arrows to move the server groups
between the lists, and choose OK.
Granting administrative rights to users enables them to perform server and server group
tasks, such as starting and stopping servers. Depending on system configuration and security
concerns, you might want to limit server management to the BI platform administrator, or you
might need to provide administrative access to other people using those servers.
Many organizations have a group of IT professionals dedicated to server management. If your
server team needs to perform regular server maintenance tasks that require them to shut
down and start up servers, you need to grant them administrative rights to the servers. You
may also want to delegate BI platform server administration tasks to other people or want
some groups in your organization to control their own server management.
Note:
You can select a server or server group for a publication (not for a particular user).
However, you can assign administrative rights to users or user groups for a
particular server or server group.
To Assign Administrative Rights to Users or User Groups for a Server or Server Group
You can assign administrative rights to users or user groups for a particular server or server
group.
2. To grant administrative access rights, right-click the server or server group you want, and
choose User Security.
4. To give administrative rights to the server or server group, in the Add Principals dialog box,
choose a user or group, and then choose the arrow (>).
6. On the Assign Security screen, choose security settings for the user or group, and choose
OK.
LESSON SUMMARY
You should now be able to:
● Administer server groups
Learning Assessment
X B Connector layer
X C External layer
X A Single-server deployment
X D Single-CMS deployment
X F High-availability deployment
X B Demilitarized Zones (DMZs) are areas of the network that are separated from the
internet by an outward facing firewall and separated from the rest of the internal
network by another firewall.
X C A widely accepted standard is to run all code inside the first DMZ.
X D Web servers have weak points, and it is good practice to not allow Web servers in
any of your DMZs.
X A A cluster consists of two or more CMS servers working together against a common
CMS system database.
X C If a machine that is running one CMS fails, a machine running another CMS will
continue to service BI platform requests.
X A The Input FRS manages all the report objects and program objects that have been
published to the repository.
X B The Output FRS manages all the report instances generated by the Report Job
Server or the Web Intelligence Report Server.
X C The Output FRS manages program instances generated by the Program Job
Server, the Web Intelligence Report Server, and the LOV Job Server.
6. High availability can be achieved in the processing layer by including which of the following
components in your organization’s deployment?
Choose the correct answers.
X B Cache Servers
X C Report Servers
X B System sizing
X C User survey
X D Use assessment
X A The SAP BusinessObjects Business Intelligence platform can run on only one
machine at a time.
X C The BI platform uses its own databases for storing configuration, auditing, and
other operational information.
X D To fully leverage the BI platform in your organization, you simply have to install it—
no other actions are necessary.
9. What are the two ways that you can set up subgroups?
Choose the correct answers.
Lesson 1
Preparing to Manage Content 219
Lesson 2
Upgrading to SAP BusinessObjects 4.3 223
Lesson 3
Managing the Life Cycle of a Deployment 227
Lesson 4
Moving Objects from One Deployment to Another 243
UNIT OBJECTIVES
LESSON OVERVIEW
This lesson reviews the content management tools and process.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the content management process and tools
LCMBIAR files contain compressed business intelligence content. This content can be easily
moved to a different location or SAP BusinessObjects Business Intelligence platform
deployment.
During a content promotion job, you can select an LCMBIAR file as the source or destination if
you cannot connect to two Central Management Servers directly (for example, if your
destination and source deployments are on different physical networks).
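Because LCMBIAR files are compressed archives, their contents can be inspected without a connection to either CMS. The sketch below assumes a ZIP-based container format for LCMBIAR files (an assumption worth verifying against your release, not documented guaranteed behavior), and the file name is illustrative:

```python
import zipfile

def list_lcmbiar(path):
    """List the entries inside an LCMBIAR archive.
    ASSUMPTION: the file uses a ZIP container; verify this for your
    BI platform release before relying on it."""
    with zipfile.ZipFile(path) as archive:
        return archive.namelist()

# list_lcmbiar("promotion_job.lcmbiar")  # path is illustrative
```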
● Life cycle management refers to the entire process of designing, developing, and
maintaining an application solution.
● Life cycle management also encompasses the effort to optimize the solution with ongoing
improvements.
● The Promotion Management Tool (PMT) is used only for the BI content life cycle.
For SAP products, life cycle management refers to the complete process surrounding an
application, including gathering the initial requirements, designing the solution, building and
testing the solution, deploying the solution, the ongoing operation of that solution, and the
optimizing of that solution to improve it over time. Ongoing improvements can lead to further
requirements that would then be rolled into a subsequent iteration. So the term “life cycle
management” covers many topics: installation, deployment, change management,
administration, auditing, monitoring, and troubleshooting. The Promotion Management Tool
(PMT) for SAP BusinessObjects BI 4.3 is used only for the BI content life cycle. BI content life
cycle management services are specifically focused on managing the ongoing life cycle for BI
content.
There are two primary tools for upgrade and promotion: the Promotion Management Tool and
the Promotion Management Wizard.
Promotion Management Tool
The Promotion Management Tool (previously known as LCM) transports versioned
content from one system to another in small, incremental packages. For example, you
can promote content from Development to Test to Production. Content primarily includes
document templates, metadata, and schedules, not document instances. The tool is
optimized for moving 100 objects or fewer at a time and is not meant to be used for upgrades.
Promotion Management Wizard
The Promotion Management Wizard is a tool that simplifies copying CMS content.
Instead of using the Command Line Interface (CLI), the Promotion Management Wizard offers
a Graphical User Interface (GUI) for performing a full copy or a selective copy. The LCMBIAR
files it produces are identical to those created by the CMC or CLI versions of Promotion
Management. It is a thick-client (full-client GUI) tool that is installed along with the server
products.
LESSON SUMMARY
You should now be able to:
● Describe the content management process and tools
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Understand update scenarios for SAP BusinessObjects 4.3
If you have an older deployment, follow these guidelines to upgrade your existing deployment
to BI platform 4.3:
2. If your existing deployment is XI 3.x, you can proceed directly with the upgrade in step 3.
3. Install BI 4.1/4.2 SPx on a separate machine, and run Upgrade Management Tool from the
4.1/4.2 SPx to migrate the content from the above-mentioned versions to BI 4.1/4.2 SPx
level.
4. Once you have your content in BI 4.1/4.2 SPx level, you can choose either of the methods
in the previous figure to move to 4.3.
The Promotion Management Wizard does not create a job that can be reused in the CMS.
This saves some space on the host system.
Before making a scenario selection, ensure that you have enabled the required options:
● Go to the options of the Objects step.
LESSON SUMMARY
You should now be able to:
● Understand update scenarios for SAP BusinessObjects 4.3
LESSON OVERVIEW
This lesson details the deployment lifecycle and how to manage the lifecycle with the Life
Cycle Management tool. This lesson also reviews how to create a new job, manage job
dependencies, and schedule and roll back jobs. Lastly, this lesson teaches basic version
management and best practices for managing the lifecycle of a deployment.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the life cycle of a deployment
● Manage the life cycle of a deployment using the Life Cycle Management (LCM) tool
● Create a new job
● Manage job dependencies
● Schedule and roll back jobs
● Identify factors that affect life cycle management
● Apply best practices for managing the life cycle of a deployment
Lifecycle management refers to the set of processes involved in managing information related
to a product’s life cycle and establishes procedures for governing the entire product life cycle.
These phases can occur at the same site or at different geographical locations.
The BI resources that are present in the development repository must be transferred to the
testing repository for testing deployment. The time required to transfer resources from one
repository to another repository must be minimal to obtain a high-quality and competitive
product. These resources also have dependencies that have to be moved from one repository
to another. The dependencies add complexity to the movement of resources because the
resources have to move with their dependents.
The Lifecycle Management Console for SAP BusinessObjects Business Intelligence platform
4.3 is now integrated into CMC as the Promotion Management tool, which enables you to
move BI resources from one system to another without affecting the dependencies of the
resources. It also enables you to manage different versions of BI resources, manage
dependencies of BI resources, and roll back a promoted resource to restore the destination
system to its previous state.
The Promotion Management tool, also known as the Life Cycle Management Console, for SAP
BusinessObjects BI 4.x is used only for the BI content life cycle. BI content life cycle
management services focus on managing the ongoing life cycle for BI content. This tool
allows BI administrators and operations teams to package BI content and associated
dependencies and to promote that content in an efficient, reliable, and repeatable fashion
through multiple environments. BI content life cycle management is a subset of this larger
topic, primarily in the test and deploy phases.
The promotion management tool allows moving BI resources from one repository to another,
manages dependencies of the resources, and rolls back promoted resources at the
destination system, if required. It also supports the management of different versions of the
same BI resource.
The promotion management tool is integrated with the Central Management Console. You
can only promote a business intelligence resource from one system to another if the same
version of the BI platform is installed on both the source and destination systems.
Many BI deployments contain different stages such as development, testing, and production.
Reports and other BI objects often require modification or enhancement due to changing
information and business requirements. Administrators must control how objects are
promoted through these stages, whether the objects are completely new or whether they
overwrite or update objects that already exist in the destination environment.
LCM Functions
Features of the Promotion Management Tool
The following are features of the Promotion Management Tool:
● BI content promotion
● Dependencies management
● Job scheduling
The promotion and version management workflow is not able to do these tasks, because it is
specifically designed for promotion workflows and optimized for 100 objects for each
promotion.
However, it might form part of a recovery strategy.
1. You create a promotion job with Promotion Management, which contains the following:
2. You run the promotion job, so that the content is promoted from the development
environment into the test environment.
4. If the test of the content is successful, you can then re-run the same promotion job but
change the target to the production environment (which is consistent with SAP TMS /
CTS+ ).
Note:
Promoting content from the test environment to the production environment does
not follow this principle and so it is considered poor practice.
● Check-in
Creates a new revision of an object (like a document or a universe) every time you check it
in.
● Check-out
Overwrites the version in the BusinessObjects Repository with the revision you select.
Administration Options
Manage Systems
This option enables you to add and remove the host systems.
Override Settings
This option enables you to override the properties of infoobjects within a job when they
are promoted from the source system to the destination system.
Rollback Settings
This option enables you to configure the rollback process at the system level.
Job Settings
This option enables you to specify the number of job instances that can exist in the life
cycle management console at any point in time. If the number of jobs exceeds the
specified number, the excess jobs are automatically deleted. It also enables you to specify
the number of days after which a job must be deleted from the life cycle management
console.
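The retention behavior described above (a cap on the number of job instances plus an age limit) can be illustrated with a short sketch. The function and its thresholds are illustrative, not product defaults or product code:

```python
from datetime import datetime, timedelta

def prune_jobs(jobs, max_jobs, max_age_days, now=None):
    """Return the jobs that survive retention: nothing older than
    max_age_days, newest first, capped at max_jobs entries.
    `jobs` is a list of (name, created_at) tuples; this models the
    described behavior, not the product's internal logic."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    fresh = [job for job in jobs if job[1] >= cutoff]   # drop expired jobs
    fresh.sort(key=lambda job: job[1], reverse=True)    # newest first
    return fresh[:max_jobs]                             # enforce the count cap
```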
VMS Settings
This option enables you to configure version management systems.
Additional Information about Managing the Life Cycle of a Deployment Using the
PMT Tool
You can create a new job using the Promotion Management tool. When creating a job, you
complete the following elements and fields in the tool interface:
● Name
● Description
● Keywords
● Save Job in
● Source System
● Destination System
● User name
● Password
● Authentication
● During promotion, you can promote all dependencies or select only the ones you want
promoted.
● To select and filter dependents for promotion, you must use the Manage Dependencies
option.
The following options are available to manage the dependents:
● Universe for selected reports: The universe upon which the selected infoobject is
dependent is promoted.
● Selected universes, Universe restriction set: Universes that are dependent on another
universe restriction set are promoted.
● Access levels set on selected objects: Access levels that are used on the selected
infoobjects are promoted.
● Connections used by selected Universes: Universe connection objects that are used by the
selected infoobjects are promoted.
● Business Views for selected reports: Business Views, business elements, the data
foundation, the data connections, and Lists of Values (LoVs) that the selected infoobjects
depend on are promoted.
● Events, calendars, and profiles used by selected publication: Event, calendar, and
user-profile objects that are used by a selected publication are promoted.
Job Scheduling
The Promotion Management enables you to specify when a job must be promoted, rather
than promote it as soon as it is created. It also enables you to schedule job promotion at fixed
intervals. This feature is useful for promoting large jobs when the load on the server is at its
minimum. To schedule a job promotion, you must specify a time in the future or select a
recurrence pattern, and you must specify additional parameters.
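Scheduling a promotion for a future time or on a recurrence pattern amounts to computing the run times in advance. The following is a simplified sketch of a fixed-interval recurrence (daily intervals only); it models the idea, not the BI platform's scheduler:

```python
from datetime import datetime, timedelta

def next_runs(start, interval_days, count):
    """Compute the next `count` run times for a simple fixed-interval
    recurrence starting at `start`. A simplified model of a recurrence
    pattern, not the product's scheduling engine."""
    return [start + timedelta(days=interval_days * i) for i in range(count)]

# Example: a weekly recurrence starting 1 January 2024, first three runs.
# next_runs(datetime(2024, 1, 1), 7, 3)
```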
Job Rollback
The Rollback option enables you to restore the destination system to its previous state after a
job is promoted.
● The purpose of a roll back is to restore a destination system to its previous state.
● Use a roll back in the following scenarios:
- There are production issues.
- The changes must be reversed quickly.
- The scope of changes to be rolled back is large.
● When using CTS+ integration, roll back is only possible for the BI content.
High-Level Architecture
The figure, High-Level Recommendation Architecture, shows a recommended setup for a
connected system with no air gap or firewall.
There are three BI platform environments: Development, Test, and Production. A separate,
dedicated installation of the BI platform is used just for promotion management. The reason
for this separation is to assist with version management of promotion jobs: when you create a
promotion job and want to check it in, having a separate environment for promotion
management prevents users from performing version control with their own content in the
production environment.
The figure Two Dedicated Promotion Management Systems shows a recommended setup for
a non-connected system. A non-connected system is one where an air gap or a firewall
prevents access between systems.
● The first system is used to create promotion jobs that are run against the Test
environment.
● The system creates an LCMBIAR file that contains the contents of the promotion job.
● Another promotion job is created on the second promotion management system, which
promotes the LCMBIAR file to the Production environment.
In this architecture, there are two dedicated Promotion Management systems. The first
system is used to create promotion jobs. The promotion jobs are then run against Test.
However, because there is an air-gap surrounding production, the dedicated promotion
management system cannot connect directly to Production. The system creates an LCMBIAR
file that contains the contents of the promotion job. The LCMBIAR file is used as the basis to
create another promotion job on the second promotion management system on the right.
This LCMBIAR file is then promoted into production.
This second dedicated promotion management system is optional and is typically not used.
Promotion Management is really a “production” system. The system needs to be backed up.
The rollback content is stored as instances in the output File Repository Server. The
versioning of content is stored in Version Management, so you must also back up the version
management database.
A dedicated system stores rollback content from a production environment, which could be
critical if a rollback is required in your Production system.
Together with the Version Management System, it contains the versioning and an audit trail of
your BI content, allowing that content to be re-promoted or old content recovered. It could be
critical to meet compliance regulations. A dedicated system avoids potential issues when
promoting content to which the “Promotion job owner” does not have access. It allows for a
refresh of Development and Test from Production without losing any Promotion
Management-specific data.
It allows for “decoupling” of software releases between Promotion Management and other
environments. You can patch Promotion Management without affecting Production. It allows
the Promotion Management Server to be upgraded without necessarily upgrading
Production. For example, you might want to upgrade Test before Production, but to be
supported, you also need to upgrade the Promotion Management Server.
Connection Overrides
Connection overrides are available for the following connection types:
● Universe connections
● Query as Web Service connections
● Crystal Reports direct-to-data connections
The table presents the Adaptive Processing Server services and functions.
● LCM is not designed for large jobs; optimal performance is 100 objects at a time.
● LCM promotion jobs are not designed for backups of the entire repository.
● Promotion jobs cannot include instances, inboxes, or documents in the Favorites folder.
Additional Information about Best Practices for Managing the Life Cycle of a
Deployment
● BI4 Upgrade and Promotion Management KBAs: On the SAP Community Network, search
for the title.
● SAP Enterprise Support Academy: On the SAP Support Portal, choose Support
Programs & Services → SAP Enterprise Support → SAP Enterprise Support Academy.
LESSON SUMMARY
You should now be able to:
● Describe the life cycle of a deployment
● Manage the life cycle of a deployment using the Life Cycle Management (LCM) tool
● Create a new job
LESSON OVERVIEW
This lesson teaches how to compare objects and files, and how to move objects with the
Change and Transport System (CTS). It details information on comparing objects and files
and object management best practices.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Compare objects and files
● It shows the differences between two versions of an LCMBIAR file, an LCM Job, or both.
● It helps you develop and maintain different report types by comparing source to
destination versions.
● It detects missing elements, modified elements, and added elements.
Visual Difference allows you to view the differences between two versions of a supported file
type (LCMBIAR), a supported object type (LCM Job), or both. You can use this feature to
determine the difference between files or objects to develop and maintain different report
types. This feature gives a comparison status between the source and the destination
versions. For example, if a previous version of the user report is accurate and the current
version is inaccurate, you can compare and analyze the file to evaluate the exact issue.
● Removed - An element is missing in one of the file versions, for example, a row, section
instance, or even a block.
● Modified - There is a different value between the source version and the destination
version, for example, cell content or the result of a local variable.
● Inserted - There is an element in the destination version that is not in the source version.
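The three statuses can be reproduced with a plain dictionary comparison. The sketch below is an illustration of the classification idea (element names mapped to values), not the Visual Difference engine itself:

```python
def classify_diff(source, destination):
    """Classify elements the way Visual Difference statuses are described:
    Removed (in source only), Inserted (in destination only), and
    Modified (in both, with different values). Inputs are simple
    name-to-value mappings, an illustrative stand-in for report elements."""
    removed = sorted(set(source) - set(destination))
    inserted = sorted(set(destination) - set(source))
    modified = sorted(key for key in set(source) & set(destination)
                      if source[key] != destination[key])
    return {"Removed": removed, "Inserted": inserted, "Modified": modified}
```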
Comparison Combinations
With Visual Difference, you can compare the following combinations:
● LCMBIAR file
● Promotion Management Job (LCM Job)
LESSON SUMMARY
You should now be able to:
● Compare objects and files
Learning Assessment
1. A content management plan is useful when migrating from a project that has been poorly
managed.
Determine whether this statement is true or false.
X True
X False
3. The Promotion Management Tool has to be installed separately in BI 4.3 because it is not
integrated with BI 4.3.
Determine whether this statement is true or false.
X True
X False
X True
X False
X A BI Content Promotion
X B Version Management
X C Auditing
6. Job creation is required only for promoting BI content from one environment to another,
not for exporting to LCMBIAR.
Determine whether this statement is true or false.
X True
X False
7. You cannot check the security rights with the Manage Job Dependencies function.
Determine whether this statement is true or false.
X True
X False
X A To schedule a job promotion, you must specify a time in the future or select a
recurrence pattern and you must specify additional parameters.
X B Promotion jobs can be scheduled only when load is greater than 30 percent.
X A Development
X B Production
X C Promotion
X D Test
10. Expert Guided Implementation combines training, live configuration, and on-demand
expertise.
Determine whether this statement is true or false.
X True
X False
X A WEB Intelligence
X B LCMBIAR
X C Universe
Lesson 1
Designing Publications
UNIT OBJECTIVES
● Design a publication
● Personalize publications
LESSON OVERVIEW
This lesson describes how to design and publish objects in the BI platform.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Design a publication
● Personalize publications
Publishing Basics
Publishing
Publishing is the process of making documents, such as Crystal Reports and Web
Intelligence documents, publicly available for mass consumption. The contents of these
documents can be distributed automatically: sent via e-mail or FTP, saved to disk, or
managed through the BI platform for Web viewing, archiving, and retrieval, with
distribution automated through scheduling. From within BI launch pad or the CMC, you can
tailor documents for different users or recipients. You can schedule a publication to run at
specified intervals, and you can send it to a number of destinations, including recipients'
BI Inboxes and e-mail addresses.
Publication
A publication is a collection of documents intended for distribution to a mass audience.
Before the documents are distributed, the publisher defines the publication using a
collection of metadata. This metadata includes the publication source, its recipients, and
the personalization applied.
Publications
Publications can help you send information through your organization more efficiently.
Benefits of Publications
● Enable you to distribute information to individuals or groups of users and personalize the
information each user or group receives.
● Provide delivery of targeted business information to groups or individuals through a
password-protected portal, across an intranet, an extranet, or the Internet.
● Minimize database access by eliminating the need for users to send process requests
themselves.
Note:
You can create different types of publications based on Crystal reports or Web
Intelligence documents.
Typically, the publisher (the user who owns and schedules the publication) can view all
publication instances for all recipients; the recipients can view their personalized publication
instances only. This rights setup ensures maximum security for publication data because it
reserves the rights to schedule publications and to view all publication instances for the
publisher only.
Note:
If you are a publisher and want to add yourself to a publication as a recipient, use
two user accounts for yourself, a Publisher account and a Recipient account. The
Publisher account grants you the rights you require when you design and schedule
publications, while the Recipient account grants you the rights of a typical
recipient.
Publication Formats
Formats define the file types that a publication's documents will be published in. A single
document can be published in multiple formats, and these instances can be delivered to
multiple destinations. For publications with multiple documents, you can specify a different
format for each. For publications that contain Web Intelligence documents, you can publish
the whole document or a report tab within the document to different formats.
Any formats you choose for a document apply to all recipients of the publication. For
example, you cannot publish a document as a Microsoft Excel file for one recipient and as a
PDF for another. If you want the recipients to receive instances in those formats, each
recipient will receive a Microsoft Excel file and a PDF.
The available publication formats are as follows:
● Plain Text
● Microsoft Excel
● Adobe Acrobat
● mHTML
Publication Destinations
Destinations are locations that you deliver publications to. A destination can be an Enterprise
location in which a publication is stored, a BI Inbox, an e-mail address, an FTP server, or a
directory on the file system. You can specify multiple destinations for a publication.
If you are publishing multiple Crystal reports, you can also merge them into a single PDF on a
per-destination basis.
You can also choose whether to zip the publication instances on a per-destination basis. For
example, zip the instances for e-mail recipients and leave them unzipped for BI Inboxes.
Publication destinations include the following:
● FTP Server
● SFTP Server
● File System
● SAP StreamWork
Note:
● Deliver objects to each user is selected by default for all destinations. However,
in some cases, you may not want to deliver objects to each user. For example,
three recipients have identical personalization values and thus receive the
same data in their publication instances. If you clear Deliver objects to each
user, one publication instance is generated and delivered to all three recipients.
If you select Deliver objects to each user, the same publication instance is
delivered three times (once for each recipient). Additionally, if you are sending
the publication to an FTP server or local disk destination and some recipients
share identical personalization values, you can clear Deliver objects to each
user to decrease overall processing time.
● If you clear Deliver objects to each user, any placeholders that you use when
you configure your destinations will contain the publisher's information and not
the information of the recipients.
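The effect described in this note can be sketched as a simple counting model. This is a hypothetical illustration, not the BI platform API; the function and data names are invented.

```python
# A hypothetical sketch (not the BI platform API) of how clearing
# "Deliver objects to each user" reduces the number of generated
# publication instances when recipients share personalization values.

def count_instances(recipients, deliver_per_user):
    """recipients maps each recipient name to a personalization value."""
    if deliver_per_user:
        # one instance is generated and delivered per recipient
        return len(recipients)
    # one instance per unique personalization value, shared by recipients
    return len(set(recipients.values()))

recipients = {"ann": "West", "bob": "West", "cara": "West"}
print(count_instances(recipients, deliver_per_user=True))   # 3
print(count_instances(recipients, deliver_per_user=False))  # 1
```

With three recipients sharing one personalization value, clearing the option cuts the generated instances from three to one, which is why it can reduce processing time.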
Subscriptions
A subscription enables users who are not publication recipients to view the latest instance.
Enterprise recipients can unsubscribe from a publication at any time. Dynamic recipients can
neither subscribe to nor unsubscribe from a publication.
Users with the appropriate access rights can subscribe and unsubscribe other users. To
subscribe to or unsubscribe from a publication, you need a BI platform account and the
following rights:
● Access to the BI launch pad or to the CMC
● View rights to see the publication
● Subscriber rights for the user account (Enterprise recipients)
Report Bursting
During publishing, the data in documents is refreshed against data sources and personalized
before the publication is delivered to recipients. This combined process is known as “report
bursting.” Depending on the size of the publication and how many recipients the publication is
intended for, you have several report bursting methods to choose from:
● One database fetch for all recipients
When you use this report bursting method, all documents in the publication are refreshed
once, and then the documents are personalized and delivered to each recipient. This
report bursting method uses the data source logon credentials of the publisher to refresh
data.
This bursting method is used for Web Intelligence document publications. It is also the
recommended option when you want to minimize the impact of Publishing on your
database. This option is secure only when the source documents are delivered as static
documents. For example, a recipient who receives a Web Intelligence document in its
original format can modify the document and view the data associated with other
recipients. However, if the document is delivered as a PDF, the data would be secure.
Note:
- This option is secure for most Crystal reports regardless of whether the
Crystal reports are delivered in their original format.
- The performance of this option varies depending on the number of
recipients.
Note:
This option is unavailable for Web Intelligence documents.
Note:
Crystal reports that are based on universes or Business Views support one
database fetch per recipient only to maximize security.
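The trade-off between the bursting methods can be sketched in terms of database-fetch counts. This is a purely illustrative model; the function names are invented, and the platform performs this internally.

```python
# Illustrative comparison of report bursting methods by database-fetch
# count: "one database fetch for all recipients" refreshes once and then
# personalizes per recipient, whereas a per-recipient method refreshes
# the data source once for every recipient.

def one_fetch_for_all(recipients):
    fetches = 1                  # single refresh with publisher credentials
    instances = len(recipients)  # personalized copies delivered afterwards
    return fetches, instances

def one_fetch_per_recipient(recipients):
    # each recipient triggers its own refresh against the data source
    return len(recipients), len(recipients)

team = ["ann", "bob", "cara"]
print(one_fetch_for_all(team))        # (1, 3)
print(one_fetch_per_recipient(team))  # (3, 3)
```

This is why the single-fetch method minimizes the impact of publishing on the database, at the cost of weaker data security for modifiable document formats.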
Personalization Methods
Personalization is the process of filtering data in source documents so that only relevant data
is displayed for publication recipients. Personalization alters the view of the data, but it does
not necessarily change or secure the data being queried from the data source.
1. You assign profile values to each user and group for that profile.
2. You specify a local profile target for the profile to filter (for example, a field in a Crystal
report).
3. You specify the profile or profiles that will be used for personalization.
You can grant or deny users and groups access to profiles. Depending on how you organize
your profiles, you may have specific profiles that you want to be available only for certain
employees or departments.
Users with access to the CMC can see only profiles that they have the rights to see; therefore,
you can use rights to hide profiles that are not applicable to a particular group.
For example, by granting only the ITadmin group access to IT-related profiles, those profiles
are hidden from users in the HRadmin group, making the profile list easier for the HRadmin
group to navigate.
Profile values are attributes assigned to specific users or groups when you add these users
and groups to a profile. When a profile is applied to a publication, the users and groups
assigned to that profile receive versions of the publication that are filtered according to the
profile values set for them.
Note:
Inheritance for profile values works in the same way as inheritance works for
security settings.
You can use global profile targets for publications that contain Web Intelligence
documents. You cannot use global profile targets with Crystal reports.
Profile targets and profile values enable a profile to personalize a publication for recipients.
The users and groups specified for a profile receive filtered versions of the same publication
that only display the data most relevant to them.
Consider a situation where a global sales report is distributed to a company’s regional sales
teams in North America, South America, Europe, and Asia. Each regional sales team only
wants to view the data that is specific to their region. The administrator creates a Regional
Sales profile and adds each regional sales team to the profile as a group. The administrator
assigns each regional sales team a corresponding profile value (for example, the North
America Sales group is assigned “North America”). During Publishing, the publisher uses the
Region field in the global sales report as a local profile target, and applies the profile to the
report. The global sales report is filtered according to the profile values set for each regional
sales team. When the global sales report is distributed, each regional sales team receives a
personalized version that only displays regional sales data.
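The Regional Sales scenario above can be sketched in a few lines of code: the profile maps each sales group to a profile value, and the local profile target (the Region field) filters the report per group. The data and names here are invented for illustration; this is not how the platform is programmed.

```python
# A small sketch of the Regional Sales scenario: a profile maps groups
# to profile values, and the local profile target (the Region field)
# filters the global sales report for each recipient group.

profile_values = {
    "North America Sales": "North America",
    "Europe Sales": "Europe",
}

global_sales = [
    {"Region": "North America", "Revenue": 500},
    {"Region": "Europe", "Revenue": 300},
    {"Region": "Asia", "Revenue": 200},
]

def personalize(report, group, target="Region"):
    """Filter the report rows on the profile target for the group's value."""
    value = profile_values[group]
    return [row for row in report if row[target] == value]

print(personalize(global_sales, "Europe Sales"))
# [{'Region': 'Europe', 'Revenue': 300}]
```

Each group receives only the rows matching its profile value, which is exactly the filtered view the scenario describes.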
For example, suppose a recipient named Tony is assigned the value “Mexico” in a Region
profile and “Product Manager” in a Role profile, and both profiles are applied to the same
publication. There are two ways to resolve this:
1. Do not merge
SAP BusinessObjects BI platform determines the different possible views of a publication
that could be delivered and produces a unique view for each case. In the example, Tony
would receive one publication personalized to show data for Mexico, and another
publication that shows product manager data.
2. Merge
With this setting, SAP BusinessObjects BI platform again determines the different
possible views of the data, but this time the nonconflicting profiles are merged. This type
of profile resolution is designed for role-based security. In this example, Tony would
receive a single publication personalized to show data for Mexican product managers.
Conflicts between profile values can also arise when a user inherits two contradictory profile
values as a result of group membership. In general, explicitly assigned profile values override
profile values inherited from group membership. A profile value assigned to a user or a
subgroup overrides the profile value inherited from group membership.
For example, David belongs to the North America Sales and Canada Sales groups. The
Canada Sales group is a subgroup of the North America Sales group. These groups are both
added to the Region profile. From the North America Sales group, David inherits a Region
profile value of “North America”, and from the Canada Sales group, David inherits a Region
profile value of “Canada”. In this case, the profile value that is assigned to the subgroup
overrides the profile value that is assigned to the group, and David receives a publication with
data for Canada.
Conflicts between profile values can also arise when a user is explicitly assigned a profile value
that contradicts a profile value inherited from group membership. For example, Paula belongs
to the North America Sales group, which has a Region profile value of “North America”. The
administrator also assigns Paula a Region profile value of “Spain”. In this case, the profile
value that is assigned to the member overrides the profile value that is inherited from the
group, and Paula receives a publication with data for Spain.
However, sometimes a user can inherit different profile values from two different groups for
one profile. Both groups are hierarchically equal; one group is not a subgroup of the other
group, so one profile value does not override the other. In this case, both profile values are
valid and the user receives a publication instance for each profile value.
As a result of this profile value conflict, sometimes duplicate report instances are included in
different publication instances and sent to the same user. For example, Sandra is a manager
in two North America offices and receives a publication via e-mail that contains two reports.
Report 1 is personalized using the Region profile, for which Sandra inherits the conflicting
profile values “USA” and “Canada” from group membership. Report 2 is personalized using
the Role profile, for which Sandra inherits the profile value “Manager”. If there were no profile
value conflict, Sandra would receive one e-mail with a merged Report 1 instance (USA and
Canada data) and a Report 2 instance (Manager data). Instead, because of the conflict,
Sandra receives two e-mails: one e-mail includes a Report 1 USA instance, the other e-mail
includes a Report 1 Canada instance, and both e-mails have the same Report 2 Manager
instance.
Hint:
To avoid profile value conflicts that result in duplicate publication instances being
sent, when possible, explicitly assign profile values to users instead of allowing
users to inherit profile values from group membership.
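The resolution rules above can be sketched as a small function. This is a hypothetical model, not the BI platform API: an explicitly assigned value overrides inherited ones, a subgroup's value overrides its parent group's, and values inherited from hierarchically equal groups all remain valid.

```python
# Hypothetical sketch of profile value resolution: explicit assignment
# wins; otherwise the deepest (subgroup) inherited value wins; values
# from hierarchically equal groups all stay valid (one instance each).

def resolve(explicit, inherited):
    """explicit: value assigned directly to the user, or None.
    inherited: list of (group_depth, value); a larger depth is a subgroup."""
    if explicit is not None:
        return {explicit}                               # explicit assignment wins
    if not inherited:
        return set()
    deepest = max(depth for depth, _ in inherited)      # subgroup beats parent
    return {v for depth, v in inherited if depth == deepest}

# David: Canada Sales (subgroup) overrides North America Sales
print(resolve(None, [(1, "North America"), (2, "Canada")]))  # {'Canada'}
# Paula: explicit "Spain" overrides the inherited value
print(resolve("Spain", [(1, "North America")]))              # {'Spain'}
# Sandra: two hierarchically equal groups, so both values stay valid
print(resolve(None, [(1, "USA"), (1, "Canada")]))            # both values remain
```

The last case is the one that produces duplicate publication instances: two valid values means two personalized instances for the same user.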
LESSON SUMMARY
You should now be able to:
● Design a publication
● Personalize publications
Learning Assessment
1. What is a publication?
X True
X False
1. What is a publication?
Dynamic recipients are publication recipients who exist outside of the SAP
BusinessObjects Business Intelligence platform. Dynamic recipients already have user
information in an external data source, such as a database or an LDAP or AD directory, but
do not have user accounts in SAP BusinessObjects Business Intelligence platform.
Profiles are objects in the SAP BusinessObjects Business Intelligence platform that let you
classify users and groups. They work with publications to personalize the content that
users see. Profiles link users and groups to profile values, which are values used to
personalize data within a report. Profiles also use profile targets, which describe how a
profile is applied to a report. By assigning different profile values, the data within a report
can be tailored to specific users or groups. Many different personalized versions of the
report are then delivered to your users.
X True
X False
Do not merge: The BI platform generates a separate publication for each unique view
possible. Merge: The BI platform generates a single publication that contains only the data
common among the conflicting profiles.
Lesson 1
Performing Advanced Troubleshooting of a BI Platform Deployment
UNIT OBJECTIVES
LESSON OVERVIEW
This lesson provides information on how to troubleshoot a deployment with advanced
diagnostics and SAP Passport. It details system monitoring with tools, advanced diagnostics,
end-to-end tracing, and system performance testing.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Perform an end-to-end trace with SAP Passport
End-to-End Tracing
The following are considerations when using E2E tracing:
Supportability Challenges
The following are supportability challenges when using E2E tracing:
● Complex architecture
● Challenging to manage a large number of log files
● Tracing must be enabled on all servers
● Difficult to narrow down single workflow
● Unified E2E View not previously possible
End-to-End Scenarios
E2E tracing is useful in many different scenarios.
The SAP Passport client tool injects a unique identifier, the Correlation ID (DSR ROOT
CONTEXT ID), into all HTTP requests for a particular workflow, and this identifier is forwarded
to all servers used in the workflow. SAP support personnel can assemble an end-to-end
trace for the workflow by using this unique identifier.
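The core idea of correlation-based tracing can be sketched in a few lines. The header name and request structure below are invented for illustration; the real mechanism uses the SAP Passport format, not this simplified model.

```python
# Minimal sketch of correlation-based tracing: tag every HTTP request in
# a workflow with one correlation ID so its log entries can be matched
# up across servers later. The header name here is illustrative only.

import uuid

def tag_requests(urls):
    correlation_id = uuid.uuid4().hex.upper()  # one ID for the whole workflow
    tagged = [
        {"url": u, "headers": {"X-CorrelationID": correlation_id}}
        for u in urls
    ]
    return correlation_id, tagged

cid, workflow = tag_requests(["/biprws/logon", "/biprws/raylight/v1"])
# every request in the workflow carries the same identifier
assert all(r["headers"]["X-CorrelationID"] == cid for r in workflow)
```

Because every server that handles a tagged request logs the same identifier, support engineers can later collect all log entries for one workflow by searching for that single value.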
The figure, Request Header, illustrates how Fiddler 2, a third-party host header inspection
tool, provides host header traffic information that is helpful when troubleshooting problems
related to web navigation workflows.
TRACELOG is designed to determine trace levels from the passport and override the current
trace level. TRACELOG will never lower a trace level to match the passport.
Trace Start-Up
The SAP Passport client tool is not installed as part of BI platform. You need to install the SAP
Client plug-in before you can use it.
The BI Platform Support Tool was developed by SAP Active Global Support (AGS) as a
standalone tool to provide visibility into a BI 4.x deployment. The tool is free to download
from the SAP Community Network and supports SAP BusinessObjects BI platform 4.0, 4.1,
4.2, and 4.3.
Currently, the primary feature of the BI Support Tool is the Landscape Inspection Report. The
Landscape Inspection Report connects to a CMS system, performs an extraction from this
system, and then generates a landscape report inside the client. This report can be exported and
sent to SAP support when opening a new message so that the engineer may be aware of the
landscape configuration prior to beginning the message processing. The report includes
details about the server topology, settings, performance metrics, and memory settings. It has
an analysis engine that checks recommended parameter settings and metric values against
predefined thresholds. If any thresholds are breached, it alerts the administrator to take action
before the problem becomes a potential system outage. The report also includes statistics
and metrics regarding repository content, scheduled jobs, system capabilities, and more.
Note:
Check the SCN (http://wiki.scn.sap.com/wiki/display/BOBJ/SAP+BI+Platform
+Support+Tool) often to download updates for free.
The figures, SAP Client Plug-In Settings 1 and SAP Client Plug-In Settings 2 show key steps in
performing an end-to-end trace with SAP Passport.
On the SAP Client Plug-In screen, in the Application field, select Microsoft Internet Explorer
and choose Launch.
Populate the Business Transaction Name and Net Step TraceLevel fields.
On the BI Launchpad home page, navigate to the Reports Tile.
On the Formatting Sample tab page, choose the Refresh Query icon.
Navigate to the automatically generated folder with the Business Transaction Name you
specified in the SAP Client Plug-in Tool.
Open the XML file found in the automatically generated folder and locate the
BusinessTransaction ID.
Navigate to the SBOPWebapp_BIlaunchpad trace GLF file and copy it to the e2e_trace
folder, which serves as a staging directory for all copied .glf files to be analyzed.
Navigate to the SAP BusinessObjects Enterprise\Logging directory and do a search for all .glf
files.
In the Open Files dialog box, choose Add Files and browse to the e2e_trace directory you
created that has all of the copied .glf files.
In the GLF Viewer, choose View and Indent Text According to Scope.
Review the information that provides workflow details regarding the error the user is
encountering.
In the GLF Viewer, choose File and Export current (filtered) view.
1. Close all the browsers on your image. Using Windows Explorer, browse to the directory
containing the SAP Client plug-in.
3. Set the option for Application to Microsoft Internet Explorer and choose Launch.
4. Define the following properties before you begin your workflow. Under Business
Transaction Name choose a name that describes the workflow that you are capturing and
set TraceLevel to High.
5. Before you start the E2E trace, queue up the browser to the beginning of the workflow you
want to trace. In the same browser window, launch the BI launchpad and log on.
6. Click the Documents tab, then browse to the document folder for which you want to create
the end-to-end trace.
1. In the SAP Client Plug-in window, choose Start Transaction, and then right-click the
document and choose View.
3. When the document has finished refreshing, choose Stop Transaction in the SAP Client
Plug-In window. You might receive the message “Settings are not valid.” You can ignore
this message, because it relates to the SMD Agent settings for Solution Manager. Choose
OK and click Exit to close the SAP Client Plug-In window.
1. Browse to the same directory where you executed the SAP Client Plug-in. Then, browse to
the log directory.
2. In the log directory, browse to the folder named after your Business Transaction Name.
3. Open the file BusinessTransaction.xml, and search for the string BusinessTransaction id=.
Locate the ID associated with the BusinessTransaction. This is the unique identifier
associated with your E2E trace. You will need this ID later.
4. Because the location of the BI launchpad trace (Web Application trace) is not in the same
location as the default logging, copy this trace to a temporary directory. Browse to the C:\
drive and, according to the modified date timestamp, locate the newest folder named
SBOPWebapp_BIlaunchpad_*. Open this folder and copy the *.glf file to S:\temp.
5. Copy the BI4 server traces from the default logging directory to the temporary directory.
Browse to S: → Program Files (x86) → SAP BusinessObjects → SAP BusinessObjects
Enterprise XI 4.0 → logging and filter by *.glf. Change the file view to details and sort by
modified date. Finally, copy *.glf from today’s date to the folder S:\temp.
6. Download the latest version of GLF Viewer from the following SMP link. Extract the
downloaded zip file to a folder on the S: drive (for example, S:\GLFViewer). Browse to
S:\GLFViewer and launch the GLF Viewer application by executing the file named
runGLFViewer.bat.
7. Inside the GLFViewer, choose File → Open. Choose Add Files and add all *.glf files
generated with today’s date.
8. Confirm that the option Should merge all into a single tab is checked. This option will
create the end-to-end view of your traced workflow.
9. Check the option to Filter and only read matching entries, and then under the Column
option select the field named DSRRootContextID. The operator should be contains, and in
the text box, paste in the BusinessTransaction ID that you found previously. Then choose
OK.
2. Choose Analysis → Show Event Analyzer. When prompted, choose No, this component is
not a CMC, until you are no longer prompted.
3. Expand the Event Analyzer to enlarge the view and choose Start. The Event Analyzer will
play through the end-to-end trace and show you which component is issuing the call-in
sequence. With this view, you can see which function or method call is waiting on one
component while other functions complete on other components.
4. Stop and close the Event Analyzer. Next, choose View → Indent Text According to Scope.
5. Now inspect the Text column. Notice that transactions that are part of the same call stack
will be indented, making it easier to follow the flow of function calls within a particular
component.
7. Choose Analysis → List Unique to see all the individual servers and components listed in
the trace. This is particularly useful if you want to know more about the BI 4.x servers
required for the processing of your E2E workflow.
8. Save your E2E trace as one trace file by choosing File → Export current (filtered) view.
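Conceptually, the GLF Viewer filter configured in the steps above does something like the following. The log-line format here is invented for illustration; real .glf files are considerably more complex.

```python
# Conceptual sketch of the GLF Viewer filter: keep only log lines whose
# DSRRootContextID column contains the BusinessTransaction ID, leaving
# just the entries belonging to the traced workflow.

def filter_trace(lines, transaction_id):
    """Return the log lines belonging to one traced workflow."""
    return [line for line in lines if transaction_id in line]

log_lines = [
    "2024-01-01|cms|DSRRootContextID=ABC123|Logon request",
    "2024-01-01|webi|DSRRootContextID=XYZ999|Unrelated call",
    "2024-01-01|webi|DSRRootContextID=ABC123|Refresh query",
]
print(filter_trace(log_lines, "ABC123"))  # keeps the first and third lines
```

Filtering on the single Correlation ID is what collapses the merged traces from all servers into one end-to-end view of the workflow.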
LESSON SUMMARY
You should now be able to:
● Perform an end-to-end trace with SAP Passport
Learning Assessment
X A CORRELATION ID
X B SI_ID
X C User ID
X D Thread ID
2. E2E Tracing with SAP Passport will help with which of the following?
Choose the correct answer.
3. The SAP Passport client tool is not installed as part of the BI platform.
Determine whether this statement is true or false.
X True
X False
Lesson 1
Replicating a BI Platform Deployment
UNIT OBJECTIVES
LESSON OVERVIEW
This lesson provides instruction on the Federation application and deployment replication. It
teaches how to replicate a job and detect and resolve conflicts.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Identify the uses of the Federation application
● Apply replication techniques and best practices
Federation Overview
Federation is a cross-site replication tool for working with multiple SAP BusinessObjects
Business Intelligence platform deployments in a global environment. Content can be created
and managed from one BI platform deployment and replicated to other BI platform
deployments across geographical sites on a recurring schedule. You can run both one-way
and two-way replication jobs.
● Cross-site tool
● Multiple deployments
● Global environment
● Content management across geographical sites from a single site
● Recurring schedule
● One-way and two-way replication
● Increased performance for end-users
Federation allows you to have separate security models, life cycles, testing, and deployment
times, as well as different business owners and administrators. For example, you can delegate
administration so that the sales application administrator is restricted from changing a
human resources application.
● When you replicate content using Federation, you can do the following:
- Simplify administration needs for multiple deployments.
- Provide a consistent rights policy across multiple offices for global organizations.
- Obtain information faster and process reports at remote sites where data resides.
- Save time by retrieving local and dispersed data faster.
- Synchronize content from multiple deployments without writing custom code.
Federation supports the following object types:
● Business Views: Business View Manager, DataConnection, LOVx, Data Foundation, and
similar object types. All objects are supported, although not at the individual level.
● Reports: SAP Crystal reports, Web Intelligence, and Lumira Designer. Full client add-in
and templates are supported.
● Third-Party Objects: Microsoft Excel, PDF, PowerPoint, Flash, Word, text, and rich text
● Users: Users, groups, Inboxes, Favorites, and Personal Category
● Business Intelligence platform: Folders, Events, Categories, Calendars, Access Levels,
Hyperlinks, Shortcuts, Programs, Profiles, Object Packages, Agnostic
● Universe: Universe, Connections, and UniverseOverload
Replication Examples
Scenario 1: A retail company with a centralized design wants to send a monthly sales report to
the different store locations using the one-way replication method. The administrator at the
origin site creates a report that administrators at each destination site replicate and run
against that store’s database.
Hint:
Localized instances can be sent back to the origin site and maintain each
object's replicated information. For example, the appropriate log, database
connection information, and so on, are applied.
Scenario 2: A company wants to set up remote scheduling with distributed access. The data is
at the origin site. Pending replication jobs are sent to the origin site to run. Completed
replication jobs are then sent back to the destination sites for viewing. For example, the data
for a report might not be available on the destination site, but the user can configure the
reports to run on the origin site before the complete report is sent back to the destination site.
Replication Scenario 1
Replication Scenario 2
Federation Terms
BI application
The logical grouping of related Business Intelligence (BI) content with a specific purpose
and audience. A BI application is not an object. One BI platform deployment can host
multiple BI applications, each of which can have a separate security model, life cycle,
testing and deployment timeline, as well as separate business owners and
administrators.
Destination site
A BI platform system that pulls replicated BI platform content from an origin site.
Local
The local system where a user or administrator is connected. For example, the
administrator of a destination site is considered “local” to the destination site.
Locally run completed instances
Instances that are processed on the destination site and then sent back to the origin site.
Multiple origin sites
More than one site can serve as an origin site. For example, multiple development
centers generally have multiple origin sites. However, there can be only one origin site
per replication.
One-way replication
Objects are replicated only in one direction: from the origin site to the destination site.
Any updates made at a destination site remain at that destination site.
Origin site
The BI platform system where the content originates.
Remote
A system that is not local to a user. For example, the origin site is considered “remote” to
users and administrators of the destination site.
Remote connection
An object that contains information used to connect to a BI platform deployment,
including username and password, CMS name, WebService URI, and clean-up options.
Remote scheduling
Schedule requests that are sent from the destination site to the origin site. Reports on
destination sites can be scheduled remotely, which sends the report instance back to the
origin site for processing. Then the completed instance is returned to the destination site.
Replication
The process of copying content from one BI platform system to another.
Replication job
An object that contains information about replication scheduling, which content to
replicate, and any special conditions that should be performed when replicating content.
Replication list
A list of the objects to be replicated. A replication list refers to other content such as
users, groups, reports, and so on, in the BI platform deployment to be replicated
together.
Replication object
An object that is replicated from an origin site to a destination site. All replicated objects
on a destination site will be flagged with a replication icon. If there is a conflict, objects will
be flagged with a conflict icon.
Replication package
Created during the transfer, the replication package contains objects from a replication
job. It can contain all the objects defined in the replication list, as in the case of a rapidly
changing environment or initial replication. Or it can contain a subset of the replication
list if the objects change infrequently compared to the schedule of the replication job.
The replication package is implemented as a BI Application Resource (BIAR) file.
Replication refresh
All objects in a replication list are refreshed regardless of the last modified version.
Two-way replication
Two-way replication acts the same as one-way replication, but two-way replication also
sends changes in both directions. Updates to the origin site are replicated to each
destination site. Updates and new objects on a destination site are sent to the origin site.
● One-way replication
● Two-way replication
● Refresh from origin
● Refresh from destination
Depending on your selection of replication type and replication mode, you will create one of
four different replication job options: one-way replication, two-way replication, refresh from
origin, or refresh from destination.
One-way replication
With one-way replication, you can only replicate content in one direction—from the origin site
to a destination site. Any changes made to objects on the origin site in the replication list are
sent to the destination site. However, changes made to objects on a destination site are not
sent back to the origin site.
One-way replication is ideal for deployments with one central BI platform deployment where
objects are created, modified, and administered. Other deployments use the content of the
central deployment.
Two-way replication
With two-way replication, you can replicate content in both directions between the origin and
destination sites. Any changes made to objects on the origin site are sent to destination sites,
and changes made on a destination site are sent to the origin site during replication.
To perform remote scheduling and to send locally run instances back to the origin site, you
must select two-way replication mode.
If you have multiple BI platform deployments where content is created, modified,
administered, and used at each location, two-way replication is the most efficient option
and helps keep the deployments synchronized.
Refresh from origin or refresh from destination
When you replicate content in one-way or two-way replication modes, the objects on the
replication list are replicated to a destination site. However, not all of the objects might
replicate each time the replication job runs. Federation has an optimization engine designed
to help finish your replication jobs faster. It uses a combination of the object’s version and
time stamp to determine if the object was modified since the last replication. This check is
done on objects specifically selected in the replication list and any objects replicated during
dependency checking. However, in some cases the optimization engine might miss objects,
which then are not replicated. In these cases, you can use Refresh from Origin and Refresh
from Destination to force the replication job to replicate content and any dependencies,
regardless of the timestamps. Refresh from Origin only sends content from the origin to the
destination sites. Refresh from Destination only sends content from the destination sites to
the origin site.
For example, suppose Folder B, which contains Report B, is moved or copied on the origin
site, so its content must be replicated to the destination site. However, Report B's
timestamp does not change, so it will be missed by a regular one-way or two-way
replication job.
To ensure Folder B’s content is properly replicated, a replication job with Refresh from Origin
should be used once. After this, the regular one-way or two-way replication job will replicate it
properly. If this example is reversed and Folder B is moved or copied on the destination site,
then use Refresh from Destination.
Note:
After importing new objects into an area that is being replicated on the origin site,
it is recommended that you run a “Refresh from Origin” replication job. After
importing new objects into an area that is being replicated on the destination site,
it is recommended that you run a “Refresh from Destination” replication job.
● Scenario 2: The addition of new objects using LifeCycle Manager or the BIAR command
line
- An area is being replicated using LifeCycle Manager or the BIAR command line.
- Objects are added to that area.
- The internal clocks on the origin and destination systems might not be synchronized.
- The added objects might not be picked up during a regular one-way or two-way
replication job.
Note:
This scenario can be costly for large replication lists, so do not use this option often.
For example, do not schedule Refresh from Origin or Refresh from Destination jobs on
an hourly basis. Use these modes on demand ("run now") or on infrequent schedules.
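Scenario 2 above mentions adding objects with the BIAR command line. As a hedged sketch of what such an import can look like, the example below writes a properties file and passes it to biarengine.jar. The property key names follow the format commonly documented for the BIAR engine, but the CMS host, credentials, and file paths are placeholder assumptions; verify the exact keys and the location of biarengine.jar against your BI platform release before use.

```shell
# Create a hypothetical properties file for a BIAR import.
# Every value below is a placeholder, not a real deployment detail.
cat > import.properties <<'EOF'
action=importXML
importBiarLocation=/biar/sales_reports.biar
CMS=cms-host:6400
userName=Administrator
password=changeit
authentication=secEnterprise
EOF

# Invoke the BIAR engine with the properties file, if the jar is present
# (its path varies by installation).
if [ -f biarengine.jar ]; then
    java -jar biarengine.jar import.properties
fi
```

After an import performed this way into a replicated area, the note above applies: run a Refresh from Origin (or Refresh from Destination) job once so the new objects are picked up regardless of timestamps.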
In some cases, certain conflict resolution options cannot be used:
● Refresh from Origin: the Destination site wins option is blocked.
● Refresh from Destination: the Origin site wins option is blocked.
If a conflict occurs and you select No automatic conflict resolution, the conflict is not resolved,
a log file is not generated, and it does not appear in the conflicting object list. Administrators
can access a list of all replicated objects that are in conflict in the Federation area of the CMC.
Objects in conflict are grouped together by the remote connection used to connect to
the origin site. To access these lists, go to the Replication Errors folder in the Federation
area of the CMC, and select the desired remote connection. All replicated objects on a
destination site will be flagged with a replication icon. If there is a conflict, objects will be
flagged with a conflict icon. A warning message also appears in the Properties page.
might require configurations with smaller or larger replication sizes. If a single
replication job contains a large number of objects, you can take additional steps to
ensure that it runs successfully.
LESSON SUMMARY
You should now be able to:
● Identify the uses of the Federation application
● Apply replication techniques and best practices
Learning Assessment
1. Federation is a cross-site replication tool for working with a single SAP BusinessObjects
Business Intelligence platform deployment.
Determine whether this statement is true or false.
X True
X False
Lesson 1: Planning for Disaster Recovery
LESSON OVERVIEW
In this lesson, we consider the importance of being prepared for a disaster and best
practices for recovery strategies.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe guidelines and best practices for disaster recovery
● In-place restore is the recovery of your production system on the same hardware.
● Disaster recovery is the recovery of your production system in a remote location.
Backup Strategy
The following are considerations when establishing a backup strategy:
● The FRS (Input and Output) folders should be on a highly available SAN or NAS storage
device.
● The CMS database (and, optionally, the Audit database) should be on a database cluster
with regular backups:
- Weekly full backup
- Daily incremental backup
- Transaction log backup
● File and machine configurations should be documented and stored on the network.
● VMs should also be backed up (weekly).
● Backup files should be stored in another geographic location.
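As a hedged sketch, the weekly-full/daily-incremental cadence above could be expressed as cron entries. The database tool (pg_dump is used only as an example), the FRS and backup paths, and the off-site host are all placeholder assumptions, not SAP-mandated commands; adapt them to your actual CMS database and FRS locations.

```shell
# Write an illustrative crontab implementing the backup cadence described
# above. All paths, commands, and hosts are placeholders.
cat > bobj-backup.cron <<'EOF'
# Weekly full backup of the CMS database (Sunday 01:00)
0 1 * * 0  pg_dump -Fc cms_db > /backup/cms/full_$(date +\%F).dump
# Daily incremental-style sync of the FRS Input/Output folders (02:00)
0 2 * * *  rsync -a /bobj/frsinput/  /backup/frs/input/
15 2 * * * rsync -a /bobj/frsoutput/ /backup/frs/output/
# Weekly copy of the backup set to an off-site location (Sunday 03:00)
0 3 * * 0  rsync -a /backup/ backupuser@dr-site:/backup/
EOF
```

Note that rsync performs a differential sync rather than a true incremental backup with point-in-time restore; for transaction-log backups of the CMS database, use the native tooling of your database platform.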
The figure, High Availability Components, illustrates a high-level architecture for disaster
recovery planning.
Once the disaster recovery architecture has been designed, the figure, Migration and Back-up
of CMS Data, illustrates the workflow for managing the recovery process.
● Disaster
Any unanticipated event that creates a significant problem can be called a disaster.
Disasters are usually classified in terms of severity.
● Disaster Recovery Planning
Disaster recovery comprises the processes, policies, and procedures involved in restoring
operations that are critical to the resumption of business.
Network Storage
When discussing network storage, it is important to understand terms such as SAN
(storage area network), NAS (network-attached storage), and NAS head.
Note:
A NAS head is a translator for SAN storage.
The figure, Network Storage in the Environment, illustrates a high-level architecture of how
NAS devices can be leveraged in your SAP BusinessObjects BI platform 4.3 disaster recovery
plan.
LESSON SUMMARY
You should now be able to:
● Describe guidelines and best practices for disaster recovery
Learning Assessment
The time frequency recommendations for backups are a weekly full backup and daily
incremental backups.