SERVING THE GLOBAL FORTUNE 500
BATCH OPTIMIZATION
FOR MAINFRAME DATACENTERS
A DATAKINETICS WHITEPAPER
Batch Optimization for Mainframe Datacenters
Table of Contents
Batch Optimization for Mainframe Datacenters
Mainframe Batch Processing Today
Pressures on the Batch Window
Failure to complete batch processing on time
Contemporary Solutions for Batch Woes
Modern Mainframe Batch Performance and Cost Optimization Techniques
In-memory optimization of batch applications
IT business intelligence
Conclusion
The next step
DataKinetics Data Performance & Optimization | 50 Hines Road, Suite 240 Ottawa, ON, Canada K2K 2M5 | © 2023 DataKinetics 2
In essence, batch jobs perform many read, write, and sort activities on
sequential files. There is no manual intervention or input required unless, of
course, a job does not end successfully. The job should run automatically once
it begins and continue until it is completed. Batch window processing often
takes place during off-hours or non-peak hours, and it is not unusual for OLTP
to wait on the completion of the batch processing, as it requires files to be
updated or tables to be current. Again, timing of execution and completion
within a predefined time window (or windows) is critical.
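The read-sort-write pattern described above can be sketched in a few lines. This is a simplified illustration in Python (a real mainframe batch step would be defined in JCL and written in COBOL or similar); the file names and comma-separated record layout are hypothetical:

```python
# Minimal sketch of one batch step: read a sequential input file,
# sort its records on a key, and write the result. Once started,
# it runs to completion with no manual intervention.

def run_batch_step(input_path, output_path):
    # Read phase: load all records from the sequential input file.
    with open(input_path) as f:
        records = [line.rstrip("\n").split(",") for line in f if line.strip()]

    # Sort phase: order records by the key in the first field.
    records.sort(key=lambda rec: rec[0])

    # Write phase: emit the sorted records to the output file.
    with open(output_path, "w") as f:
        for rec in records:
            f.write(",".join(rec) + "\n")
    return len(records)
```

In practice many such steps are chained by the scheduler, each consuming the files the previous step produced, which is why a single late or failed step can push the whole stream past its window.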
Other pressures on the batch window come from the need to handle larger
volumes of data and the need to incorporate additional functions. With these
changes, the processing time required to complete the batch jobs increases,
sometimes exceeding the available batch windows and often leading to severe
congestion in the batch window.
Contemporary Solutions for Batch Woes
Scheduling: Scheduling solutions require scheduling software and ongoing monitoring effort, and are
of moderate complexity. They demand constant vigilance and frequent intervention by operators, and
their effectiveness decreases over time. This solution will be costly, and may or may not be effective in
reaching your specific batch goals.
Hardware Upgrades: Adding additional hardware equates to more MIPS. This option can be expensive
but low in complexity because no application code changes are needed, no changes to the database
are needed, and no changes are needed to existing monitoring and optimization processes. This
solution will also be costly, and likewise may or may not be effective in reaching your specific batch goals.
Db2 optimization: This can take significant elapsed time to implement, particularly when it is difficult
to identify what needs to be optimized. Ongoing optimization would still be required as the Db2 queries
change over time and with the implementation of new requests for data. Often specialized software tools
are required to pursue this option, and in the end, it may not make a difference in reaching your batch goals.
Run Batch and OLTP Concurrently: Running batch and OLTP concurrently removes the concern
about a batch job not completing within a window of time, but this option typically reduces the
performance of OLTP and negatively impacts client experiences. Transaction response time is often
slower because batch processes are running at the same time. There are challenges in accessing
the same data concurrently. Lockouts would keep data from being accessed by both processes at
the same time, or data may not be current if another process is using it. The result
is a deterioration in the experience of the company’s customers.
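The lock contention described above can be illustrated with a small sketch (a hedged Python analogy using OS threads and a simple lock; the job names and timings are hypothetical, and real Db2 locking is far more granular):

```python
import threading
import time

# Hypothetical illustration: a batch job holds a lock on shared data
# while it runs, so an OLTP transaction arriving mid-batch must wait,
# lengthening its response time.

data_lock = threading.Lock()
events = []

def batch_job():
    with data_lock:              # batch acquires the shared-data lock first
        events.append("batch start")
        time.sleep(0.2)          # stand-in for a long-running batch update
        events.append("batch done")

def oltp_transaction():
    time.sleep(0.05)             # transaction arrives while batch is running
    with data_lock:              # blocked until the batch job releases the lock
        events.append("oltp served")

b = threading.Thread(target=batch_job)
t = threading.Thread(target=oltp_transaction)
b.start(); t.start()
b.join(); t.join()
```

The OLTP transaction is only served after the batch job finishes, even though it arrived well before the batch step completed; that queuing delay is what end customers experience as slow response.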
Data in Memory (DIM): One of IBM’s recommendations from the IBM Redbook on Batch Modernization
on z/OS, is to use DIM to reduce repeated I/O calls of the same data. According to the Redbook, “It
is recommended to read such data only once from the database and cache it somewhere for further
processing. This will prevent your system from running unnecessary round trips to the database.”
IBM’s DIM solution requires spare memory and spare CPU capacity, additional software, and ongoing
monitoring and optimization. The inherent code changes are typically extensive; as a result, this
solution is highly complex and expensive to implement. In IBM’s own words, “Implementing data in
memory techniques is a complex task.”
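The Redbook's advice to read such data only once and cache it for further processing amounts to a read-through cache. A minimal sketch of the idea in Python follows; the fetch function, the rate table, and the keys are hypothetical stand-ins for a real database round trip:

```python
# Sketch of the data-in-memory idea: read frequently used reference
# data from the database once, cache it, and serve all subsequent
# requests from memory.

db_reads = 0

def fetch_from_db(key):
    # Stand-in for an expensive database round trip.
    global db_reads
    db_reads += 1
    return {"CAD": 1.35, "EUR": 0.92}[key]   # hypothetical rate table

cache = {}

def lookup(key):
    # Read-through cache: hit memory first, fall back to the database.
    if key not in cache:
        cache[key] = fetch_from_db(key)
    return cache[key]

# A batch loop touching the same keys thousands of times causes
# only one database read per key.
for _ in range(10_000):
    lookup("CAD")
    lookup("EUR")
```

The complexity IBM warns about comes not from this core idea but from retrofitting it into existing applications: deciding which data is safe to cache, keeping the cache consistent, and managing the memory it consumes.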
Despite all of these contemporary solutions, there is only so much improvement possible, and often, when systems and
applications change, these efforts have to be repeated. Fortunately, a handful of third-party batch optimization
solutions have been helping IT organizations reach their batch goals for years. For the most part they are “fire-and-forget”:
their impact is long-term, and most of them improve performance and lower operational cost at the same time.
In-memory optimization of batch applications
This technology works by allowing select data to be accessed over a much shorter code path than the standard database
path. The typical Db2 code path requires 10,000 to 100,000 machine cycles, and that includes any type of buffered access.
Figure 1 shows this code path (top).
[Figure 1: The calling application’s standard Db2 code path (BSDS, logs, SQL parse, SQL optimizer, record mapping, index manager, buffer pools, IRLM, DASD) compared with the direct tableBASE driver path]
Data accessed using high-performance in-memory technology uses only 400 machine cycles – Figure 1 also shows this code
path (bottom). Only a small portion of your read-only data – the data accessed most often – needs to be accessed in this way.
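The effect of the two code paths in Figure 1 can be illustrated off-mainframe with a rough analogy: a full SQL round trip (parse, optimize, fetch) against a plain in-memory lookup. This Python sketch uses SQLite and a dictionary as hypothetical stand-ins for Db2 and tableBASE; absolute timings depend on the machine, but the gap between the two paths is the point:

```python
import sqlite3
import timeit

# Rough analogy for Figure 1: an SQL round trip versus a direct
# in-memory lookup of the same read-only reference data.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rates (code TEXT PRIMARY KEY, rate REAL)")
conn.executemany("INSERT INTO rates VALUES (?, ?)",
                 [("CAD", 1.35), ("EUR", 0.92)])

in_memory = {"CAD": 1.35, "EUR": 0.92}   # the in-memory copy of the table

def via_sql():
    # Full round trip: parse, plan, and execute the query each call.
    return conn.execute(
        "SELECT rate FROM rates WHERE code = ?", ("CAD",)).fetchone()[0]

def via_memory():
    # Short path: a direct hash lookup in process memory.
    return in_memory["CAD"]

sql_time = timeit.timeit(via_sql, number=10_000)
mem_time = timeit.timeit(via_memory, number=10_000)
print(f"SQL path: {sql_time:.4f}s, in-memory path: {mem_time:.4f}s")
```

Both paths return the same value; only the cost per access differs, which is why caching just the most frequently read data yields most of the benefit.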
IT business intelligence
IT organizations collect tremendous amounts of data about their own computing resources every day, covering mainframe
and midrange servers, whether located locally or in third-party datacenters. So much data is collected that you could call
it their own “IT Big Data.” With the right toolsets, this IT data can be used to reduce the cost of batch running on your
mainframe, and can help identify low-priority batch candidates to offload to other platforms.
IT business intelligence identifies lower-priority batch workloads that are potential candidates for reprioritization,
re-platforming, or even elimination. This can directly contribute to improved performance, especially during peak and
mission-critical workloads (see Figure 3).
Figure 3: A low-priority batch workload contributes to the peak workload of the week
IT business intelligence can also show which departments are using mainframe resources, and how much that is costing.
This information can further help to re-prioritize batch processing based on the new-found transparency of departmental
spending patterns.
[Chart: MIPS per calendar date and time of day by business area, plotted against cost per month ($0–$100,000) and MIPS (0–1,000)]
Conclusion
Third-party batch optimization solutions can help large IT organizations running mission-critical batch processing to
reduce their execution times by anywhere from a factor of two to two orders of magnitude, depending on their business type
and the specific characteristics of their batch applications. Each solution shown in this paper can make a significant
difference on its own; together, they can make a large dent in ongoing batch processing costs, and most will also help
reduce batch run times.
The next step
DataKinetics Professional Services staff will work with you to outline a high-level project plan and approach: reviewing
existing application code and environments, and helping to implement a proof of concept. This consists of migrating sample
programs and data for the proof of concept, which will demonstrate the benefits. We will then work with you to provide a
proposal based on your current IT plan.
© DataKinetics Ltd., 2023. All rights reserved. No part of this publication may be reproduced without the express written permission of DataKinetics Ltd.
DataKinetics and tableBASE are registered trademarks of DataKinetics Ltd. Db2 and z/OS are registered trademarks of IBM Corporation. All other trademarks,
registered trademarks, product names, and company names and/or logos cited herein, if any, are the property of their respective holders.