MRO
CICS® multiregion operation (MRO) enables CICS systems that are running in the same
MVS™ image, or in the same MVS sysplex, to communicate with each other. MRO does
not support communication between a CICS system and a non-CICS system such as
IMS™.
Note:
The external CICS interface (EXCI) uses a specialized form of MRO link to support DCE remote procedure calls to CICS programs, and communication between z/OS batch programs and CICS.
ACF/VTAM and SNA networking facilities are not required for MRO. The support
within CICS that enables region-to-region communication is called interregion
communication (IRC). IRC can be implemented in three ways: through the IRC access
method, through MVS cross-memory services (the XM access method), or through the
MVS cross-system coupling facility (the XCF access method).
CICS regions linked by MRO can be at different release levels. If an MVS image
contains different releases of CICS, all using MRO to communicate with each other (or
XCF/MRO to communicate with regions in other images in the sysplex), the DFHIRP
module in the MVS LPA should be that from the most current CICS release in the image,
or higher. For full details of the software and hardware requirements for XCF/MRO, see
the installation documentation.
MRO supports intercommunication facilities including:
• Function shipping
• Asynchronous processing
• Transaction routing
CICS regions linked by XCF/MRO can be at different release levels. Depending on the
versions of CICS installed in the MVS images participating in XCF/MRO, the versions of
DFHIRP installed in the link pack areas of the MVS images can be different. If a single
MVS image contains different releases of CICS, all using XCF/MRO to communicate
with regions in other images in the sysplex, the DFHIRP module in the MVS LPA should
be that from the most current CICS release in the image, or higher.
Figure 2. A sysplex (SYSPLEX1) comprising two MVS images (MVS1 and MVS2). In
this illustration, the members of the CICS group, DFHIR000, are capable of
communicating via XCF/MRO links across the MVS images.
In MVS1, the DFHIRP module in the LPA should be at the level of the highest CICS TS
z/OS release in the image.
In MVS2, because that image contains only CICS TS OS/390, Version 1 Release 3
regions, the DFHIRP module in the LPA can be at the CICS TS OS/390, Version 1
Release 3 level, or later.
The MRO links between CICS1 and CICS2, and between CICS3 and CICS4, use either
the IRC or XM access method, as defined for the link. The MRO links between the CICS
regions on MVS1 and the CICS regions on MVS2 use the XCF access method, which
CICS selects dynamically.
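As a sketch only (the connection name, group, and partner APPLID below are hypothetical), the access method for an MRO link is specified on its CONNECTION definition; when the partner region is in another MVS image, CICS substitutes XCF automatically:

```
* Hypothetical DFHCSDUP input defining an MRO link to a partner
* region whose APPLID is CICS2. ACCESSMETHOD(XM) requests MVS
* cross-memory services; ACCESSMETHOD(IRC) requests the IRC
* access method.
DEFINE CONNECTION(CIC2) GROUP(MROGRP)
       NETNAME(CICS2) ACCESSMETHOD(XM) INSERVICE(YES)
```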
Benefits of XCF/MRO
• Easier connection resource definition than for ISC links, with no VTAM tables to
update.
• Easy transfer of CICS systems between MVS images. The simpler connection
resource definition of MRO, and having no VTAM tables to update, makes it
much easier to move CICS regions from one MVS to another. You no longer need
to change the connection definitions from CICS MRO to CICS ISC (which, in any
event, can be done only if CICS startup on the new MVS is a warm or cold start).
2. XCF. The MVS/ESA™ cross-system coupling facility that provides MVS™ coupling
services. XCF services allow authorized programs in a multisystem environment to
communicate (send and receive data) with programs in the same, or another, MVS image.
Multisystem applications, including MVS components and application subsystems (such
as CICS®), can use the services of XCF to communicate across a sysplex. See the
MVS/ESA Setting Up a Sysplex manual, GC28-1449, for more information about the use
of XCF in a sysplex.
3. Coupling facility links. High-bandwidth fiber optic links that provide the high-speed
connectivity required for data sharing between a coupling facility and the central
processor complexes attached to it.
4. XCF/MRO does not support shared data tables. Shared access to a data table, across two
or more CICS regions, requires the regions to be in the same MVS image. To access a
data table in a different MVS image, you can use function shipping.
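Function shipping of this kind needs only a remote resource definition in the local region. For example (all names hypothetical), a file owned by the region known locally as CICB might be defined like this:

```
* Hypothetical DFHCSDUP input: requests against CUSTFILE in this
* region are function-shipped to the region with SYSIDNT CICB
DEFINE FILE(CUSTFILE) GROUP(SHIPGRP)
       REMOTESYSTEM(CICB) REMOTENAME(CUSTFILE)
```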
Program development
The testing of newly-written programs can be isolated from production work by running
a separate CICS® region for testing. This permits the reliability and availability of the
production system to be maintained during the development of new applications, because
the production system continues even if the test system terminates abnormally.
By using function shipping, the test transactions can access resources of the production
system, such as files or transient data queues. By using transaction routing, terminals
connected to the production system can be used to run test transactions.
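Transaction routing of this kind is driven by a remote transaction definition in the production (terminal-owning) region; a sketch, with hypothetical names:

```
* Hypothetical definition in the production region: TST1 actually
* runs in the test region, known locally by SYSIDNT TEST
DEFINE TRANSACTION(TST1) GROUP(TESTGRP)
       REMOTESYSTEM(TEST) REMOTENAME(TST1)
```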
The test system can be started and ended as required, without interrupting production
work. During the cutover of the new programs into production, terminal operators can
run transactions in the test system from their regular production terminals, and the new
programs can access the full resources of the production system.
Time-sharing
If one CICS system is used for compute-bound work, such as APL or ICCF, as well as
regular DB/DC work, the response time for the DB/DC user can be unduly long. It can be
improved by running the compute-bound applications in a lower-priority address space
and the DB/DC applications in another. Transaction routing allows any terminal to access
either CICS system without the operator being aware that there are two different systems.
You can use the storage protection and transaction isolation facilities of CICS
Transaction Server for z/OS®, Version 3 Release 1 to guard against unreliable
applications that might otherwise bring down the system or disable other applications.
In addition, you can use MRO to extend the level of protection.
For example, you could define two CICS regions, one of which owns applications that
you have identified as unreliable, and the other the reliable applications and the database.
The fewer the applications that run in the database-owning region, the more reliable this
region will be. However, the cross-region traffic will be greater, so performance can be
degraded. You must balance performance against reliability.
You can take this application of MRO to its limit by having no user applications at all in
the database-owning region. The online performance degradation may be a worthwhile
trade-off against the elapsed time necessary to restart a CICS region that owns a very
large database.
Departmental separation
Multiprocessor performance
Using MRO, you can take advantage of a multiprocessor by linking several CICS
systems into a CICSplex, and allowing any terminal to access the transactions and data
resources of any of the systems. The system programmer can assign transactions and data
resources to any of the connected systems to get optimum performance. Transaction
routing presents the terminal user with a single system image; the user need not be aware
that there is more than one CICS system.
In an OS/390® sysplex, you can use MRO and XCF/MRO links to create a CICSplex
consisting of sets of functionally equivalent terminal-owning regions (TORs) and
application-owning regions (AORs). You can then perform workload balancing using
VTAM generic resources, dynamic transaction routing, dynamic routing of DPL
requests, CICSPlex® SM, and the MVS workload manager.
A terminal user, wishing to start a session with a CICSplex that has several terminal-
owning regions, uses the generic resource name in the logon request. Using the generic
resource name, VTAM is able to select one of the CICS TORs to be the target for that
session. For this mechanism to operate, the TORs must all register to VTAM under the
same generic resource name. VTAM is able to perform workload balancing of the
terminal sessions across the available terminal-owning regions.
The terminal-owning regions can in turn perform workload balancing using dynamic
transaction routing. Application-owning regions can route DPL requests dynamically.
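A transaction becomes eligible for dynamic routing when its definition in the TOR specifies DYNAMIC(YES); the dynamic routing program (DFHDYP is the CICS-supplied default) then selects the target AOR at run time. A sketch, with hypothetical names:

```
* Hypothetical TOR definition: ORD1 is eligible for dynamic
* routing; the dynamic routing program picks the AOR at run time
DEFINE TRANSACTION(ORD1) GROUP(ROUTGRP)
       DYNAMIC(YES)
```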
The CICSPlex SM product can help you manage dynamic routing across a CICSplex.
For further information about VTAM generic resources, see the VTAM Version 4 Release
2 Release Guide. Dynamic routing of DPL requests is described in Dynamically routing
DPL requests in this book, and dynamic transaction routing in Dynamic transaction
routing. For an overview of CICSPlex SM, see the CICSPlex SM Concepts and Planning
manual. For information about the MVS workload manager, see the CICS Performance
Guide.
In some large CICS systems, the amount of virtual storage available can become a
limiting factor. In such cases, it is often possible to relieve the virtual storage problem by
splitting the system into two or more separate systems with shared resources. All the
facilities of MRO can be used to help maintain a single-system image for end users.
Note:
If you are using DL/I databases, and want to split your system to avoid virtual storage
constraints, consider using DBCTL, rather than CICS function shipping, to share the
databases between your CICS address spaces.
It is always necessary to define an MRO link between the two regions and to provide
local and remote definitions of the shared resources. These operations are described in
Defining intercommunication resources.
The external CICS interface (EXCI) uses a specialized form of MRO link to support DCE
remote procedure calls to CICS programs, and communication between z/OS batch
programs and CICS.
MRO does not require ACF/VTAM or SNA networking facilities. The support within
CICS that enables region-to-region communication is called interregion communication
(IRC). IRC is implemented in three ways:
For information about the design and implementation of interregion communication, and
about the benefits of cross-system MRO, see the Intercommunication concepts and
facilities topic in the CICS Intercommunication Guide.
To install support for MRO, complete the following steps (outlined in more detail in this
section):
2. Install the current versions of the DFHIRP and DFHCSVC modules in the LPA.
3. If you give the SVC a new number, and you have CICS Version 1 or Version 2
regions that use MRO, regenerate the CICS modules DFHCRC and DFHDRPA
for those CICS versions, specifying the SVC number.
7. Define and install the MRO connections appropriate to your CICS environment.
Provided you complete the above steps, you can use MRO to communicate with all levels
of CICS from CICS/ESA® Version 4.1 onwards.
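For step 7, the definitions for one end of an MRO link might look like the following sketch (all names are hypothetical; the partner region needs a matching CONNECTION and SESSIONS pair that names this region):

```
* Hypothetical DFHCSDUP input for one end of an MRO link
DEFINE CONNECTION(CICB) GROUP(MROCONS)
       NETNAME(CICSB) ACCESSMETHOD(IRC) INSERVICE(YES)
DEFINE SESSIONS(CICBSESS) GROUP(MROCONS)
       CONNECTION(CICB) PROTOCOL(LU61)
       SENDPFX(S) SENDCOUNT(5) RECEIVEPFX(R) RECEIVECOUNT(5)
```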
If you use MRO to communicate between different releases of CICS, the function
provided on any connection is that of the lower-level release.
In an MVS™ system, you can use intrahost ISC for communication between two or more
CICS® systems (although MRO is a more efficient alternative) or between, for example,
a CICS system and an IMS™ system.
From the CICS point of view, intrahost ISC is the same as ISC between systems in
different VTAM® domains.
An IBM® 3725 can be configured with a multichannel adapter that permits you to
connect two VTAM domains (for example, VTAM1 and VTAM2 in Figure 3) through a
single ACF/NCP/VS. This configuration may be useful for communication between:
This is the most typical configuration for intersystem communication. CICSD and
CICSE can be connected to CICSA, CICSB, and CICSC in this way. Each participating
system is appropriately configured for its particular location, using CICS on MVS or
Virtual Storage Extended (VSE), or IMS, and one of the ACF access methods such as
ACF/VTAM.
For a list of the CICS and non-CICS systems to which CICS Transaction Server for
z/OS® can connect via ISC, and for detailed information about using ISC to connect
CICS Transaction Server for z/OS to other CICS products, see the CICS Family:
Communicating from CICS on System/390® manual.
Intersystem communication supports intercommunication facilities including:
• Function shipping
• Asynchronous processing
• Transaction routing
For APPC links, the number of contention-winning sessions is specified when the link is
defined. The contention-winning sessions are normally bound by CICS, but CICS also
accepts bind requests from the remote system for these sessions.
Normally, the contention-losing sessions are bound by the remote system. However,
CICS can also bind contention-losing sessions if the remote system is incapable of
sending bind requests.
A single session to an APPC terminal is normally defined as the contention winner, and is
bound by CICS, but CICS can accept a negotiated bind in which the contention winner is
changed to the loser.
You must include the required management programs in your CICS regions, by
specifying the appropriate system initialization parameters.
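As an illustrative sketch only (this is not the complete list, and the APPLID, SYSIDNT, and GRPLIST values are hypothetical), the system initialization parameters for a region that uses MRO typically include ISC=YES, which includes the intercommunication programs, and IRCSTRT=YES, which opens interregion communication at initialization:

```
ISC=YES,
IRCSTRT=YES,
APPLID=CICSA,
SYSIDNT=CICA,
GRPLIST=MROLIST
```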