Chapter 3a - MVS Internals

Contents of this chapter:
Section 3.1 - Introduction
Section 3.2 - Storage Management
  3.2.1 Virtual Storage and Real Storage
  3.2.2 Paging
  3.2.3 Central Storage and Expanded Storage
  3.2.4 Attributes of Virtual Storage Areas
  3.2.5 Areas of Virtual Storage
  3.2.6 Swapping
Section 3.1 - Introduction

The rest of this chapter is divided into sections, each one corresponding to one functional area of MVS.
Section 3.2 - Storage Management

3.2.1 Virtual Storage and Real Storage

Whenever a program makes a reference to a virtual storage address, either to fetch data from it or to store data into it, that address must be translated into a real storage address before the processor can find and access the area of storage in question. The DAT hardware does this automatically, so no software overhead is required for virtual address translation. To make this possible, MVS must maintain tables in storage which relate each address space's virtual storage addresses to real storage addresses, and which are accessible to the DAT routines.

The first of these is the "segment table". MVS maintains a separate segment table for each address space, at a fixed location in real storage, and loads the address of the current address space's segment table into the Segment Table Origin Register (STOR), one of the control registers which are inaccessible to application programmers but used for system functions. The segment table contains an entry for each 1 Megabyte segment of the address space's virtual storage. If the address space is not using that megabyte at all, there is an indicator to this effect, and any attempt to resolve a virtual address in that segment will result in a protection exception (this will appear to the user as an 0C4 abend). If the segment is in use, however, the segment table entry will contain the address of a "page table" for that segment.

The page table in turn contains an entry for each 4K page in the segment. If the page is unused, the corresponding page table entry will contain an indicator to this effect; if it has been paged out, a different indicator will be set; and if it is present in real storage, the entry will contain the real storage address of the page. The DAT process is illustrated in Figure 3.3 below.
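To make the table structure concrete, here is a much-simplified model in C. The field names and layouts are illustrative assumptions only - the real S/370-XA table entries are hardware-defined bit formats, not C structures.

    #include <stdint.h>

    typedef struct {
        uint32_t frame_addr;    /* real address of the 4K frame, if present */
        int      invalid;       /* page never allocated                     */
        int      paged_out;     /* page has been moved to auxiliary storage */
    } page_table_entry;

    typedef struct {
        page_table_entry *page_table;  /* 256 entries, one per 4K page */
        int               invalid;     /* segment not in use           */
    } segment_table_entry;

    /* One segment table per address space; the STOR control register
       points at the table for the current address space.              */
    typedef struct {
        segment_table_entry segments[2048];   /* 2048 x 1MB = 2GB */
    } segment_table;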
The sizes of a page and a segment are significant because they allow the DAT hardware to do some very simple and therefore very fast table look-ups. If we think of a 32-bit address as a string of eight hex digits, then the first three of these represent the segment number of the area being addressed, and can be used as an argument to look up the address of the page table in the segment table; the next two digits then represent the page number within the segment, and can be used as the argument to directly look up the real address of the required page from the page table. The last three digits can then be used simply as the offset into this real page frame of the required address.

To speed up the DAT process even further, the results of recent address translations are stored in the Translation Lookaside Buffer (TLB), which the DAT routines check before attempting the full translation process. The high degree of "locality of reference" in most programs means that a high proportion of address translations can be resolved from the TLB. The efficiency of these design features means that the DAT hardware can locate the real storage corresponding to any virtual address extremely quickly, thus allowing MVS to implement virtual storage with very little address translation overhead.
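As a worked illustration of this hex-digit split, the following C fragment decomposes an arbitrary example address into the three lookup arguments. The shift and mask values follow directly from the 1 Megabyte segment and 4K page sizes.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t vaddr = 0x012AB678;              /* arbitrary example address */

        uint32_t segment = vaddr >> 20;           /* first 3 hex digits: 0x012 */
        uint32_t page    = (vaddr >> 12) & 0xFF;  /* next 2 hex digits:  0xAB  */
        uint32_t offset  = vaddr & 0xFFF;         /* last 3 hex digits:  0x678 */

        printf("segment %03X, page %02X, offset %03X\n",
               (unsigned)segment, (unsigned)page, (unsigned)offset);
        return 0;
    }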
Although Dynamic Address Translation is a hardware process, MVS has to maintain the environment which DAT relies on. Thus, MVS must:

* create and keep up to date the segment table for each address space, which contains pointers to the page tables for that address space
* ensure that the address of the new address space's segment table is stored in the STOR whenever the current address space changes
* ensure that the segment table itself is in a fixed area of storage (i.e. it cannot be paged out, or address translation would be impossible)
* create and keep up to date the page tables.
3.2.2 Paging
In order to provide vastly more virtual storage than the amount of real storage that exists on the machine being used, MVS uses "paging". When real storage fills up, and an address space requires another page of virtual storage, the paging process swings into action. Simplifying somewhat, the Real Storage Manager (RSM) component of MVS identifies the 4K pages of storage which have not been referenced for the longest time, invokes the Auxiliary Storage Manager (ASM) component to copy these into 4K "slots" in paging datasets on DASD (a "page out" operation), then "steals" one of the pages which have been made available to satisfy the new requirement. If a program subsequently attempts to reference the stolen page, a "page fault" occurs, and the ASM is invoked to "page in" the required page, stealing another page frame to provide the necessary real storage.

In effect, then, the inactive pages of each address space are moved out of real storage onto auxiliary storage, and only the active pages (i.e. relatively recently used ones) are kept in real storage. The real storage which each address space retains is known as its "working set". Typically working sets are much smaller than the amount of virtual storage which the address space has initialised with GETMAIN instructions, as a large proportion of each address space's storage is used for routines and data which are very rarely referenced. This means that the amount of paging necessary to provide large address spaces to many users in a relatively small amount of real storage can be quite low, as only each user's working set need be kept in real storage, even when many users are active concurrently.

Let us look in a little more detail at the process by which RSM manages real storage. Every "interval" (an interval is around a second when the pressure on real storage is high, but it is lengthened - up to around 20 seconds - when it is not), RSM checks the status of each frame of real storage. Each frame has a few bytes of control information associated with it (this is not addressable storage in the normal sense), including the "hardware reference bit". Whenever a page is referenced, the hardware sets this bit on. If the bit is on when RSM checks it, RSM resets the value of the Unreferenced Interval Count (UIC) for this page to zero and turns off the reference bit; if it is off, RSM increments the UIC for the page by one. These UICs (held in a table called the Page Frame Table) therefore indicate how many intervals have passed since each page was last referenced.

RSM also maintains an Available Frame Queue (AFQ), which is a list of real storage frames available for stealing. When the number of frames on the AFQ falls below a predetermined limit, RSM scans the PFT for the pages with the highest UICs. It then attempts to add these to the AFQ. If the page has been paged out before and has not been updated since (there is another hardware bit associated with every real frame which is set whenever a page is updated), it can immediately be placed on the AFQ. If it has not been paged out before, or has been updated since it was last paged out, then RSM will invoke the ASM to page it out again, and when this is complete the frame will be placed on the AFQ. This process is intended to ensure that when a page frame needs to be stolen, there will already be a copy of it on auxiliary storage, so the page-in can be started immediately, without waiting for the frame to be paged out first.
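The interval scan lends itself to a short sketch in C. The frame-table layout and names here are invented for illustration - the real Page Frame Table is an MVS control block, and the reference bit is hardware state, not an ordinary field.

    #include <stddef.h>

    typedef struct {
        int      referenced;   /* models the hardware reference bit */
        unsigned uic;          /* unreferenced interval count       */
    } pft_entry;

    /* One pass over the frame table, as described above: reset the
       UIC of recently referenced frames, age all the others.        */
    void rsm_interval_scan(pft_entry *pft, size_t nframes)
    {
        for (size_t i = 0; i < nframes; i++) {
            if (pft[i].referenced) {
                pft[i].uic = 0;          /* recently used: start counting again */
                pft[i].referenced = 0;   /* turn the reference bit off          */
            } else {
                pft[i].uic++;            /* one more interval unreferenced      */
            }
        }
    }

A steal pass would then select the frames with the highest UICs as candidates for the available frame queue.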
A frame may need to be stolen to provide new pages to an address space (in response to a GETMAIN request) or to provide space for a page-in. When a frame is stolen, the corresponding entry in the page table is updated, removing the address of the real storage frame the page was using and inserting an indicator that the page is on auxiliary storage. If DAT subsequently attempts to resolve a reference to this page of virtual storage, it will encounter this indicator and raise a "page fault" exception. ASM will then be invoked to page in the required page.

Between them, the RSM and ASM components are therefore able to use the paging mechanism to share the available real storage between conflicting requirements for virtual storage, in such a way as to minimise the performance degradation which results when a DASD I/O is required to resolve a page fault. If paging becomes excessive, however, this degradation can become unacceptably high, and the system then requires tuning. If tuning cannot resolve the problem, it is necessary to increase the amount of real storage available to the system.

The pressure on real storage is increased by the ability of MVS to make some pages "immune" to paging using a mechanism known as "page-fixing". This mechanism marks a page so that RSM will never select it for stealing, with the result that it cannot be paged out. While this runs counter to the basic philosophy of virtual storage, it is essential in some circumstances. When an I/O operation is to be performed, for example, the channel subsystem will read/write data from/to a real storage location, not a virtual storage location (see the section on I/O Management below). It is therefore necessary for MVS to prevent the real storage location required by the I/O operation from being stolen by RSM before the I/O operation has completed. It does this by fixing the page(s) concerned for the duration of the I/O operation, then un-fixing them after the operation has completed.
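The page-fix protocol can be sketched as follows. This is an illustrative C skeleton: the stub routines merely stand in for the real MVS services (page-fixing is requested through assembler-level system services, and I/O through the I/O supervisor, not through C calls like these).

    #include <stdio.h>
    #include <stddef.h>

    /* Stubs standing in for MVS services - names invented for illustration */
    static void fix_pages(void *buf, size_t len)   { (void)buf; printf("fix %zu bytes\n", len); }
    static void unfix_pages(void *buf, size_t len) { (void)buf; printf("unfix %zu bytes\n", len); }
    static void start_io(void *buf, size_t len)    { (void)buf; (void)len; printf("I/O started\n"); }
    static void wait_for_io(void)                  { printf("I/O complete\n"); }

    /* The protocol from the text: the buffer's frames must stay in
       real storage for the life of the I/O, because the channel
       subsystem addresses real, not virtual, storage.                */
    void read_with_fixed_pages(void *buffer, size_t len)
    {
        fix_pages(buffer, len);    /* RSM must not steal these frames */
        start_io(buffer, len);
        wait_for_io();
        unfix_pages(buffer, len);  /* frames become stealable again   */
    }

    int main(void)
    {
        char buf[4096];
        read_with_fixed_pages(buf, sizeof buf);
        return 0;
    }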
3.2.4 Attributes of Virtual Storage Areas

Within each address space, different areas of virtual storage have different attributes and uses. The main attributes which vary between different areas are:

* common versus private
* above or below the 16 Megabyte line
* storage protection

Common areas are areas in which any given virtual address is translated to the same real address in every address space. This means that the virtual storage at these addresses is shared between all address spaces, and any unit of work can address the data in these areas. Apart from the PSA, the common areas all lie in a contiguous area of virtual storage which starts and ends on 1 Megabyte boundaries. This is because 1 Megabyte is the size of a segment - i.e. the amount of storage described by a single page table - and common storage is implemented by sharing the same page tables between all address spaces. In other words, for the common segments, the entry in each address space's segment table points to the same page table. Private areas, on the other hand, are areas in which any given virtual address is translated to a different real address in every address space, so any virtual storage at these addresses is unique to the address space concerned and can normally only be addressed by tasks running in that address space. Each address space has its own page tables for its own private areas.

The 16 Megabyte line is only significant because of the continuing requirement for compatibility with programs written to run under MVS Version 1, referred to below as MVS/370. Under MVS/370, a 24-bit addressing scheme was used, and 16 Megabytes was the maximum virtual address that could be referenced with a 24-bit address. Programs written under MVS/370 could therefore only address virtual storage up to this limit. In order to allow such programs to run, MVS/XA and MVS/ESA still support a 24-bit addressing mode, although MVS is now also capable of supporting 31-bit addressing (allowing virtual storage up to 2 Gigabytes to be addressed). As many programs still run in 24-bit mode, all storage areas which may need to be referenced by programs running in this mode must continue to be kept below the 16 Megabyte line. Most of the storage areas we will discuss are now split into two parts, one above and one below the line, so that they can satisfy this requirement while keeping as many areas as possible above the line. There is a strong incentive to put data above the line, as the major reason for the introduction of MVS/XA was to relieve the shortage of virtual storage addresses below 16 Megabytes, and this shortage can still pose problems for 24-bit programs.

Storage protection restricts the ability of programs to fetch or update storage. Each frame of real storage has a "key" associated with it, and access to the page is restricted to users with a matching "storage protect key" in their PSW. If the storage is "fetch-protected", users without a matching key cannot even read it; if it is not, users without a matching key can read but not update it. Typically common areas have a key which prevents ordinary applications from updating them, while private areas are updatable by anyone (but only addressable by work executing within the address space concerned!).
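The key-matching rules can be modelled in a few lines of C. This sketch assumes the general S/370 conventions (a supervisor key of zero bypasses protection, a store always requires a matching key, and a fetch fails only when the frame is fetch-protected); the data layout is invented for illustration.

    #include <stdbool.h>

    typedef struct {
        unsigned key;          /* storage key of the real frame */
        bool fetch_protected;  /* fetch-protection bit          */
    } frame_key;

    /* is_store: true for an update, false for a read */
    bool access_allowed(unsigned psw_key, frame_key frame, bool is_store)
    {
        if (psw_key == 0 || psw_key == frame.key)
            return true;                 /* key zero or matching key         */
        if (is_store)
            return false;                /* non-matching key may never store */
        return !frame.fetch_protected;   /* may read unless fetch-protected  */
    }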
3.2.5 Areas of Virtual Storage

The main areas of virtual storage are:

* PSA - Prefixed Save Area - from 0 to 4K. In a uniprocessor this is fixed in the first 4K of both virtual and real storage, but in a multiprocessor there is a separate PSA for each processor, and a control register called the "prefix register" contains the address of the processor's PSA. The PSA contains certain areas which are critical to MVS and the hardware, such as the new PSWs to be used for processing interrupts, and the pointer to the CVT control block, from which most of the MVS control block structure can be traced.

* Private area - the bottom limit of this is at 4K; the top limit is set at IPL time, and is determined by deducting the size of the common areas below the 16 Megabyte line from 16 Megabytes and rounding down to a megabyte boundary (a small worked example follows this list). This is the area available to user programs executing in 24-bit mode, and also includes some system areas which relate specifically to this address space, such as the SWA (Scheduler Work Area, containing control blocks relating to the executing job), and the LSQA (Local System Queue Area, containing control blocks for this address space, including the segment table and private area page tables).

* CSA - Common Service Area - contains control blocks and data used primarily for communicating between address spaces. Tasks such as VTAM which must pass data between address spaces often use large amounts of buffer space in CSA. The size of the CSA is specified at IPL time.

* LPA - Link Pack Area - contains program modules to be shared between address spaces, including many system routines such as SVCs and access methods. Programs in the LPA cannot normally be modified between IPLs. The size of the LPA depends on the number and size of the modules loaded into it at IPL time. It is divided into Fixed, Pageable, and Modifiable areas (FLPA, PLPA, and MLPA), of which the PLPA is usually by far the largest. The LPA is discussed in more detail in the Program Management section below.

* SQA - System Queue Area - contains control blocks which need to be shared between address spaces, e.g. the page tables for common areas. The size is fixed at IPL time, but if the system runs out of SQA it will use CSA instead.

* Nucleus - contains the core of the MVS control program itself, including certain tables such as the page frame table (PFT) and the UCBs (unit control blocks) for I/O devices. Its size depends on the configuration of your system, but does not vary once it has been loaded at IPL time.

* Extended areas - the extended nucleus, SQA, LPA, CSA, and private area perform the same functions as the corresponding areas below the line, and their sizes are determined in the same way (note that programs in the LPA are loaded above or below the line depending on their "residency mode", which is determined at assembly or link-edit time). The only difference is that they can only be addressed by programs running in 31-bit addressing mode.
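Here is the small worked example of the private-area limit calculation promised above, in C. The common-area size below the line is an arbitrary assumption, used only to show the arithmetic.

    #include <stdio.h>

    #define MB (1024u * 1024u)

    int main(void)
    {
        unsigned line   = 16 * MB;              /* the 16 Megabyte line         */
        unsigned common = 5 * MB + 300 * 1024;  /* assumed common size below it */

        /* deduct the common areas and round down to a megabyte boundary */
        unsigned private_top = ((line - common) / MB) * MB;

        printf("private area runs from 4K to %u megabytes\n", private_top / MB);
        return 0;   /* with these figures: 16MB - ~5.3MB, rounded down to 10MB */
    }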
3.2.6 Swapping
Swapping is similar to paging (which is sometimes called "demand paging" to distinguish it from swapping) in that its objective is to reduce the usage of real storage by moving less-used areas of virtual storage out to disk. It is also used to reduce the size of the queues used by the dispatcher, by removing "swapped-out" tasks from them. Swapping, however, is unlike demand paging in that it deals with entire address spaces instead of individual pages. When a decision is made to physically swap an address space, all of the virtual storage belonging to that address space is paged out (in large blocks known as "swap sets") and the frames it was using are added to the available frame queue.

Swapping is controlled by the component of MVS called the System Resources Manager (SRM); a complex set of algorithms within SRM determines when address spaces should be swapped out and in, and selects the address spaces to be swapped. Systems programmers (or members of your performance tuning team) define parameters which are used by SRM to determine the swap priority of each address space and the conditions in which swapping should occur. The details of these parameters are extremely complex and beyond the scope of this book. The general principles, however, are simple - when the machine is so busy that it cannot provide sufficient real storage to meet the requirements of the most important executing tasks, lower priority tasks should be swapped out, and the machine's resources should be distributed between the competing workloads in accordance with their relative importance. This is what the SRM attempts to achieve.

There is also a form of swapping known as logical swapping, which removes inactive tasks from the dispatching queues and thus speeds up the dispatching process. In this case, the storage belonging to the address space is not swapped out immediately, but if there is pressure on real storage, the logical swap may subsequently be converted to a physical swap. It is normal for TSO users to be logically swapped at the end of each transaction, to minimise the system overhead they cause during the relatively long "think time" until they next press the "Enter" key.
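The relationship between logical and physical swapping can be summarised in a short sketch. Everything here - the type, the stub routines, the decision flag - is invented for illustration; the real decisions are taken inside SRM using the algorithms described above.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { const char *name; } address_space;

    /* Stubs standing in for the actions described in the text */
    static void remove_from_dispatch_queues(address_space *as)
    { printf("%s: removed from the dispatching queues\n", as->name); }
    static void page_out_in_swap_sets(address_space *as)
    { printf("%s: paged out in swap sets, frames go to the AFQ\n", as->name); }

    /* A logical swap happens immediately; it is converted to a
       physical swap only if real storage comes under pressure.  */
    void swap_out(address_space *as, bool real_storage_shortage)
    {
        remove_from_dispatch_queues(as);
        if (real_storage_shortage)
            page_out_in_swap_sets(as);
    }

    int main(void)
    {
        address_space tso_user = { "TSOUSER1" };
        swap_out(&tso_user, false);  /* e.g. a TSO user entering think time */
        return 0;
    }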