UNIT-4 Full
Main Memory: Background, Contiguous Memory Allocation, Paging, Structure of the Page
Table, Segmentation.
Virtual Memory: Background, Demand Paging, Page Replacement, Allocation of Frames,
Introduction to Thrashing.
MEMORY HARDWARE:
Memory consists of a large array of bytes, each with its own address. The CPU fetches instructions from memory according
to the value of the program counter. These instructions may cause additional loading from and storing to specific memory
addresses.
A typical instruction-execution cycle, for example, first fetches an instruction from memory. The instruction is then decoded
and may cause operands to be fetched from memory. After the instruction has been executed on the operands, results may
be stored back in memory.
Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access
directly.
Registers that are built into the CPU are generally accessible within one cycle of the CPU clock. A memory access may take
many cycles of the CPU clock. In such cases, the processor normally needs to stall, since it does not have the data required to
complete the instruction.
The remedy is to add fast memory between the CPU and main memory, typically on the CPU chip for fast access. This fast
memory is called a cache.
• The base register holds the smallest legal physical memory address,
called the starting address of the process.
• If the base register holds 300040 and the limit register is 120900, then the
program can legally access all addresses from 300040 through 420939 (inclusive); a sketch of this check follows the list.
• The base and limit registers can be loaded only by the operating
system, in kernel mode, into the CPU hardware.
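A minimal sketch of this legality check in C, using the register values from the example above; the helper name is_legal is made up for illustration:

```c
#include <stdio.h>

/* A physical address is legal iff base <= addr < base + limit.
 * The values 300040 and 120900 match the example in the text. */
int is_legal(unsigned addr, unsigned base, unsigned limit) {
    return addr >= base && addr < base + limit;
}

int main(void) {
    printf("%d\n", is_legal(300040, 300040, 120900));  /* 1: first legal */
    printf("%d\n", is_legal(420939, 300040, 120900));  /* 1: last legal  */
    printf("%d\n", is_legal(420940, 300040, 120900));  /* 0: trap to OS  */
}
```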
The binding of instructions and data to memory addresses can be done in three ways.
1) Compile time: If you know at compile time where the process will reside in memory, then
absolute code can be generated.
2) Load time: If it is not known at compile time where the process will reside in memory,
then the compiler must generate relocatable code.
3) Execution time: If the process can be moved during its execution from one memory
segment to another, then binding must be delayed until run time.
[Figure: main memory laid out as an OS block followed by fixed partitions of 5 KB (P0), 5 KB (P1), 3 KB (P2), and 16 KB (P3).]
MEMORY PROTECTION:
We can prevent a process from accessing the memory of another process. If we have a system with a relocation register together with a limit register, we accomplish our goal.
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses.
The MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch.
Because every address generated by the CPU is checked against these registers, we can protect both the operating system and the other users' programs.
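A minimal sketch of this relocation-and-limit check, treating the registers as plain integers loaded at context-switch time; the name mmu_translate and the trap behavior are illustrative, not a real MMU interface:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative register values, loaded by the dispatcher. */
static unsigned reloc = 300040;  /* smallest physical address */
static unsigned limit = 120900;  /* range of legal logical addresses */

/* Every CPU-generated logical address is checked against the limit
 * register, then relocated by adding the relocation register. */
unsigned mmu_translate(unsigned logical) {
    if (logical >= limit) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(1);
    }
    return logical + reloc;  /* dynamic relocation */
}

int main(void) {
    printf("%u\n", mmu_translate(0));       /* 300040 */
    printf("%u\n", mmu_translate(120899));  /* 420939: last legal address */
}
```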
FIXED PARTITION SCHEME:
One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions.
In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.
Example cases:
Case 1: P with 4 MB
Case 2: P with 2 MB
Case 3: P with 7 MB
Case 4: P with 18 MB
Case 5: P1 = 7 MB, P2 = 7 MB & P3 = 14 MB

VARIABLE PARTITION SCHEME:
When a process is allocated space, it is loaded into memory, and it can then compete for CPU time. When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue.
The system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.
The OS maintains a list of available block sizes (holes) and an input queue. The operating system can order the input queue according to a scheduling algorithm.
When a process arrives and needs memory, the system searches the set of holes for a hole that is large enough for this process.
Disadvantages: Despite the variable-size partition scheme's many benefits, there are a few
drawbacks as well:
• Because allocation is dynamic, a variable-size partition scheme is challenging to implement.
• It is challenging to keep track of the processes and of the available memory space.
The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes (all three are sketched after this list):
• First fit: Allocate the first hole that is big enough. Searching can stop as soon as a large-enough hole is found.
• Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
• Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
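A sketch of the three strategies over a list of hole sizes, assuming sizes in KB and single-character strategy codes (both assumptions are illustrative):

```c
#include <stdio.h>

/* Pick a hole for a request of `size` KB from holes[0..n-1].
 * strategy: 'f' = first fit, 'b' = best fit, 'w' = worst fit.
 * Returns the chosen hole's index, or -1 if none is large enough. */
int pick_hole(const int holes[], int n, int size, char strategy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < size) continue;          /* hole too small */
        if (strategy == 'f') return i;          /* first fit stops here */
        if (chosen == -1 ||
            (strategy == 'b' && holes[i] < holes[chosen]) || /* smallest fit */
            (strategy == 'w' && holes[i] > holes[chosen]))   /* largest fit */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int holes[] = {12, 6, 20};
    printf("first: %d\n", pick_hole(holes, 3, 5, 'f'));  /* 0 (12 KB) */
    printf("best:  %d\n", pick_hole(holes, 3, 5, 'b'));  /* 1 (6 KB)  */
    printf("worst: %d\n", pick_hole(holes, 3, 5, 'w'));  /* 2 (20 KB) */
}
```

For holes {12, 6, 20} and a 5 KB request, first fit picks the 12 KB hole, best fit the 6 KB hole, and worst fit the 20 KB hole.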
CONTIGUOUS VS. NON-CONTIGUOUS MEMORY ALLOCATION:
1. Contiguous memory allocation allocates consecutive blocks of memory to a file/process, whereas non-contiguous memory allocation allocates separate blocks of memory to a file/process.
2. In contiguous allocation the overhead is minimal, as few address translations are needed while executing a process; non-contiguous allocation has more overhead because there are more address translations.
3. Contiguous allocation is of two types: 1. Fixed (or static) partitioning, 2. Dynamic partitioning. Non-contiguous allocation is of five types: 1. Paging, 2. Multilevel paging, 3. Inverted paging, 4. Segmentation, 5. Segmented paging.
Advantages of paging and page tables:
1.Efficient use of memory: Virtual memory allows the operating system to allocate only the necessary
amount of physical memory needed by a process, which reduces memory waste and increases overall
system performance.
2.Protection: Page Tables allow the operating system to control access to memory and protect sensitive
data from unauthorized access. Each PTE can be configured with access permissions, such as read-only or
no access, to prevent accidental or malicious modification of memory.
3.Flexibility: Virtual memory allows multiple processes to share the same physical memory space, which
increases system flexibility and allows for better resource utilization.
4.Address translation: Page Tables provide the mechanism for translating virtual addresses used by a
process into physical addresses in memory, which allows for efficient use of memory and simplifies memory
management.
5.Hierarchical design: Some systems use hierarchical page tables, which provide a more efficient method
for managing large virtual address spaces. Hierarchical page tables divide the Page Table into smaller tables,
each pointing to a larger table, which allows for faster access to Page Table Entries and reduces the overall
size of the Page Table.
PAGING HARDWARE:
Every address generated by the CPU is divided into two parts:
a page number (p)
a page offset (d)
The following outlines the steps taken by the MMU to translate a
logical address generated by the CPU to a physical address:
1. Extract the page number p and use it as an index into the page table.
2. Extract the corresponding frame number f from the page table.
3. Replace the page number p in the logical address with the frame
number f.
The page size is defined by the hardware. The size of a page is a power
of 2, varying between 512 bytes and 1 GB per page.
If the size of the logical address space is 2^m, and the page size is 2^n bytes,
then the high-order m − n bits of a logical address designate the page
number, and the n low-order bits designate the page offset.
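A minimal sketch of the three MMU steps above, assuming an illustrative 16-bit logical address with 256-byte pages (so n = 8 and m − n = 8); the array page_table stands in for the hardware:

```c
#include <stdio.h>

#define OFFSET_BITS 8                 /* n = 8: 256-byte pages */
#define PAGE_SIZE   (1u << OFFSET_BITS)

/* A tiny page table: page_table[p] holds frame number f. */
static unsigned page_table[256];

unsigned translate(unsigned logical) {
    unsigned p = logical >> OFFSET_BITS;     /* high-order m-n bits */
    unsigned d = logical & (PAGE_SIZE - 1);  /* low-order n bits */
    unsigned f = page_table[p];              /* step 1 and 2: index, fetch f */
    return (f << OFFSET_BITS) | d;           /* step 3: replace p with f */
}

int main(void) {
    page_table[3] = 7;                           /* page 3 maps to frame 7 */
    unsigned logical = 3 * PAGE_SIZE + 20;       /* page 3, offset 20 */
    printf("physical = %u\n", translate(logical)); /* 7*256+20 = 1812 */
}
```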
• This is the two-level scheme: for a 32-bit logical address with a 4 KB page size (a 12-bit offset), the 20-bit page number is further divided into a 10-bit outer page number (p1) and a 10-bit inner page number (p2).
• Here p1 is an index into the outer page table and p2 is the displacement within the page of the inner page table; the split is sketched below.
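A short sketch of this 10/10/12 split for an arbitrary 32-bit address; the constants simply follow the field widths described above:

```c
#include <stdio.h>

/* 32-bit logical address, 4 KB pages: 10-bit p1, 10-bit p2, 12-bit d. */
int main(void) {
    unsigned addr = 0x12345678;
    unsigned p1 = (addr >> 22) & 0x3FF;  /* index into outer page table */
    unsigned p2 = (addr >> 12) & 0x3FF;  /* index into inner page table */
    unsigned d  = addr & 0xFFF;          /* offset within the page */
    printf("p1=%u p2=%u d=%u\n", p1, p2, d);  /* p1=72 p2=837 d=1656 */
}
```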
HASHED PAGE TABLES:
Each entry in the hash table contains a linked list of elements that hash to the same location. Each element has three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element in the linked list.
The virtual page number in the virtual address is hashed into the hash table. The virtual page number is compared with field 1 in the first element in the linked list.
If there is a match, the corresponding page frame (field 2) is used to form the desired physical address.
If there is no match, subsequent entries in the linked list are searched for a matching virtual page number.
A variation of this scheme that is useful for 64-bit address spaces uses clustered page tables, which are similar to hashed page tables except that each entry in the hash table refers to several pages (such as 16) rather than a single page. Therefore, a single page-table entry can store the mappings for multiple physical-page frames. Clustered page tables are particularly useful for sparse address spaces, where memory references are noncontiguous and scattered throughout the address space.
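A minimal sketch of the lookup, assuming a fixed bucket count and a modulo hash (both illustrative); a miss returns -1 where real hardware would raise a fault:

```c
#include <stdio.h>

/* One element in a bucket's chain; the field names mirror the
 * three fields described above (illustrative layout). */
struct entry {
    unsigned long vpn;     /* field 1: virtual page number */
    unsigned long frame;   /* field 2: mapped page frame */
    struct entry *next;    /* field 3: next element in the chain */
};

#define BUCKETS 1024
static struct entry *hash_table[BUCKETS];

/* Hash the VPN into the table, then walk the chain comparing
 * field 1; on a match, field 2 gives the frame. */
long lookup(unsigned long vpn) {
    for (struct entry *e = hash_table[vpn % BUCKETS]; e; e = e->next)
        if (e->vpn == vpn)
            return (long)e->frame;
    return -1;  /* no mapping: would fault in a real system */
}

int main(void) {
    struct entry e = { 42, 7, NULL };
    hash_table[42 % BUCKETS] = &e;
    printf("%ld\n", lookup(42));  /* 7 */
    printf("%ld\n", lookup(43));  /* -1: not mapped */
}
```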
INVERTED PAGE TABLES:
To solve the problem that each process may need a page table with a very large number of entries, we can use an inverted page table. An inverted page table has one
entry for each frame of memory. Each entry consists of the virtual address of the page
stored in that real memory location, with information about the process that owns the
page. Thus, only one page table is in the system, and it has only one entry for each frame
of physical memory.
Each logical address in the system consists of a triple:
< process-id, page-number, offset >
Each inverted page-table entry is a pair < process-id, page-number >.
When a memory reference occurs, part of the virtual address, consisting of < process-id, page-number >, is presented
to the memory subsystem. The inverted page table is then searched for a match. If a
match is found, say at entry i, then the physical address < i, offset > is generated (a search sketch follows).
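A sketch of that search, using a linear scan over a tiny table as a stand-in for the hash-assisted search real systems use; sizes and names are illustrative:

```c
#include <stdio.h>

/* One entry per physical frame: entry i describes the page stored
 * in frame i (illustrative layout). */
struct ipt_entry { int pid; unsigned page; };

#define NFRAMES 8
static struct ipt_entry ipt[NFRAMES];

/* Search the inverted table for <pid, page>; if entry i matches,
 * the physical address is <i, offset>. Returns -1 on a miss. */
long translate(int pid, unsigned page, unsigned offset, unsigned page_size) {
    for (int i = 0; i < NFRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == page)
            return (long)i * page_size + offset;
    return -1;  /* page fault */
}

int main(void) {
    ipt[5].pid = 1; ipt[5].page = 9;              /* frame 5: page 9 of P1 */
    printf("%ld\n", translate(1, 9, 100, 4096));  /* 5*4096+100 = 20580 */
}
```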
PAGE REPLACEMENT:
If no frames are free, two page transfers (one out and one in) are required. This situation
effectively doubles the page-fault service time and increases the effective access time
accordingly.
This overhead can be reduced by using a modify bit (or dirty bit).
When this scheme is used, each page or frame has a modify bit associated with it in the
hardware.
MODIFY BIT: The modify bit for a page is set by the hardware whenever any byte in the
page is written into, indicating that the page has been modified.
When we select a page for replacement, we examine its modify bit. If the bit is set, we
know that the page has been modified since it was read in from the disk. In this case, we
must write the page to the disk.
If the modify bit is not set, however, the page has not been modified since it was read into
memory. In this case, we need not write the memory page to the disk: it is already there.
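A small sketch of the decision the replacement code makes, with an illustrative frame structure; only a dirty victim incurs the extra write-back transfer:

```c
#include <stdbool.h>
#include <stdio.h>

/* Per-frame state: the modify (dirty) bit is set by hardware on
 * any write to the page (structure and names are illustrative). */
struct frame { unsigned page; bool modified; };

/* Evicting a victim frame: only a modified page must be written
 * back, so a clean victim costs one transfer instead of two. */
void evict(struct frame *victim) {
    if (victim->modified)
        printf("write page %u back to disk\n", victim->page);
    else
        printf("page %u unchanged: skip the write-back\n", victim->page);
}

int main(void) {
    struct frame a = { 3, true }, b = { 4, false };
    evict(&a);  /* dirty: two page transfers needed */
    evict(&b);  /* clean: one page transfer needed */
}
```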
FIFO PAGE REPLACEMENT:
Page 0 is the next reference; since 0 is already in memory, we have no fault for this reference.
The first reference to 3 results in replacement of page 0, since page 0 is now first in line.
Because of this replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0. The process continues until all
the pages are referenced.
Advantages: The FIFO page-replacement algorithm is easy to understand and program
Disadvantages: The performance is not always good.
It suffers from Belady's Anomaly.
BELADY'S ANOMALY: For some algorithms, the number of page faults increases as the number of allocated memory frames increases. This unexpected
result is called Belady's Anomaly; the sketch below reproduces it.
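A sketch that counts FIFO faults for the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, showing the anomaly: 4 frames incur more faults (10) than 3 frames (9). The function name and the fixed 16-frame cap are illustrative:

```c
#include <stdio.h>

/* Count page faults for a reference string under FIFO replacement. */
int fifo_faults(const int refs[], int n, int nframes) {
    int frames[16], next = 0, faults = 0;
    for (int i = 0; i < nframes; i++) frames[i] = -1;  /* empty frames */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {                          /* fault: replace oldest page */
            frames[next] = refs[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};  /* classic Belady string */
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
}
```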
LRU Page Replacement:
Advantages
•Efficient.
•Doesn't suffer from Belady’s Anomaly.
Disadvantages
•Complex Implementation.
•Expensive.
•Requires hardware support.
Optimal Page Replacement:
Advantages
•Easy to Implement.
•Simple data structures are used.
•Highly efficient.
Disadvantages
•Requires future knowledge of the program.
•Time-consuming.
Causes of Thrashing:
The operating system monitors CPU utilization. If CPU utilization is too low, we increase the degree of multiprogramming by introducing a
new process to the system.
Now suppose that a process enters a new phase in its execution and needs more frames. It starts faulting and taking frames away from
other processes.
A global page-replacement algorithm is used; it replaces pages without regard to the process to which they belong.
These processes need those pages, however, and so they also fault, taking frames from other processes. These faulting processes must
use the paging device to swap pages in and out. As processes wait for the paging device, CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming as a result. The new process tries to
get started by taking frames from running processes, causing more page faults and a longer queue for the paging device.
As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree of multiprogramming even more.
Thrashing has occurred, and system throughput plunges.