
MODULE 4:
PAGE REPLACEMENT ALGORITHMS AND DESIGN ISSUES
Page Replacement algorithms:
• When a page fault occurs, the operating system has to choose a page to evict (remove from memory) to make room for the
incoming page.
• If the page to be removed has been modified while in memory, it must be rewritten to the disk to bring the disk copy up to date.
• Page removal also takes place in other parts of the operating system, such as the cache and the TLB.

Basic page replacement:


1. Find the location of the desired page on the disk.
2. Find a free frame
a. If there is a free frame, use it.
b. If there is no free frame, use a page replacement algorithm to select a victim frame/page.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
• If we have multiple processes in memory, we must decide how many frames to allocate to each process.
• Further, when page replacement is required, we must select which frames are to be replaced.
• There are many page replacement algorithms; in general, we want the one with the lowest page-fault rate.
• We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
• This string of memory references is called a reference string.
First-In First Out page replacement algorithm:
• When a page must be replaced, the oldest page is chosen for removal.
• We can create a FIFO queue to hold all pages in memory.
Example:
Consider a reference string: 1, 3, 0, 3, 5, 6, 3, 7, 8, 5
Frame size : 3
What is the total number of page faults and page hits?

Consider reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1


Find the number of page faults and page hits with frame size 3 and frame size 4.

Consider reference string: 1,2,3,4,1,2,5,1,2,3,4,5


Find the number of page faults and page hits with frame size 3 and frame size 4.

Belady's anomaly: the unexpected result in which the number of page faults increases as the number of frames increases.
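The counting in these exercises can be checked with a short simulation. The following is a minimal Python sketch (not part of the original slides; the function name is illustrative) of FIFO replacement; run on the reference string 1,2,3,4,1,2,5,1,2,3,4,5 it also reproduces Belady's anomaly, since 4 frames give more faults than 3.

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Simulate FIFO page replacement and return (page_faults, page_hits)."""
    frames = set()      # pages currently in memory
    queue = deque()     # pages in order of arrival (oldest at the left)
    faults = hits = 0
    for page in reference_string:
        if page in frames:
            hits += 1
        else:
            faults += 1
            if len(frames) == num_frames:      # no free frame: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults, hits

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(ref, 3))   # (9, 3)
print(fifo_page_faults(ref, 4))   # (10, 2)  -> Belady's anomaly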
Optimal page replacement algorithm:
• The best possible page replacement algorithm is easy to describe but impossible (or at least impractical) to implement, since it requires knowing the future reference string.
• The objective is very simple: replace the page that will not be used for the longest period of time.
• FIFO looks backward, while OPT looks forward.

Example:
Consider reference string: 1,2,3,4,1,2,5,1,2,3,4,5
Frame size=3.

Consider reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1


Find the number of page faults and page hits with frame size 3 and frame size 4.
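As a cross-check on these exercises, here is a minimal Python sketch (illustrative, not from the slides) of the Optimal algorithm: on each fault it evicts the resident page whose next use lies farthest in the future, or that is never used again.

def optimal_page_faults(reference_string, num_frames):
    """Simulate the Optimal (OPT) algorithm and return (page_faults, page_hits)."""
    frames = []
    faults = hits = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest away (or never occurs again).
        def next_use(p):
            try:
                return reference_string.index(p, i + 1)
            except ValueError:
                return float('inf')
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults, hits

print(optimal_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # (7, 5)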
LRU page replacement algorithm: (Least Recently Used)
• Objective: Replace the page that has not been used for the longest period of time.
• The challenge here is to determine an ordering of the frames by their time of last use.
Example:
Consider reference string: 1,2,3,4,1,2,5,1,2,3,4,5. Find the number of page faults and page hits with frame size 3.
Consider reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Find the number of page faults and page hits with frame size 3.
Two implementations are feasible:
1. Counters
• We associate a time-of-use field with each page-table entry and add a logical clock or counter to the CPU.
• The clock/counter is incremented on every memory reference.
• In this way we always have the time of the last reference to each page; we replace the page with the smallest value.
2. Stack:
• Another approach to implementing LRU replacement is to keep a stack of page numbers.
• Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most recently used page is always at the top of the stack and the least recently used page is always at the bottom.
• Because entries must be removed from the middle of the stack, it is best to implement this approach using a doubly linked list with head and tail pointers.
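A minimal Python sketch of the stack approach (illustrative only): an ordered dictionary plays the role of the doubly linked list, with the most recently used page kept at one end.

from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Simulate LRU replacement (stack approach); return (page_faults, page_hits)."""
    stack = OrderedDict()   # resident pages, ordered from least to most recently used
    faults = hits = 0
    for page in reference_string:
        if page in stack:
            hits += 1
            stack.move_to_end(page)          # referenced: move to the top of the stack
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)    # evict the least recently used page (bottom)
            stack[page] = True
    return faults, hits

print(lru_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # (10, 2)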
Not Recently Used page replacement algorithm:
• In order to allow the operating system to collect page usage statistics, most computers with virtual memory have
two status bits, R and M, associated with each page.
• R is set whenever the page is referenced (read or write)
• M is set when the page is written to ( modified).
• It is important to realize that these bits must be updated on every memory reference.
• Once a bit has been set to 1, it stays 1 until the operating system resets it
• When a page fault occurs, the operating system inspects all the pages and divides them into four categories based
on the current values of their R and M bits:
1. Class 0: not referenced, not modified. (0,0) → not at all accessed.
2. Class 1: not referenced, modified. (0,1) → modified but R bit is cleared by clock interrupt.
3. Class 2: referenced, not modified. (1,0) → read
4. Class 3: referenced, modified (1,1) → write
• Although class 1 pages seem, at first glance, impossible, they occur when a class 3 page has its R bit cleared by a
clock interrupt. (Clearing R but not M leads to a class 1 page.)
• The NRU (Not Recently Used) algorithm removes a page at random from the lowest-numbered nonempty class.
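A minimal Python sketch of the victim-selection step (illustrative; the function name and data layout are assumptions): each resident page is placed into class 2R + M, and a page is removed at random from the lowest-numbered nonempty class.

import random

def nru_select_victim(resident_pages):
    """resident_pages: iterable of (page_number, R_bit, M_bit).
    Return a page chosen at random from the lowest-numbered nonempty class."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for number, r, m in resident_pages:
        classes[2 * r + m].append(number)    # class 0..3 as defined above
    for c in (0, 1, 2, 3):
        if classes[c]:
            return random.choice(classes[c])
    return None                              # no resident pages

# Page 7 is class 0 (not referenced, not modified), so it is the preferred victim.
print(nru_select_victim([(3, 1, 1), (5, 0, 1), (7, 0, 0), (9, 1, 0)]))   # 7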
The Second Chance Page replacement algorithm:
• A simple modification to FIFO that avoids the problem of throwing out a heavily used page is to inspect the R bit
of the oldest page.
• If it is 0, the page is both old and unused, so it is replaced immediately
• If the R bit is 1, the bit is cleared, the page is put onto the end of the list of pages, and its load time is updated as
though it had just arrived in memory.
• In other words, whenever a page that would be replaced by plain FIFO has its reference bit set to 1, it gets a second chance.
Example:
Reference string : 0 4 1 4 2 4 3 4 2 4 0 4 1
Frame size is 3.

Reference string: 1 2 3 2 1 4 2 3 2 5 1
Frame size is 3.
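A minimal Python sketch of Second Chance (illustrative; here the R bit of a newly loaded page is assumed to start at 0, so exact fault counts may differ under other conventions).

from collections import deque

def second_chance_faults(reference_string, num_frames):
    """Simulate Second Chance (FIFO with a reference bit); return (page_faults, page_hits)."""
    queue = deque()          # entries are [page, R_bit], oldest at the left
    faults = hits = 0
    for page in reference_string:
        entry = next((e for e in queue if e[0] == page), None)
        if entry is not None:
            hits += 1
            entry[1] = 1                     # page referenced: set its R bit
            continue
        faults += 1
        if len(queue) == num_frames:
            while queue[0][1] == 1:          # oldest page has R = 1: give it a second chance
                old = queue.popleft()
                old[1] = 0                   # clear R and treat it as newly arrived
                queue.append(old)
            queue.popleft()                  # evict the oldest page with R = 0
        queue.append([page, 0])              # load the new page (R assumed 0 here)
    return faults, hits

print(second_chance_faults([0, 4, 1, 4, 2, 4, 3, 4, 2, 4, 0, 4, 1], 3))   # (7, 6) under this convention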
The Clock Page replacement Algorithm:
• Another approach is to keep all the page frames on a circular list in the form of a clock.
• The hand points to the oldest page.
• When a page fault occurs, the page being pointed to by the hand is inspected. If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position.
• If R is 1, it is cleared, and the hand is advanced to the next page. This process is repeated until a page is found with R = 0.
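A minimal Python sketch of the Clock algorithm (illustrative; the R bit of a newly loaded page is assumed to be set), which behaves like Second Chance but avoids moving pages around a list.

def clock_page_faults(reference_string, num_frames):
    """Simulate the Clock algorithm and return (page_faults, page_hits)."""
    frames = [None] * num_frames     # circular list of [page, R_bit] entries (or None)
    hand = 0                         # points to the oldest page
    faults = hits = 0
    for page in reference_string:
        slot = next((i for i, f in enumerate(frames) if f and f[0] == page), None)
        if slot is not None:
            hits += 1
            frames[slot][1] = 1              # set R on every reference
            continue
        faults += 1
        # Advance the hand until an empty frame or a frame with R = 0 is found.
        while frames[hand] is not None and frames[hand][1] == 1:
            frames[hand][1] = 0              # clear R and move on
            hand = (hand + 1) % num_frames
        frames[hand] = [page, 1]             # place the new page here (R assumed set on load)
        hand = (hand + 1) % num_frames
    return faults, hits

print(clock_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))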
Design Issues for paging systems:
1. Global replacement vs. Local replacement:
Global replacement:
• A process selects a replacement frame from the set of all frames; one process can take a frame from another.
• Increased throughput.
• The number of frames allotted to a process may increase.
• Allows a high-priority process to select frames of a low-priority process for replacement.
If a global algorithm is used,
• it may also be possible to start each process up with some frames, e.g. a number of pages proportional to the process's size;
• the allocation has to be updated dynamically as the processes run.

Local replacement:
• Each process selects from only its own set of allocated frames.
• Number of frames allotted to a process doesn’t change.
2. Load Control :
• A good way to reduce the number of processes competing for memory is to swap some of them to the disk and free up all the
pages they are holding.
• For example, one process can be swapped to disk and its page frames divided up among other processes that are thrashing
• If the thrashing stops, the system can run for a while this way. If it does not stop, another process has to be swapped out, and
so on, until the thrashing stops.
• Swapping processes out to relieve the load on memory is reminiscent of two level scheduling, in which some processes are
put on disk and a short-term scheduler is used to schedule the remaining processes.
1. Global Replacement vs. Local Replacement
These are two strategies for selecting which page frame to replace when a page fault occurs.

Global Replacement:
Frame replacement is done from any frame in the system, regardless of which process owns it.
Advantages:
• Higher throughput (because more frames are available for replacement).
• Flexible frame allocation: if a process needs more frames, it can take frames from other processes.
• High-priority processes can take frames from low-priority processes.
Example: Process A can take a frame from Process B if it needs more memory.
Disadvantage:
• A process can lose its frames to other processes, causing inconsistent performance.

Local Replacement:
A process can only replace its own frames.
Advantages:
• Fair and predictable: each process gets a fixed number of frames.
• Isolation: one process won't affect another process's frames.
Example: Process A can only use its own frames for page replacement.
Disadvantage:
• Less efficient than global replacement if a process needs more memory but doesn't have enough frames.

Why Load Control is Needed:
Load control helps manage memory usage when the system is overloaded. When too many processes compete for memory, thrashing happens. (Thrashing = processes spend more time swapping pages in/out of memory than executing.)
Solution: Swapping to Disk (Load Control Technique)
• Swap out some processes to disk to free up memory.
• Allocate the freed frames to other processes that are thrashing.
Example: If Process A is thrashing, swap out Process B to disk and give its memory frames to Process A. If thrashing continues, swap out another process until the thrashing stops.
Page size:
• Determining the best page size requires balancing several competing factors.
• Some factors give good results with a larger page size, others with a smaller one.
• Page size is the amount of memory each page occupies in a paging system. Choosing the right page size balances memory usage and performance.
• For example, consider a virtual memory size of 4 MB:

Larger page size (assume page size = 1024 bytes):
• For 4 MB, 4096 pages are required.
• The page table is small (4096 entries).
• The last page of the process is usually not completely filled, so internal wastage (fragmentation) is more.
• With fewer total pages, page faults are also fewer.
• I/O time is less.

Smaller page size (assume page size = 512 bytes):
• For 4 MB, 8192 pages are required.
• The page table is large (8192 entries).
• Internal wastage is less.
• With more total pages, page faults are also more.
• I/O time is more.

In summary:
• Larger page size: fewer pages are needed; smaller page table; more internal fragmentation (wasted space inside a page); fewer page faults (better performance); less I/O time (faster).
• Smaller page size: more pages are needed; larger page table; less internal fragmentation; more page faults (slower performance); more I/O time (slower).
Derivation of optimum page size:
Let s = process size (in bytes), p = page size (in bytes), and e = size of each page table entry (in bytes).

Approximate number of pages needed = s/p  → eq. 1
These pages occupy se/p bytes of page table space.
Wasted memory due to internal fragmentation = p/2  → eq. 2 (on average, half of the last page is wasted)
The total overhead due to the page table and the internal fragmentation loss is (se/p) + (p/2)  → eq. 3

In eq. 3, the first term (page table size) is large when the page size is small, and the second term (internal fragmentation) is large when the page size is large; the optimum page size lies somewhere in between.
Taking the first derivative of eq. 3 with respect to p and equating it to zero gives
−se/p² + 1/2 = 0
so the optimum page size is
p = √(2se)

Example:
For s = 1 MB (1048576 bytes) and e = 8 bytes, what is the optimum page size?
p = √(2 × 1048576 × 8) = √16777216 = 4096 bytes
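The same arithmetic can be verified with a short Python computation (a sketch; the function name is illustrative).

import math

def optimum_page_size(s_bytes, e_bytes):
    """p = sqrt(2 * s * e), the page size that minimises se/p + p/2."""
    return math.sqrt(2 * s_bytes * e_bytes)

print(optimum_page_size(1048576, 8))   # 4096.0 bytes for s = 1 MB, e = 8 bytes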
File System implementation: ( File system layout)
• File systems are stored on disk.
• Most disks can be divided up into one or more partitions.
• Sector 0 of the disk is called the MBR (Master Boot Record) and is
used to boot the computer.
• The end of the MBR contains the partition table. This table gives the
starting and ending addresses of each partition.
• One of the partitions in the table is marked as active.
• The first thing the MBR program does is to locate the active partition and read in its first block, which is called the boot block.
• The program in the boot block loads the operating system contained in
that partition.
• Super block contains all the key parameters about the file system and
is read into memory when the computer is booted or the file system is
first touched. (No. of files, no of blocks etc)
• Next comes information about free blocks in the file system,
• then the i-nodes, an array of data structures, one per file, telling all about the file,
• then the root directory, which contains the top of the file-system tree,
• and finally the remainder of the disk, which contains all the other directories and files.
Implementing files:
Various methods are in use in different operating systems; some of them are discussed here.
1. Contiguous Allocation
2. Linked-List Allocation
3. Linked-List Allocation Using a Table in Memory.
Contiguous Allocation:
• The simplest allocation scheme is to store each file as a contiguous run of disk blocks.
• Thus, on a disk with 1-KB blocks, a 50-KB file would be allocated 50 consecutive blocks. With 2-KB blocks, it would be
allocated 25 consecutive blocks.
Few Advantages:
1. It is simple to implement, because keeping track of where a file's blocks are is reduced to remembering two numbers: the disk address of the first block and the number of blocks in the file.
2. The read performance is excellent, because the entire file can be read from the disk in a single operation (only one seek is required).

Few drawbacks:
1. Over the course of time, the disk becomes fragmented; the disk is not compacted on the spot to squeeze out the holes.
2. The disk ultimately consists of files and holes.
Linked-List Allocation: ( Non- Contiguous)
• Keep each file as a linked list of disk blocks (using a pointer word).
• The first word of each block is used as a pointer to the next one; the rest of the block is for data.
Advantages:
• No space is lost to disk fragmentation (except for internal fragmentation in the last block).
• The directory entry only needs to store the disk address of the first block; the rest can be found starting there.
• Easy to implement.
Disadvantage:
• Although reading a file sequentially is straightforward, random access is extremely slow.
• To get to block n, the operating system has to start at the beginning and read the n − 1 blocks prior to it, one at a time.
• The pointer also takes up a few bytes of every block.
Linked-List Allocation Using a Table in Memory:
• Take the pointer word from each disk block and put it in a table in memory.
• chains are terminated with a special marker (e.g., −1) that is not a
valid block number.
• Such a table in main memory is called a FAT (File Allocation Table)
• The primary disadvantage of this method is that the entire table must
be in memory all the time to make it work.
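A minimal Python sketch of how such a table is used (the block numbers are hypothetical; a real FAT is an in-memory array indexed by block number): following the chain from a file's first block yields all of its blocks, with −1 as the end-of-file marker.

def fat_file_blocks(fat, first_block):
    """Follow a FAT chain starting at first_block; -1 marks the end of the file.
    fat[b] holds the number of the block that follows block b."""
    blocks = []
    block = first_block
    while block != -1:
        blocks.append(block)
        block = fat[block]
    return blocks

# Hypothetical table: file A occupies blocks 4 -> 7 -> 2, file B occupies blocks 6 -> 3.
fat = {4: 7, 7: 2, 2: -1, 6: 3, 3: -1}
print(fat_file_blocks(fat, 4))   # [4, 7, 2]
print(fat_file_blocks(fat, 6))   # [6, 3]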
i-nodes:
• The last method for keeping track of which blocks belong to which file is to associate with each file a data structure called an i-node (index node).
• The i-node lists the file's attributes and the disk addresses of the file's blocks.
• Given the i-node, it is then possible to find all the
blocks of the file.
• This array is usually far smaller than the space
occupied by the file table
• One problem with i-nodes is that if each one has room
for a fixed number of disk addresses, what happens
when a file grows beyond this limit?
Implementing directories:
• When a file is opened, the operating system uses the path name supplied by the user to locate the directory entry
on the disk.
• The directory entry provides the information needed to find the disk blocks.
• Depending on the system, this information may be the disk address of the entire file (with contiguous
allocation), the number of the first block (both linked-list schemes), or the number of the i-node.
Ways of handling file names
This PPT is only a reference for teaching. Referring to the PPT alone is not sufficient for exams.
