PPT-Unit-5 Memory Management


Unit-5 Memory Management

Chapter Outcomes
• Describe the working of a specified memory management function.
• Explain characteristics of the given memory management techniques.
• Write an algorithm for the given page replacement technique.
• Calculate page faults for the given page reference string.

Learning Objectives
• To understand basic concepts of memory and memory management in an OS
• To study virtual memory, paging, segmentation, fragmentation, etc.
• To learn various page replacement algorithms such as FIFO, LRU, etc.
Introduction

Memory management is one of the important functions of an operating system; it allocates main memory space to processes and their data at the time of their execution. The main task of memory management is efficient memory utilization.

Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution.

Along with the allocation of memory space, memory management also performs activities such as upgrading the performance of the computer system, enabling the execution of multiple processes at the same time, sharing the same memory space among different processes, and so on.
Overview of Memory

• Memory is central to the operation of a modern computer system. Memory consists of a large array of words or bytes, each with its own address.
• Memory management is achieved through memory management algorithms. Each memory management algorithm requires its own hardware support.
• Both the CPU and I/O system interact with memory. Interaction is achieved through a sequence of reads or writes to specific memory addresses. The CPU fetches from and stores in memory.
Functions of Memory Management

• Process Isolation: Process isolation means controlling how one process interacts with the data and memory of other processes.

• Tracking of Memory Locations: Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes.

• Automatic Allocation and Management: Memory should be allocated dynamically based on the priorities of the processes. Otherwise the waiting time of processes will increase, which decreases both CPU utilization and memory utilization.

• Long Term Storage: Long-term storage of processes will reduce memory utilization.
Functions of Memory Management

• Support of Modular Programming: A program is divided into a number of modules; if the memory is not sufficient for the entire program, we can load at least some of the modules instead of the entire program. This will increase CPU utilization and memory utilization.

• Protection and Access Control: Rather than applying protection mechanisms and access control to all processes, it is better to apply them only to the important applications. This saves execution time.

• Keeping Status of Main Memory Locations: Memory management keeps track of the status of each memory location, whether it is allocated or free.
Memory Partitioning

• In memory partitioning, memory is divided into a number of regions or partitions. Each region may hold one program to be executed.
• When a region is free, a program is selected from the job queue and loaded into the free region. When it terminates, the region becomes available for another program.
• Memory is divided into two sections, one for the user and one for the resident monitor of the operating system.
• Commonly the resident monitor is placed in low memory and the user program executes in high memory.
• We need to protect the monitor code and data from changes by the user program. This protection must be provided by the hardware and can be implemented in several ways.
Hardware address protection for a resident monitor

• If the generated address is greater than or equal to the fence, it is a legitimate reference to user memory and is sent to the memory unit as usual.

• If the generated address is less than the fence, the address is an illegal reference to monitor memory.

• The reference is intercepted and a trap to the operating system is generated. The operating system will then take the appropriate action. Notice that every reference to memory by the user program must be checked.
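The fence check above can be sketched in a few lines. This is a minimal illustration, not real hardware: the fence value used here is an assumption, and the hardware performs this comparison on every memory reference.

```python
# Hedged sketch of the fence-register check described above.
# FENCE is an assumed value; real hardware performs this test on
# every address generated by the user program.
FENCE = 0x4000  # first address belonging to user memory (assumption)

def check_address(addr: int) -> str:
    """Return 'user' if the reference is legitimate, else trap."""
    if addr >= FENCE:
        return "user"          # sent to the memory unit as usual
    raise MemoryError("trap: illegal reference to monitor memory")

print(check_address(0x4100))   # user
```

An address below the fence raises the trap, modelling the transfer of control to the operating system.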
Memory Management Scheme

Two major memory management schemes are possible. Each approach divides memory into a number of regions or partitions.

• Static (Fixed Sized) Memory Partitioning (or Multiprogramming with a Fixed number of Tasks (MFT))

• Dynamic (Variable) Memory Partitioning (or Multiprogramming with a Variable number of Tasks (MVT))
Static (Fixed Sized) Memory Partitioning

• In static memory partitioning, the memory is divided into a number of fixed-size partitions that do not change as the system runs.
• Each partition in static memory partitioning contains exactly one process, so the number of programs to be executed (i.e. the degree of multiprogramming) depends on the number of partitions.
• There are two alternatives for fixed-size memory partitioning, namely equal-sized partitions (a) and unequal-sized partitions (b).
Job Scheduling in fixed sized memory partitions

• As jobs enter the system, they are put into a job queue. The job scheduler takes into account the memory requirement of each job and the available regions in determining which jobs are allocated memory.

• When a job is allocated space, it is loaded into a region. It can then compete for the CPU. When a job terminates, it releases its memory region, which the job scheduler may then fill with another job from the job queue.

• A number of variations are possible in the allocation of memory to jobs. One strategy is to classify all jobs on entry to the system according to their memory requirements. Either the user specifies the maximum amount of memory required, or the system can attempt to determine memory requirements automatically.
MFT with separate queue for each region

• If we have three user memory regions of sizes 2K, 6K, and 12K we need three queues, namely Q2, Q6 and Q12.

• An incoming job requiring 4K of memory would be appended to Q6, a new job needing 8K would be put in Q12, and a job of 2K would go in Q2.

• Each queue is scheduled separately. Since each queue has its own memory region, there is no competition between queues for memory.
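The queue-routing rule above — send each job to the queue of the smallest region that can hold it — can be sketched as follows, using the region sizes from the example:

```python
# Hedged sketch of routing jobs to per-region queues in MFT.
# A job goes to the queue of the smallest region that can hold it.
REGIONS = [2, 6, 12]  # region sizes in KB, from the example

def route(job_kb: int) -> str:
    for size in sorted(REGIONS):
        if job_kb <= size:
            return f"Q{size}"
    raise ValueError("job too large for any region")

print(route(4))   # Q6
print(route(8))   # Q12
print(route(2))   # Q2
```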
MFT with a unified queue

• Another approach is to throw all jobs into one queue. The job scheduler selects the next job to be run and waits until a memory region of the required size is available.
• Suppose that we had a FCFS job scheduler and regions of 2K, 6K, and 12K. We would first assign job1 (5K) to the 6K region and job2 (2K) to the 2K region. Since our next job requires 3K, we need the 6K region.
• Since the 6K region is being used by job1, we must wait until job1 terminates; then job3 will be allocated the 6K region. Job4 is then allocated the 12K region, and so on.
MFT Advantages and disadvantages

Advantages:
• Simple to implement.
• It requires minimal operating system software and processing overhead.
• Fixed partitioning permits efficient utilization of the processor and I/O devices.

Disadvantages:
• The main problem with the fixed partitioning method is how to determine the number of partitions and how to determine their sizes.
• Memory wastage.
• Inefficient use of memory due to internal fragmentation.
• The maximum number of active processes is fixed.
Dynamic (Variable) Memory Partitioning

• In variable memory partitioning the partitions can vary in number and size; the amount of memory allocated is exactly the amount of memory a process requires.

• The operating system keeps a table indicating which parts of memory are available and
which are occupied. Initially all memory is available for user programs and is considered
as one large block of available memory, a hole.

• When a job arrives and needs memory, we search for a hole large enough for this job. If
we find one, we allocate only as much as is needed, keeping the rest available to satisfy
future requests.
Dynamic (Variable) Memory Partitioning

For example, assume 256K memory available and a resident monitor of 40K. This situation
leaves 216K for user programs.
Example memory allocation and job scheduling for MVT
Internal Fragmentation

Internal fragmentation occurs when the memory allocator leaves extra space empty inside a block of memory that has been allocated to a client. For example, block sizes may be required to be evenly divisible by four, eight or 16 bytes. When this occurs, a client that needs 57 bytes of memory may be allocated a block that contains 60 bytes, or even 64. The extra bytes that the client doesn't need go to waste, and over time these tiny chunks of unused memory can build up and create large quantities of memory that can't be put to use by the allocator. Because all of these useless bytes are inside larger memory blocks, the fragmentation is considered internal.
External Fragmentation

External fragmentation exists when enough total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes. For example, suppose holes of 20K and 10K are available in a multiple partition allocation scheme, and the next process requests 30K of memory. Actually 30K of memory is free, which would satisfy the request, but the space is not contiguous. Thus there is external fragmentation of memory.
Dynamic Storage Allocation

• Disk space can be viewed as a large array of disk blocks. At any given time some of these blocks are allocated to files and others are free.

• Disk space is seen as a collection of free and used segments, where each segment is a contiguous set of disk blocks. An unallocated segment is called a hole. The dynamic storage allocation problem is how to satisfy a request of size 'n' from a list of free holes. There are many solutions to this problem.

• The set of holes is searched to determine which hole is best to allocate. The most common strategies used to select a free hole from the set of available holes are first fit, best fit and worst fit.
Dynamic Storage Allocation

1. First Fit: Allocate the first hole (or free block) that is big enough for the new process. Searching can start either at the beginning of the set of holes or where the previous first fit search ended. We can stop searching as soon as we find a large enough free hole.

2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest leftover hole.

3. Worst Fit: Allocate the largest hole. Again we must search the entire list, unless it is sorted by size.

First fit and best fit are better than worst fit in both time and storage utilization; first fit is generally faster.
Dynamic Storage Allocation

Consider a swapping system in which memory consists of the following hole sizes in memory
order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB and 15KB. Which hole is taken for successive
segment requests for (i) 12 KB (ii) 10KB (iii) 9 KB for first fit, best fit and worst fit.
Sol: Memory arrangements for 12 KB job are as follows:
Dynamic Storage Allocation

Sol: Memory arrangements for 10 KB job are as follows:


Dynamic Storage Allocation

Sol: Memory arrangements for 9 KB job are as follows:


Dynamic Storage Allocation

In first fit we allocate the first hole that is big enough for the job. In best fit, we arrange all holes in ascending order and allocate the smallest hole that is big enough for the job. In worst fit, we arrange all holes in descending order and allocate the largest hole, which is certainly big enough for the job.
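The three strategies and the worked example above can be sketched as follows. This is a simplified model: each request carves its allocation out of the chosen hole, and the remainder stays behind as a smaller hole.

```python
# Hedged sketch of first/best/worst fit on the hole list from the
# example: 10, 4, 20, 18, 7, 9, 12 and 15 KB, in memory order.
def allocate(holes, size, strategy):
    if strategy == "first":
        idx = next(i for i, h in enumerate(holes) if h >= size)
    elif strategy == "best":
        idx = min((i for i, h in enumerate(holes) if h >= size),
                  key=lambda i: holes[i])
    else:  # worst fit: largest hole (assumed big enough here)
        idx = max(range(len(holes)), key=lambda i: holes[i])
    chosen = holes[idx]
    holes[idx] -= size          # leftover stays as a smaller hole
    return chosen

for strategy in ("first", "best", "worst"):
    holes = [10, 4, 20, 18, 7, 9, 12, 15]
    picks = [allocate(holes, req, strategy) for req in (12, 10, 9)]
    print(strategy, picks)
# first [20, 10, 18]
# best [12, 10, 9]
# worst [20, 18, 15]
```

So for the successive 12 KB, 10 KB and 9 KB requests, first fit takes the 20K, 10K and 18K holes; best fit the 12K, 10K and 9K holes; worst fit the 20K, 18K and 15K holes.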
Compare MVT and MFT

• Region size: In MFT, memory is divided into several partitions of fixed size, and those sizes never change. In MVT, the region or partition size is not fixed and can vary dynamically.
• Process size: In MFT a process cannot grow at run time. In MVT a process can grow or shrink at run time.
• Degree of multiprogramming: Fixed in MFT; dynamic in MVT.
• Memory utilization: Poor in MFT; good in MVT.
Compare MVT and MFT

• Fragmentation: MFT suffers from internal fragmentation; MVT suffers from external fragmentation.
• Implementation: MFT is easy to implement; MVT is more difficult.
• Division of memory: In MFT the division of memory into a number of partitions and their sizes is made in the beginning, prior to the execution of user programs, and remains fixed thereafter. In MVT the size and the number of partitions are decided at run time by the operating system.
• Example: MFT — IBM 360, DOS etc.; MVT — IBM OS.
Free Space Management Techniques

• Files are created and deleted frequently during the operation of a computer system. Since there is only a limited amount of disk space, it is necessary to reuse the space from deleted files for new files.
• To keep track of free disk space, the file system maintains a free space list. The free space list records all disk blocks which are free.
• To create a file, we search the free space list for the required amount of space and allocate it to the new file. This space is then removed from the free space list. When a file is deleted, its disk space is added to the free space list.
• The process of looking after and managing the free blocks of the disk is called free space management. The methods used in free space management are Bit Vector, Linked List, Grouping and Counting.
Bit Vector

• The free space list need not be implemented as a list; it can be implemented as a Bit Map or Bit Vector. A bit map is a series or collection of bits where each bit corresponds to a disk block.
• Each block in the bit map is represented by one bit. If the block is free, the bit is '0'; if the block is allocated, the bit is '1'.
• For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26 and 27 are free. The free space bit map would be 11000011000000111001111110001111…
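The bit map above can be reconstructed programmatically. A sketch, assuming a 32-block disk and the free/allocated convention used here (0 = free, 1 = allocated):

```python
# Hedged sketch of building the bit map: bit i is '0' when block i
# is free and '1' when allocated (the convention used in this text).
free_blocks = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
total_blocks = 32  # assumed disk size for the example

bitmap = "".join("0" if b in free_blocks else "1"
                 for b in range(total_blocks))
print(bitmap)  # 11000011000000111001111110001111

# Finding the first free block is a simple scan for a '0' bit.
first_free = bitmap.index("0")
print(first_free)  # 2
```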
Bit Vector

Advantages of Bit Map/Bit Vector:


1. It is a simple and efficient method to find the first free block or n consecutive free blocks on the disk.
2. A bit map requires less space, as it uses only 1 bit per block.

Disadvantages of Bit Map/Bit Vector:

1. Extra space is required to store the bit map.
2. It may require special hardware support to do bit operations, i.e. finding out which bit is 1 and which bit is 0.
3. Bit maps are inefficient unless they are kept in main memory and written to disk occasionally for recovery needs.
Linked List

• Linked list is another technique for free space management in which the free disk blocks are linked together, i.e. a free block contains a pointer to the next free block.
• In the linked list technique, we link all the free disk blocks together, keeping a pointer to the first free block. This block contains a pointer to the next free disk block, and so on.
• In our example we would keep a pointer to block 2 as the first free block. Block 2 would contain a pointer to block 3, which points to block 4, which would point to block 5, and so on.
Linked List

Advantages of Linked Free Space List:

• If a file is to be allocated a free block, the operating system can simply allocate the first block in the free space list and move the head pointer to the next free block in the list.
• No disk fragmentation, as every block is utilized.

Disadvantages of Linked Free Space List:

• It is not a very efficient scheme: to traverse the list, we must read each block, which requires substantial time.
• The pointers used here also consume disk space, so additional storage is required.
Grouping

• Grouping is a modification of the free-list method. In grouping, the first free block stores the addresses of 'n' free blocks. The first n-1 of these are actually free; the last one is the disk address of another block containing the addresses of another 'n' free blocks.
• The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.
• A disk block thus contains the addresses of many free blocks, and a block containing free block pointers itself becomes free when those blocks are used.
Virtual Memory

• Virtual memory is a technique which allows the execution of processes that may not be
completely in memory.

• Virtual memory is the separation of user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided for programmers when
only a smaller physical memory is available.

• The basic idea behind virtual memory is that the combined size of the program, data and
stack may exceed the amount of physical memory available for it.

• The operating system keeps those parts of the program currently in use in main memory,
and the rest on the disk.
Virtual Memory

Virtual memory that is larger than physical memory
Paging

• Paging is a memory management technique by which a computer stores and retrieves data from secondary storage for use in main memory. In paging, the operating system retrieves data from secondary storage in same-size blocks called pages.

• Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

• The basic idea behind paging is that when a process is swapped in, the pager only loads into memory those pages that it expects the process to need.

• Paging permits a program's memory to be non-contiguous, thus allowing a program to be allocated physical memory wherever it is available.
Paging Hardware

• Every address generated by the CPU is divided into two parts, namely a page number (p) and a page offset (d).

• The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical address that is sent to the memory unit.
Paging Model of Logical and Physical Memory
Paging Example for 32 word memory with 4 word pages

For example, using a page size of 4 words and a physical memory of 32 words (8 frames), we show how the user's view of memory can be mapped into physical memory. Logical address 0 is page 0, offset 0. We find that page 0 is in frame 5.

Thus logical address 0 maps to physical address 20 = (5 × 4 + 0). Logical address 4 is page 1, offset 0. Logical address 4 maps to physical address (6 × 4 + 0) = 24.
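The translation in this example can be sketched directly: split the logical address into page number and offset, look up the frame, and recombine.

```python
# Hedged sketch of the translation above: page size 4 words,
# page table mapping page 0 -> frame 5 and page 1 -> frame 6
# (frame numbers taken from the example).
PAGE_SIZE = 4
page_table = {0: 5, 1: 6}

def translate(logical: int) -> int:
    p, d = divmod(logical, PAGE_SIZE)   # page number, page offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0))  # 20
print(translate(4))  # 24
```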
Paging

Paging itself is a form of dynamic relocation. Every logical address is mapped by the paging hardware to some physical address. Each user page needs one frame; thus if the job requires n pages, there must be n frames available in memory.
Each page of the job is loaded into one of the allocated frames, and the frame number is put in the page table for this job, and so on.
Using a paging scheme we have no external fragmentation: any free frame can be allocated to a job that needs it. Each job has its own page table. The page table may be implemented as a set of dedicated registers.
Advantages and Disadvantages of Paging

Advantages of Paged Memory Allocation:

1. Allows jobs to be allocated in non-contiguous memory locations.
2. Paging eliminates external fragmentation.
3. Memory is used more efficiently.
4. Paging increases memory and processor utilization.
5. Supports a higher degree of multiprogramming.

Disadvantages of Paged Memory Allocation:

1. Page address mapping hardware usually increases the cost of the computer.
2. Internal fragmentation still exists, though only in the last page of each job.
3. Requires the entire job to be stored in memory (without demand paging).
4. The size of a page is crucial (not too small, not too large).
5. Memory must be used to store the various tables like the page table, memory map table etc.
Segmentation

• Like paging, segmentation is also a memory management scheme, one that implements the user's view of a program.
• In segmentation, the entire logical address space is considered as a collection of segments, with each segment having a number and a length.
• The length of a segment may range from 0 to some maximum value as specified by the hardware, and may also change during execution. The user specifies each logical address as consisting of a segment number (s) and an offset (d).
• A segment is a logical unit such as a main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays etc.
Segmentation Hardware

• A segment is defined as a logical grouping of instructions. A logical address space is a collection of segments. Every program/job is a collection of segments such as subroutines, arrays etc.
• Each segment has a name and a length. Addresses specify both the segment name and the offset within the segment. The user specifies each address by two quantities: a segment name and an offset.
• A logical address consists of two parts, a segment number 's' and an offset into that segment 'd'. The segment number is used as an index into the segment table. Each entry of the segment table has a segment base and a segment limit.
Segmentation Example
• The segment table has a separate entry for each segment, giving the beginning address of the segment in physical memory (the base) and the length of that segment (the limit). For example, segment 2 is 400 words long, beginning at location 4300. Thus a reference to word 53 of segment 2 is mapped onto location 4300 + 53 = 4353.

• A reference to segment 3, word 852, is mapped to 3200 (the base of segment 3) + 852 = 4052. A reference to word 1222 of segment 0 would result in a trap to the operating system, since this segment is only 1000 words long.
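The base/limit lookup in this example can be sketched as follows. The bases of segments 2 and 3 and the limit of segment 0 come from the example; the base of segment 0 and the limit of segment 3 are not given in the text, so the values used here are assumptions for illustration only.

```python
# Hedged sketch of segment-table translation for the example above.
segment_table = {
    0: (1400, 1000),  # (base, limit); base 1400 is an assumption
    2: (4300, 400),   # from the example: 400 words at 4300
    3: (3200, 900),   # base 3200 from the example; limit 900 assumed
}

def translate(s: int, d: int) -> int:
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d

print(translate(2, 53))   # 4353
print(translate(3, 852))  # 4052
```

A reference to word 1222 of segment 0 fails the limit check and raises the trap, as the example describes.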
Advantages and Disadvantages of Segmentation
Advantages of Segmentation:
1. Eliminates internal fragmentation.
2. Provides virtual memory.
3. Segments can grow.
4. Supports dynamic linking and loading.
5. Enforces access control (i.e. read, write, execute).
6. It provides a convenient way of organizing programs and data for the programmer.

Disadvantages of Segmentation:
1. It suffers from external fragmentation.
2. Address translation, i.e. conversion from logical address to physical address, is not as simple a function as in paging.
3. Increased complexity in the operating system.
4. Increased hardware cost and processor overhead for address mapping.
5. There is difficulty in managing variable-size segments on the secondary storage.
6. The maximum size of a segment is limited by the size of main memory.
Differentiate between Paging and Segmentation

• In paging, the main memory is partitioned into page frames (or blocks); in segmentation, the main memory is partitioned into segments.
• In paging, the logical address space is divided into pages by the compiler or MMU (Memory Management Unit); in segmentation, the logical address space is divided into segments as specified by the user/programmer.
• The OS maintains a page map table for mapping between frames and pages; for segmentation it maintains a segment map table for the same purpose.
• Paging suffers from internal fragmentation (page breaks); segmentation suffers from external fragmentation.
Differentiate between Paging and Segmentation

• Paging does not support the user's view of memory; segmentation supports the user's view of memory.
• In paging, the processor uses a page number and offset to calculate the absolute address; in segmentation, the processor uses a segment number and offset.
• Paging is invisible to the user; segmentation is visible to the user.
• Paging is faster than segmentation.
Compaction

Compaction is a method used to overcome the external fragmentation problem. All free blocks are brought together as one large block of free space.
The collection of free space from multiple non-contiguous blocks into one large free block in a system's memory is called compaction.
Compaction is possible only if relocation is dynamic, performed at execution time using base and limit registers. The simplest compaction algorithm is to move all jobs towards one end of memory; all holes move in the other direction, producing one large hole of available memory.
Compaction can be quite expensive. Compaction changes the allocation of memory to make free space contiguous and hence useful. Compaction also consumes system resources.
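The simplest algorithm described above — slide every job toward one end so the holes merge — can be sketched with an illustrative memory layout (the job names and sizes here are assumptions, not from the text):

```python
# Hedged sketch of compaction: move all jobs to the low end of
# memory so the holes coalesce into one large hole at the high end.
memory = [("job1", 100), ("hole", 50), ("job2", 200),
          ("hole", 30), ("job3", 70)]

jobs = [(name, size) for name, size in memory if name != "hole"]
free = sum(size for name, size in memory if name == "hole")
compacted = jobs + [("hole", free)]

print(compacted)
# [('job1', 100), ('job2', 200), ('job3', 70), ('hole', 80)]
```

The two scattered holes (50 and 30) become one contiguous 80-unit hole, which is what makes the freed space usable for a larger request.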
Page Fault

• It is possible that not all pages of the program were brought into memory; some pages are loaded into memory and some pages are kept on disk.
• To distinguish the pages that are in memory from the pages that are on disk, a valid-invalid bit is provided. Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit.
• The bit is set to valid if the associated page is in memory.
• But what happens if the process tries to access a page that was not brought into memory? Access to a page marked invalid causes a Page Fault.
• A page fault occurs when a program accesses a page that has been mapped in its address space but has not been loaded into physical memory. When the page (data) requested by a program is not available in memory, it is called a page fault.
Page table when some pages are not in main memory
Demand Paging

• Demand paging is a method of virtual memory management.
• With demand-paged virtual memory, pages are only loaded when they are demanded during program execution; pages that are never accessed are thus never loaded into physical memory.
• A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a disk). When we want to execute a process, we swap it into memory.
• Rather than swapping the entire process into memory, however, we use a lazy swapper called a pager. A lazy swapper never swaps a page into memory unless that page will be needed.
• When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory.
Overlays

• The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space that was previously occupied by instructions that are no longer needed.
• For example, consider a two-pass assembler. During pass-1 it constructs a symbol table, and then during pass-2 it generates machine language code.
• We may be able to partition such an assembler into pass-1 code, pass-2 code, the symbol table and common support routines used by both pass-1 and pass-2.
Overlays

To load everything at once we would require 200K of memory. If only 150K is available, we cannot run our process. Notice that pass-1 and pass-2 do not need to be in memory at the same time. Thus we define two overlays:
1. Overlay A is the symbol table, common routines and pass-1.
2. Overlay B is the symbol table, common routines and pass-2.

We add an overlay driver (10K) and start with overlay A in memory. When we finish pass-1, we jump to the overlay driver, which reads overlay B into memory, overwriting overlay A, and then transfers control to pass-2.
Page Replacement Algorithms

• When the processor needs to execute a page and that page is not available in main memory, this situation is called a page fault.

• To bring the required page into main memory when no space is available, we need to remove a page from main memory to free space for the new page which needs to be executed.

• When a page fault occurs, the operating system has to choose a page to remove from memory to make room for the page that has to be brought in. This is known as page replacement.
Steps in Page Replacement

1. Find the location of the desired page on the disk.
2. Find a free frame:
(i) If there is a free frame, use it.
(ii) If there is no free frame, use a page-replacement algorithm to select a victim frame.
(iii) Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Restart the user process.
FIFO (First In First Out) Page Replacement Algorithm

• The simplest page replacement algorithm is FIFO.
• A FIFO replacement algorithm associates with each page the time when that page was brought into memory.
• When a page must be replaced, the oldest page is chosen.
• A FIFO queue is created to hold all pages in memory. We replace the page at the head of the queue.
• When a page is brought into memory, we insert it at the tail of the queue.
• The FIFO page replacement algorithm is easy to understand and program. Its performance is not always good.
FIFO (First In First Out) Page Replacement Algorithm

Consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. The three frames are initially empty.

There are 15 faults altogether.


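The fault count for this string can be verified with a short simulation — a minimal sketch of FIFO replacement, not production code:

```python
# Hedged sketch of FIFO page replacement with three initially
# empty frames; the queue records the order pages arrived in.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
```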
FIFO (First In First Out) Page Replacement Algorithm

Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. Calculate the page faults for three and four frames.

The number of faults for four frames (10) is greater than the number of faults for three frames (9). This most unexpected result is known as Belady's anomaly.
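Belady's anomaly for this string can be reproduced with the same FIFO simulation: adding a frame increases the fault count from 9 to 10.

```python
# Hedged sketch demonstrating Belady's anomaly under FIFO.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10
```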
FIFO (First In First Out) Page Replacement Algorithm

Advantages of FIFO Page Replacement Algorithm:

1. It is simple to implement.
2. It is the easiest algorithm.
3. Easy to understand and execute.

Disadvantages of FIFO Page Replacement Algorithm:

1. It is not very effective.
2. The system needs to keep track of each frame.
3. Its performance is not always good.
4. It suffers from Belady's anomaly.
5. A bad replacement choice increases the page fault rate and slows process execution.
5. Bad replacement choice increases the page fault rate and slow process execution.
Optimal Page Replacement Algorithm

• An optimal page replacement algorithm has the lowest page fault rate of all algorithms and never suffers from Belady's anomaly.
• The optimal replacement algorithm states: replace the page which will not be used for the longest period of time.
• Consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.
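The optimal (clairvoyant) rule can be sketched as follows: on each fault, evict the resident page whose next use lies farthest in the future, or that is never used again. For this string with three frames it yields 9 faults.

```python
# Hedged sketch of optimal page replacement: requires knowing the
# whole reference string in advance, which is why it is only a
# theoretical benchmark.
def optimal_faults(refs, n_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                def next_use(p):
                    future = refs[i + 1:]
                    return future.index(p) if p in future else float("inf")
                frames.discard(max(frames, key=next_use))
            frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9
```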
Optimal Page Replacement Algorithm

Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. Calculate the page faults using Optimal Page Replacement with 3 frames.
Optimal Page Replacement Algorithm

Advantages of Optimal Page Replacement Algorithm:

1. It is the best possible algorithm.
2. It gives the smallest number of page faults.
3. It never suffers from Belady's anomaly.
4. It is roughly twice as good as the FIFO page replacement algorithm.

Disadvantages of Optimal Page Replacement Algorithm:

1. This algorithm is difficult to implement.
2. It is only used as a theoretical benchmark for page replacement.
3. It requires future knowledge of the reference string.
LRU (Least Recently Used) Page Replacement Algorithm

• If we use the recent past as an approximation of the near future, then we would replace
that page which has not been used for the longest period of time. This is the least recently
used algorithm.

• LRU replacement associates with each page the time of its last use. When a page is to be
replaced, LRU chooses that page which has not been used for the longest period of time.

• We can think of this strategy as the optimal page-replacement algorithm looking backward
in time, rather than forward.
LRU (Least Recently Used) Page Replacement Algorithm

For example, consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

No. of page faults = 12.
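The 12-fault result can be checked with a short simulation. A minimal sketch using an ordered dictionary to track recency:

```python
# Hedged sketch of LRU page replacement: evict the page whose last
# use lies farthest in the past. An OrderedDict keeps pages in
# recency order, least recently used first.
from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```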


LRU (Least Recently Used) Page Replacement Algorithm

Advantages of LRU Page Replacement Algorithm:

1. LRU is actually quite a good algorithm.
2. It never suffers from Belady's anomaly.
3. The LRU algorithm is feasible to implement.

Disadvantages of LRU Page Replacement Algorithm:

1. The LRU algorithm requires additional data structures and hardware support.
2. Its implementation is not very easy.
LRU (Least Recently Used) Page Replacement Algorithm

Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. Calculate the page faults using LRU Page Replacement with 3 frames.
Summary
Example

Consider the page reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2. How many page faults occur for the following replacement algorithms, assuming three frames?
(i) FIFO
(ii) LRU
(iii) Optimal.
Solution - FIFO
Solution - LRU
Solution – Optimal
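The three solutions above can be checked with a small simulator. This is a hedged sketch; evictions follow the textbook rules (FIFO: oldest page in memory, LRU: least recently used, Optimal: farthest next use or never used again).

```python
# Hedged sketch: one simulator, three victim-selection rules.
def simulate(refs, n_frames, victim):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if victim == "lru":
                frames.remove(page)
                frames.append(page)        # most recent moves to the tail
            continue
        faults += 1
        if len(frames) == n_frames:
            if victim == "optimal":
                future = refs[i + 1:]
                key = lambda p: future.index(p) if p in future else len(future)
                frames.remove(max(frames, key=key))
            else:                          # fifo and lru both evict the head
                frames.pop(0)
        frames.append(page)
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
for name in ("fifo", "lru", "optimal"):
    print(name, simulate(refs, 3, name))
# fifo 9
# lru 7
# optimal 6
```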
Example

Q. Explain the FIFO (First In First Out) page replacement algorithm for the reference string 7 0 1 2 0 3 0 4 2 3 1 0 3. (4 marks)
Sol: A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen. It maintains a FIFO queue to hold all pages in memory. We replace the page at the head of the queue. When a page is brought into memory, we insert it at the tail of the queue.
Consider that three frames are available.
Thank You

Vijay Patil
Department of Computer Engineering (NBA Accredited)
Vidyalankar Polytechnic
Vidyalankar College Marg, Wadala(E), Mumbai 400 037
E-mail: vijay.patil@vpt.edu.in
