PPT-Unit-5 Memory Management
Chapter Outcomes
• Describe the working of the specified memory management function.
• Explain the characteristics of the given memory management techniques.
• Write an algorithm for the given page replacement technique.
• Calculate page faults for the given page reference string.
Learning Objectives
• To understand the basic concepts of Memory and Memory Management in an OS
• To study Virtual Memory, Paging, Segmentation, Fragmentation, etc.
• To learn various Page Replacement Algorithms such as FIFO, LRU, etc.
Introduction
Memory management is one of the important functions of an operating system; it allocates main memory space to processes and their data at the time of execution. The main task of memory management is efficient memory utilization.
Along with allocating memory space, memory management also performs activities such as upgrading the performance of the computer system, enabling the execution of multiple processes at the same time, and sharing the same memory space among different processes.
Overview of Memory
• Process Isolation: Process isolation means controlling how one process interacts with the data and memory of other processes.
• Tracking of Memory Locations: Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It also determines how much memory is to be allocated to each process.
• Long Term Storage: Moving inactive processes to long-term storage reduces main memory utilization.
Functions of Memory Management
• Protection and Access Control: Protection mechanisms and access control need not be applied to all processes; applying them only to the important applications saves execution time.
• Keeping Status of Main Memory Locations: Memory management keeps track of the status of each memory location, whether it is allocated or free.
Memory Partitioning
Two major memory management schemes are possible. Each approach divides memory into
a number of regions or partitions.
• Static (Fixed Sized) Memory Partitioning (or Multiprogramming with Fixed number of
Tasks (MFT))
• In static memory partitioning, the memory is divided into a number of fixed-sized partitions that do not change as the system runs.
• Each partition in static memory partitioning contains exactly one process, so the number of programs that can execute concurrently (i.e., the degree of multiprogramming) depends on the number of partitions.
• There are two alternatives for fixed-sized memory partitioning, namely equal-sized partitions (a) and unequal-sized partitions (b).
Job Scheduling in fixed sized memory partitions
• As jobs enter the system, they are put into a job queue. The job scheduler takes into account
the memory requirement of each job and the available regions in determining which jobs are
allocated memory.
• When a job is allocated space, it is loaded into a region. It can then compete for the CPU. When a job terminates, it releases its memory region, which the job scheduler may then fill with another job from the job queue.
• A number of variations are possible in allocating memory to jobs. One strategy is to classify all jobs on entry to the system according to their memory requirements. Either the user specifies the maximum amount of memory required, or the system can attempt to determine the memory requirements automatically.
MFT with separate queue for each region
Advantages:
• Simple to implement
• It requires minimal operating system software and processing overhead.
• Fixed partitioning allows efficient utilization of the processor and I/O devices.
Disadvantages:
• The main problem with the fixed partitioning method is how to determine the number of
partitions, and how to determine their sizes.
• Memory wastage
• Inefficient use of memory due to internal fragmentation.
• Maximum number of active processes is fixed.
Dynamic (Variable) Memory Partitioning
• In variable memory partitioning, the partitions can vary in number and size, and the amount of memory allocated is exactly the amount a process requires.
• The operating system keeps a table indicating which parts of memory are available and
which are occupied. Initially all memory is available for user programs and is considered
as one large block of available memory, a hole.
• When a job arrives and needs memory, we search for a hole large enough for this job. If
we find one, we allocate only as much as is needed, keeping the rest available to satisfy
future requests.
Dynamic (Variable) Memory Partitioning
For example, assume 256K of memory is available and a resident monitor occupies 40K. This situation leaves 216K for user programs.
Example memory allocation and job scheduling for MVT
Internal Fragmentation
• Disk space can be viewed as a large array of disk blocks. At any given time some of these
blocks are allocated to files and others are free.
• Disk space can be seen as a collection of free and used segments, where each segment is a contiguous set of disk blocks. An unallocated segment is called a hole. The dynamic storage allocation problem is how to satisfy a request of size ‘n’ from a list of free holes; there are many solutions to this problem.
• The set of holes is searched to determine which hole is best to allocate. The most common
strategies used to select a free hole from the set of available holes are first fit, best fit and
worst fit.
Dynamic Storage Allocation
1. First Fit: Allocate the first hole (or free block) that is big enough for the new process.
Searching can start either at the beginning of the set of holes or where the previous first fit
search ended. We can stop searching as soon as we find a large enough free hole. First fit is
generally faster.
2. Best Fit: Allocate the smallest hole that is big enough. We search the entire list, unless the
list is kept ordered by size. This strategy produces the smallest left over hole.
3. Worst Fit: Allocate the largest hole. Again we must search the entire list, unless it is sorted
by size.
First fit and best fit are better than worst fit in both time and storage utilization; first fit is generally the fastest.
Dynamic Storage Allocation
Consider a swapping system in which memory consists of the following hole sizes in memory
order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB and 15KB. Which hole is taken for successive
segment requests for (i) 12 KB (ii) 10KB (iii) 9 KB for first fit, best fit and worst fit.
Sol: Memory arrangements for 12 KB job are as follows:
Dynamic Storage Allocation
In first fit, allocate the first hole that is big enough for the job. In best fit, we arrange all holes in ascending order and allocate the smallest hole that is big enough for the job. In worst fit, we arrange all holes in descending order and allocate the largest hole.
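The three placement strategies can be sketched in Python (a minimal simulation; the hole sizes and request sizes are those of the example above, and each chosen hole is simply shrunk in place):

```python
def allocate(holes, request, strategy):
    """Pick a hole for `request` KB under the given strategy and shrink it in place.
    Returns the original size of the chosen hole (assumes some hole fits)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if strategy == "first":
        chosen = min(candidates, key=lambda c: c[1])[1]   # lowest address that fits
    elif strategy == "best":
        chosen = min(candidates)[1]                       # smallest hole that fits
    else:                                                 # "worst": largest hole
        chosen = max(candidates)[1]
    size = holes[chosen]
    holes[chosen] -= request                              # leftover stays as a smaller hole
    return size

for strategy in ("first", "best", "worst"):
    holes = [10, 4, 20, 18, 7, 9, 12, 15]                 # KB, in memory order
    print(strategy, [allocate(holes, r, strategy) for r in (12, 10, 9)])
```

This prints first [20, 10, 18], best [12, 10, 9] and worst [20, 18, 15], i.e. the holes taken for the successive 12 KB, 10 KB and 9 KB requests under each strategy.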
Compare MVT and MFT
• Files are created and deleted frequently during the operation of a computer system. Since
there is only a limited amount of disk space, it is necessary to reuse the space from deleted
files for new files.
• To keep track of free disk space, the file system maintains a free space list. The free space list
records all disk blocks, which are free.
• To create a file, we search the free space list for the required amount of space and allocate it to the new file. This space is then removed from the free space list. When a file is deleted, its disk space is added back to the free space list.
• The process of looking after and managing the free blocks of the disk is called free space management. The methods used in free space management are Bit Vector, Linked List, Grouping and Counting.
Bit Vector
• The free space list is frequently implemented not as a true list but as a Bit Map or Bit Vector. A bit map is a series or collection of bits where each bit corresponds to a disk block.
• Each block in the bit map is represented by one bit. If the block is free, the bit is ‘0’; if the block is allocated, the bit is ‘1’.
• For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26 and 27
are free, the free space bit map would be, 11000011000000111001111110001111………
Bit Vector
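The example bit map can be reproduced with a short sketch (the free-block numbers and the free = ‘0’ convention come from the text; a 32-block disk is assumed):

```python
def build_bitmap(n_blocks, free_blocks):
    # '0' marks a free block, '1' an allocated one (the convention used above)
    free = set(free_blocks)
    return "".join("0" if b in free else "1" for b in range(n_blocks))

free_blocks = [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
print(build_bitmap(32, free_blocks))  # -> 11000011000000111001111110001111
```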
• Grouping is a free space management technique that modifies the free-list method. In grouping, the addresses of ‘n’ free blocks are stored in the first free block. The first n-1 of these blocks are actually free; the last one holds the disk address of another block containing the addresses of another ‘n’ free blocks.
• The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.
• Thus a disk block contains the addresses of many free blocks, and a block containing free-block pointers itself becomes free once those blocks are used.
Grouping
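A minimal sketch of walking such a grouped free list (the disk contents and block numbers here are hypothetical, and `None` marks the end of the chain; in the real scheme the index blocks themselves are also free blocks):

```python
def list_free(disk, head):
    """Walk the grouped free list starting at index block `head`."""
    free = []
    block = head
    while block is not None:
        entries = disk[block]      # addresses stored in this index block
        free.extend(entries[:-1])  # the first n-1 entries are free blocks
        block = entries[-1]        # the last entry points to the next index block
    return free

# Hypothetical layout: block 2 stores [3, 4, 5, 8]; block 8 stores [9, 10, None].
disk = {2: [3, 4, 5, 8], 8: [9, 10, None]}
print(list_free(disk, 2))  # -> [3, 4, 5, 9, 10]
```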
Virtual Memory
• Virtual memory is a technique which allows the execution of processes that may not be
completely in memory.
• Virtual memory is the separation of user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided for programmers when
only a smaller physical memory is available.
• The basic idea behind virtual memory is that the combined size of the program, data and
stack may exceed the amount of physical memory available for it.
• The operating system keeps those parts of the program currently in use in main memory,
and the rest on the disk.
Virtual Memory
• Paging is a memory management technique by which a computer stores and retrieves data
from secondary storage for use in main memory. In paging, the operating system retrieves
data from secondary storage in same-size blocks called pages.
• The basic idea behind paging is that when a process is swapped in, the pager only loads into
memory those pages that it expects the process to need.
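The address translation that paging implies can be sketched as follows (the page size and page-table contents here are hypothetical, chosen only for illustration):

```python
PAGE_SIZE = 1024  # bytes per page (assumed for illustration)

def translate(logical_addr, page_table):
    """Split a logical address into (page, offset) and map the page to its frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (hypothetical)
print(translate(2100, page_table))  # page 2, offset 52 -> 7*1024 + 52 = 7220
```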
Disadvantages of Segmentation:
1. It suffers from external fragmentation.
2. Address translation, i.e. conversion from a logical address to a physical address, is not a simple function as it is in paging.
3. Increased complexity in the operating system.
4. Increased hardware cost and processor overhead for address mapping.
5. There is difficulty in managing variable-sized segments on secondary storage.
6. The maximum size of a segment is limited by the size of main memory.
Differentiate between Paging and Segmentation
• In paging, the main memory is partitioned into page frames (or blocks); in segmentation, the main memory is partitioned into segments.
• In paging, the logical address space is divided into pages by the compiler or MMU (Memory Management Unit); in segmentation, the logical address space is divided into segments as specified by the user/programmer.
• The OS maintains a page map table for mapping between frames and pages; for segmentation, the OS maintains a segment map table.
• Paging suffers from internal fragmentation or page breaks; segmentation suffers from external fragmentation.
Differentiate between Paging and Segmentation
• Paging does not support the user’s view of memory; segmentation supports the user’s view of memory.
• In paging, the processor uses a page number and offset to calculate the absolute address; in segmentation, the processor uses a segment number and offset.
• Paging is invisible to the user; segmentation is visible to the user.
• Paging is faster than segmentation.
Compaction
• It is possible that not all pages of the program were brought into memory. Some pages are
loaded into memory and some pages are kept on the disk.
• To distinguish the pages that are in memory from the pages that are on the disk, a valid-invalid bit is provided. Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit.
• The bit is set to valid if the associated page is in memory.
• But what happens if the process tries to access a page that was not brought into memory?
Access to a page marked invalid causes a Page Fault.
• A page fault occurs when a program accesses a page that has been mapped in address
space, but has not been loaded in the physical memory. When the page (data) requested by
a program is not available in the memory, it is called as a page fault.
Page table when some pages are not in main memory
Demand Paging
• The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space that was previously occupied by instructions that are no longer needed.
• For example, consider a two-pass assembler. During pass-1 it constructs a symbol table, and then during pass-2 it generates machine language code.
• We may be able to partition such an assembler into pass-1 code, pass-2 code, the symbol table, and common support routines used by both pass-1 and pass-2.
Overlays
• When the processor needs to execute a page, and if that page is not available in main
memory then this situation is called page fault.
• To bring the required page into main memory when no space is available, we need to remove a page from main memory to make room for the new page that is to be executed.
• When a page fault occurs, the operating system has to choose a page to remove from
memory to make room for the page that has to be brought in. This is known as page
replacement.
Steps in Page Replacement
Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. Calculate page fault for three and
four frames.
The number of faults for four frames (10) is greater than the number of faults for three frames (9). This unexpected result is known as Belady’s anomaly.
FIFO (First In First Out) Page Replacement Algorithm
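A minimal FIFO page-fault counter reproduces the counts above for the example reference string:

```python
from collections import deque

def fifo_faults(refs, frames):
    memory = deque()                   # oldest page at the left
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()       # evict the oldest page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9
print(fifo_faults(refs, 4))  # -> 10 (Belady's anomaly)
```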
• An optimal page replacement algorithm has the lowest page fault rate of all algorithms and
would never suffer from Belady’s anomaly.
• Optimal replacement algorithm states replace that page which will not be used for the
longest period of time.
• Consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.
Optimal Page Replacement Algorithm
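The optimal (look-ahead) strategy can be sketched as below; since future references are consulted, it is realizable only in simulation. For the reference string above with three frames it gives 9 faults:

```python
def optimal_faults(refs, frames):
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            # evict the page whose next use is farthest away (or never occurs)
            future = refs[i + 1:]
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future else len(future))
            memory[memory.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # -> 9
```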
• If we use the recent past as an approximation of the near future, then we would replace
that page which has not been used for the longest period of time. This is the least recently
used algorithm.
• LRU replacement associates with each page the time of its last use. When a page is to be
replaced, LRU chooses that page which has not been used for the longest period of time.
• We can think of this strategy as the optimal page-replacement algorithm looking backward
in time, rather than forward.
LRU (Least Recently Used) Page Replacement Algorithm
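A minimal LRU sketch, keeping the most recently used page at the tail of a list; on the same reference string with three frames it gives 12 faults:

```python
def lru_faults(refs, frames):
    memory = []                 # most recently used page at the end
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)   # evict the least recently used page
        memory.append(page)     # mark `page` as most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # -> 12
```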
Q. Explain FIFO (First In First Out) page replacement algorithm for reference string
7012030423103. (4m)
Sol: FIFO replacement associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen. The algorithm maintains a FIFO queue holding all pages in memory: we replace the page at the head of the queue, and when a page is brought into memory we insert it at the tail of the queue.
Consider three frames are available.
Thank You
Vijay Patil
Department of Computer Engineering (NBA Accredited)
Vidyalankar Polytechnic
Vidyalankar College Marg, Wadala(E), Mumbai 400 037
E-mail: vijay.patil@vpt.edu.in