Unit 4 Ca213
MEMORY MANAGEMENT
Memory management is the functionality of an operating system which handles or
manages primary memory and moves processes back and forth between main
memory and disk during execution.
Memory management keeps track of each and every memory location, regardless of
whether it is allocated to a process or free.
It checks how much memory is to be allocated to processes.
It decides which process will get memory at what time.
It tracks when memory is freed or deallocated and updates the status accordingly.
Logical and Physical Address
The terms "logical address space" and "physical address space" are commonly used in computer
systems and operating systems to describe different aspects of memory management. Here's a brief
explanation of each:
Logical address: An address generated by the CPU during program execution. The set of all
logical addresses generated by a program forms its logical address space.
Physical address: An address of an actual location in main memory. The set of all physical
addresses corresponding to a program's logical addresses forms its physical address space.
To facilitate the mapping between the logical and physical address spaces, the operating system
uses a memory management unit (MMU) and techniques such as paging or segmentation.
The MMU translates logical addresses into physical addresses, allowing the program to access the
corresponding physical memory locations.
There are two Memory Management Techniques: Contiguous, and Non-Contiguous.
In the Contiguous Technique, an executing process must be loaded entirely into main
memory. In contiguous memory allocation, each process is contained in a single
contiguous block of memory. Memory is divided into several fixed-size partitions,
and each partition contains exactly one process.
Contiguous Technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning
Partitioning
Partitioning in memory management refers to the division of the computer's physical memory into
fixed-size or variable-size partitions. Each partition is used to allocate and manage different
processes or programs. Static partitioning and dynamic partitioning are the two common
approaches used to divide the available memory into partitions that accommodate multiple
processes. Here's an explanation of each approach:
Static Partitioning:
Static partitioning, also known as fixed partitioning, involves dividing the available memory into
fixed-size partitions or regions in advance. Each partition is assigned to a specific process or job,
and the size of each partition remains constant throughout the execution of the system. Each
process is allocated a fixed partition, and it cannot exceed the allocated size.
Key characteristics of static partitioning include:
Fixed partition sizes: The memory is divided into fixed-size partitions, which are typically
determined during system boot or initialization.
Internal fragmentation: Since each partition has a fixed size, a process smaller than its
partition leaves the remaining space inside the partition unused; this wasted space is
internal fragmentation.
External fragmentation: Over time, as processes are loaded and removed, free memory
blocks become scattered, leading to external fragmentation. This fragmentation occurs
when the total amount of free memory is sufficient to satisfy a request but is not contiguous.
Inefficient memory utilization: Static partitioning can lead to inefficient memory utilization
because partitions may not always be fully utilized, resulting in wasted memory space.
Static partitioning is commonly used in systems with simple memory management requirements
and a small number of fixed-size processes.
Dynamic Partitioning:
Dynamic partitioning, also known as variable partitioning, creates partitions at load time, with
each partition sized to fit the process it holds. It offers better memory utilization than fixed
partitioning but involves more complex memory management algorithms to allocate and
deallocate memory dynamically. Some common algorithms used with dynamic partitioning
include:
First Fit: Allocates the first available partition that is large enough to hold the process.
Best Fit: Allocates the smallest partition that is large enough to hold the process,
minimizing wasted memory.
Worst Fit: Allocates the largest available partition, leaving behind larger unallocated spaces
for future allocations.
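The three placement strategies can be sketched as follows (an illustrative sketch, not from the notes; the hole list and request size are made up, with each free hole given as a `(start, size)` pair):

```python
# Free holes in memory, each as (start_address, size). Values are invented.
holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]

def first_fit(holes, size):
    """Return the first hole large enough for the request."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None

def best_fit(holes, size):
    """Return the smallest hole that still fits, minimizing leftover space."""
    candidates = [h for h in holes if h[1] >= size]
    return min(candidates, key=lambda h: h[1]) if candidates else None

def worst_fit(holes, size):
    """Return the largest hole, leaving the biggest leftover space."""
    candidates = [h for h in holes if h[1] >= size]
    return max(candidates, key=lambda h: h[1]) if candidates else None

print(first_fit(holes, 150))   # (200, 500): first hole that fits
print(best_fit(holes, 150))    # (800, 200): smallest hole that fits
print(worst_fit(holes, 150))   # (200, 500): largest hole overall
```

Note how the same 150-unit request lands in different holes under each strategy, which is exactly why their fragmentation behavior differs.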
Key characteristics of dynamic partitioning include:
Variable partition sizes: Partitions are created and resized dynamically based on the
memory needs of the processes.
Dynamic partitioning is commonly used in modern operating systems where memory
allocation and deallocation occur frequently and memory needs of processes vary
dynamically.
Both static partitioning and dynamic partitioning have their advantages and limitations, and
the choice of approach depends on the specific requirements and characteristics of the system.
Compaction
Compaction is a memory management technique used to reduce external fragmentation in
dynamic memory allocation systems. External fragmentation occurs when free memory blocks
are scattered throughout the memory, making it difficult to allocate larger contiguous blocks
of memory to processes.
The main idea behind compaction is to rearrange the occupied and free memory blocks in
order to create larger contiguous free memory blocks. This can be achieved by moving
processes in memory to eliminate the gaps between them. The goal is to compact the memory
space and create a large enough continuous block of free memory that can be allocated to new
processes.
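The idea can be shown with a toy model (a sketch, not from the notes; memory size and block layout are invented, with each allocated block given as `(process, start, size)`):

```python
MEMORY_SIZE = 1000  # total memory units (assumed for the example)

def compact(blocks):
    """Slide allocated blocks toward address 0 so that all free space
    coalesces into one contiguous hole at the end of memory."""
    next_free = 0
    relocated = []
    for name, _start, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, next_free, size))  # move block down
        next_free += size
    free_hole = (next_free, MEMORY_SIZE - next_free)
    return relocated, free_hole

# Three processes with scattered gaps between them.
blocks = [("P1", 0, 100), ("P2", 300, 200), ("P3", 700, 150)]
layout, hole = compact(blocks)
print(layout)  # [('P1', 0, 100), ('P2', 100, 200), ('P3', 300, 150)]
print(hole)    # (450, 550): one contiguous free block of 550 units
```

Before compaction the free space was split into three gaps; afterwards it is a single 550-unit hole that can satisfy a large request. In a real system each move also requires updating the relocated process's base register.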
Linked List (Free-Space Management)
In this approach, the free disk blocks are linked together, i.e. each free block contains a pointer
to the next free block. The block number of the very first free block is stored at a separate
location on disk and is also cached in memory.
In Figure-2, the free-space list head points to Block 5, which points to Block 6, the next free
block, and so on. The last free block contains a null pointer indicating the end of the free list.
A drawback of this method is the I/O required to traverse the free-space list.
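The traversal can be sketched as follows (a hypothetical model: a dict stands in for the pointers stored on disk, and the block numbers echo the Figure-2 example):

```python
# next_free[b] is the pointer stored inside free block b; None ends the list.
next_free = {5: 6, 6: 9, 9: 12, 12: None}
head = 5  # block number of the first free block, cached in memory

def traverse(head, next_free):
    """Walk the free list. Each step here is one dict lookup, but on a real
    disk each step costs one block read, which is the method's drawback."""
    blocks = []
    b = head
    while b is not None:
        blocks.append(b)
        b = next_free[b]
    return blocks

print(traverse(head, next_free))  # [5, 6, 9, 12]
```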
Paging
Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory.
This scheme permits the physical address space of a process to be non-contiguous.
Paging is a fixed-size partitioning scheme.
In paging, secondary memory and main memory are divided into equal fixed-size
partitions.
The partitions of secondary memory are called pages.
The partitions of main memory are called frames.
Each process is divided into parts, where the size of each part is the same as the
page size. The size of the last part may be less than the page size.
The pages of a process are stored in the frames of main memory depending upon
their availability.
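The split of a process into pages, and the waste in the last page, can be worked out with a little arithmetic (the page size and process size below are assumptions for the example):

```python
import math

PAGE_SIZE = 4096          # bytes per page (and per frame); assumed value
process_size = 10_000     # bytes; assumed value

# Number of pages the process occupies, rounding the last partial page up.
num_pages = math.ceil(process_size / PAGE_SIZE)

# Bytes actually used in the last page; the rest of that frame is wasted.
last_part = process_size % PAGE_SIZE or PAGE_SIZE
internal_fragmentation = PAGE_SIZE - last_part

print(num_pages)               # 3 pages
print(last_part)               # 1808 bytes used in the last page
print(internal_fragmentation)  # 2288 bytes wasted in the last frame
```

This wasted space inside the last frame is the internal fragmentation that the Disadvantages list below refers to.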
The hardware implementation of the page table can be done by using dedicated
registers, but the use of registers for the page table is satisfactory only if the page
table is small.
If the page table contains a large number of entries, we can use a TLB (Translation
Look-aside Buffer), a special, small, fast look-up hardware cache.
1. The TLB is associative, high-speed memory.
2. Each entry in the TLB consists of two parts: a tag and a value.
3. When this memory is used, an item is compared with all tags
simultaneously. If the item is found, the corresponding value is returned.
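A dict makes a convenient stand-in for the TLB's tag/value pairs (purely illustrative; the page and frame numbers are invented, and a real TLB compares all tags in parallel in hardware):

```python
tlb = {3: 9, 7: 1, 12: 4}       # tag (page number) -> value (frame number)

def lookup(page, page_table):
    if page in tlb:             # TLB hit: translation found in the fast cache
        return tlb[page]
    frame = page_table[page]    # TLB miss: fall back to the page table in memory
    tlb[page] = frame           # cache the translation for next time
    return frame

# A made-up page table for a 32-page process.
page_table = {i: i % 16 for i in range(32)}
print(lookup(7, page_table))    # hit -> 1
print(lookup(5, page_table))    # miss: read page table, then cached -> 5
```

A hit avoids the extra memory access that a page-table walk costs, which is exactly the overhead listed under the disadvantages of paging below.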
Advantages-
The advantages of paging are-
It allows parts of a single process to be stored in a non-contiguous fashion.
It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of paging are-
It suffers from internal fragmentation.
There is an overhead of maintaining a page table for each process.
The time taken to fetch the instruction increases since now two memory accesses
are required.
Page Table-
A page table is a data structure.
It maps the page number referenced by the CPU to the frame number where that
page is stored.
Characteristics-
The page table is stored in main memory.
Number of entries in a page table = number of pages into which the process is
divided.
Each process has its own independent page table.
The Page Table Base Register (PTBR) contains the base address of the page table.
The base address of the page table is added to the page number referenced by the
CPU. This gives the entry of the page table containing the frame number where
the referenced page is stored.
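The translation just described can be sketched in a few lines (the page size and page-table contents are assumptions for the example; a real MMU does this in hardware):

```python
PAGE_SIZE = 1024                    # assume 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (made up)

def translate(logical_address):
    """Split a logical address into page number and offset, look up the
    frame number, and rebuild the physical address."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]  # one memory access in hardware
    return frame_number * PAGE_SIZE + offset

print(translate(2500))  # page 2, offset 452 -> frame 7 -> 7*1024 + 452 = 7620
```

The offset is copied through unchanged; only the page number is rewritten into a frame number, which is why pages and frames must be the same size.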
Page Table Entry format:
A page table entry contains several pieces of information about the page.
The information contained in a page table entry varies from operating system to
operating system.
The most important fields in a page table entry are the frame number, present/absent
bit, protection bit, reference bit, and dirty bit (modified bit).
Page Replacement: If there are no free frames available in the physical memory, the operating
system needs to free up a frame by selecting a page to be replaced. This involves swapping out a
page from the physical memory to make room for the requested page. The choice of the page
replacement algorithm determines which page will be selected for replacement.
Translation Lookaside Buffer (TLB): To speed up the translation process, a special cache called
the Translation Lookaside Buffer (TLB) is used. The TLB stores recently used page table entries,
allowing for faster address translation.
Paging provides several advantages:
Simplified memory management: Pages can be easily allocated and deallocated, allowing
for efficient use of physical memory.
Increased flexibility: Paging allows processes to have a larger logical address space than
the available physical memory.
Protection and isolation: Each process has its own page table, providing memory protection
and isolation between processes.
Efficient memory allocation: The use of fixed-size pages reduces external fragmentation
compared to other memory allocation techniques.
However, paging also introduces overhead due to the need for page table lookups and
potential page faults, which can impact system performance.
Demand Paging
Demand paging is a memory management technique used in operating systems to optimize the use
of physical memory by loading pages into memory only when they are required. Instead of loading
the entire process into memory at once, as in traditional paging, demand paging brings in pages
on-demand, based on the specific memory references made by the process.
Initial Loading: When a process is first executed, only a small portion of it, typically the initial
set of pages needed to start execution, is loaded into memory. This initial set is often referred to as
the "working set" of the process.
Page Fault: When a process accesses a memory location that is not currently in physical memory
(i.e., a page fault occurs), the operating system intervenes to handle the fault. It locates the required
page, if it exists in secondary storage (such as the disk), and brings it into an available physical
frame in memory.
Page Replacement: If there are no available free frames in memory, the operating system selects
a page to be replaced using a page replacement algorithm. The selected page is written back to
secondary storage if it has been modified, and the requested page is brought in and mapped to a
free frame.
Page Validity: Each page has a validity bit associated with it in the page table. The validity bit
indicates whether a page is currently present in physical memory or not. On a page fault, the
validity bit is checked, and if it is set to invalid, the page is loaded into memory; otherwise, the
page is already in memory and can be accessed directly.
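The steps above, checking the validity bit, loading on a fault, and replacing a victim when no frame is free, can be sketched as a toy handler (everything here is invented for illustration; the replacement policy is simple FIFO):

```python
from collections import deque

NUM_FRAMES = 2             # assumed tiny memory to force replacements
valid = {}                 # page -> True if its validity bit is set
loaded = deque()           # resident pages in FIFO (arrival) order
fault_count = 0

def access(page):
    """Reference a page; on a page fault, load it, evicting if necessary."""
    global fault_count
    if valid.get(page):            # validity bit set: page already resident
        return "hit"
    fault_count += 1               # page fault: bring the page in from disk
    if len(loaded) == NUM_FRAMES:  # no free frame: replace the oldest page
        victim = loaded.popleft()
        valid[victim] = False      # victim would be written back if dirty
    loaded.append(page)
    valid[page] = True
    return "fault"

for p in [0, 1, 0, 2, 1]:
    access(p)
print(fault_count)  # 3 faults: pages 0, 1, 2; the repeated 0 and 1 hit
```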
Demand paging provides several benefits:
Reduced Memory Footprint: Only the pages that are actively used by a process are loaded
into memory, reducing the overall memory requirements. This allows for efficient memory
utilization, especially for processes with large address spaces.
Faster Process Startup: By loading only the essential pages initially, the process can start
execution faster. It eliminates the need to load the entire process into memory before
execution begins.
Efficient Memory Usage: Demand paging allows for effective use of physical memory
resources by bringing in pages only when needed. It helps in avoiding unnecessary memory
wastage.
However, demand paging also has some drawbacks:
Page Fault Overhead: Page faults incur additional overhead due to the need to bring in
pages from secondary storage, resulting in increased response times and potentially
slower execution.
Thrashing: If the demand for pages exceeds the available physical memory, and the
system spends most of its time swapping pages in and out, it can lead to a phenomenon
called thrashing. Thrashing significantly degrades system performance.
Common page replacement algorithms include:
First-In, First-Out (FIFO): This algorithm replaces the page that has been in memory the
longest. It maintains a queue of pages and evicts the page at the front of the queue when a page
fault occurs.
Least Recently Used (LRU): This algorithm replaces the page that has not been used for the
longest period of time. It requires tracking the usage history of each page and updating it each
time a page is accessed. The page with the oldest access timestamp is selected for replacement.
Optimal Page Replacement (OPT): This theoretical algorithm replaces the page that will not
be used for the longest duration in the future. It requires knowledge of future memory
references, which is usually not available in practice. OPT is used as a benchmark to measure
the performance of other algorithms.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames, using Optimal page replacement.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page
faults.
0 is already there so —> 0 page fault.
When 3 comes it will take the place of 7 because 7 is not used for the longest duration of time
in the future —> 1 page fault.
0 is already there so —> 0 page fault.
4 will take the place of 1 —> 1 page fault.
Now for the remaining page reference string —> 0 page faults because the pages are already
available in memory.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.
3. Least Recently Used –
In this algorithm page will be replaced which is least recently used.
Example-3Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page
faults.
0 is already there so —> 0 page fault.
When 3 comes it will take the place of 7 because 7 is least recently used —> 1 page fault.
0 is already in memory so —> 0 page fault.
4 will take the place of 1 —> 1 page fault.
Now for the remaining page reference string —> 0 page faults because the pages are already
available in memory.
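The fault counts in the worked examples can be checked with a small simulator (a sketch, not part of the notes); it reproduces the 7/6/6 counts for Example-3's reference string with 4 frames, and the same function answers the practice question that follows:

```python
def page_faults(refs, frames, policy):
    """Count page faults for FIFO, LRU, or OPT on a reference string."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":              # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            if policy in ("FIFO", "LRU"):    # head = oldest / least recent
                memory.pop(0)
            else:                            # OPT: evict the page whose next
                future = refs[i + 1:]        # use is farthest away (or never)
                victim = max(memory,
                             key=lambda p: future.index(p) if p in future
                             else float("inf"))
                memory.remove(victim)
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(page_faults(refs, 4, "FIFO"))  # 7
print(page_faults(refs, 4, "LRU"))   # 6
print(page_faults(refs, 4, "OPT"))   # 6

q = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
print(page_faults(q, 3, "OPT"))      # 5
print(page_faults(q, 3, "FIFO"))     # 6
print(page_faults(q, 3, "LRU"))      # 6
```

For FIFO the list is kept in arrival order (hits do not reorder it), while for LRU every hit moves the page to the tail, so in both cases the head of the list is the victim.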
Q. Consider the reference string 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The number of frames in
memory is 3. Find the number of page faults under:
1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. LRU Page Replacement Algorithm
Optimal Page Replacement Algorithm
LRU Page Replacement Algorithm
Thrashing
Thrashing refers to a situation in computer systems where the system spends a significant
amount of time and resources continuously swapping pages between physical memory and
disk, instead of executing useful tasks. It occurs when the system is under a high memory load
and is unable to allocate enough physical memory to meet the demands of running processes.
When thrashing occurs, the system's performance severely degrades, and the response time
for executing tasks becomes excessively long. The CPU is occupied with swapping pages in
and out of memory, resulting in minimal time spent on actual processing. This situation can be
detrimental to overall system efficiency. Common causes of thrashing include:
Insufficient physical memory: When there is not enough physical memory to hold all
the required pages for running processes, frequent swapping occurs, leading to
thrashing.
Poor process scheduling: If the system scheduler does not allocate sufficient CPU time
to processes, they may not be able to make progress in their execution, leading to
increased page faults and thrashing.
Segmentation:
Segmentation is a memory management technique in which each job is divided
into several segments of different sizes, one for each module that contains pieces
that perform related functions.
Each segment is actually a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into
non-contiguous memory, though every segment is loaded into a contiguous block
of available memory.
Segmentation works very similarly to paging, but here segments are of
variable length, whereas in paging the pages are of fixed size.
The operating system maintains a segment map table for every process and a list
of free memory blocks along with segment numbers, their size and
corresponding memory locations in main memory.
For each segment, the table stores the starting address of the segment and the
length of the segment.
A reference to a memory location includes a value that identifies a segment and
an offset.
Segment Table – It maps the two-dimensional logical address into a one-
dimensional physical address. Each table entry has:
1. Base Address: It contains the starting physical address where the segment
resides in memory.
2. Limit: It specifies the length of the segment.
Address generated by the CPU is divided into:
Segment number (s): Number of bits required to represent the segment.
Segment offset (d): Number of bits required to represent the size of the
segment.
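Segmented translation can be sketched as follows (the segment table contents are invented for the example): the offset is checked against the segment's limit before being added to its base.

```python
# segment number -> (base address, limit). Values are made up.
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def seg_translate(segment, offset):
    """Map a (segment, offset) pair to a physical address, trapping when
    the offset falls outside the segment."""
    base, limit = segment_table[segment]
    if offset >= limit:  # protection check done by hardware in practice
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 53))   # 4300 + 53 = 4353
print(seg_translate(1, 399))  # 6300 + 399 = 6699
```

An offset of 400 into segment 1 would raise the error, which is the hardware trap that gives "segmentation fault" its name.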
Advantages of Segmentation –
No Internal fragmentation.