
Unit 4 Ca213

The document discusses memory management in operating systems, detailing its functions, including tracking memory allocation and managing logical and physical address spaces. It covers memory management techniques such as contiguous and non-contiguous allocation, static and dynamic partitioning, and paging, along with their advantages and disadvantages. Additionally, it explains demand paging and page replacement algorithms, emphasizing their roles in optimizing memory usage and system performance.


DEPARTMENT OF COMPUTER APPLICATION

PRINCIPLES OF OPERATING SYSTEM


UNIT 4

MEMORY MANAGEMENT

Memory Management
 Memory management is the functionality of an operating system that handles or
manages primary memory and moves processes back and forth between main
memory and disk during execution.
 Memory management keeps track of each and every memory location, regardless of
whether it is allocated to some process or free.
 It determines how much memory is to be allocated to each process.
 It decides which process will get memory at what time.
 It tracks whenever some memory gets freed or unallocated and updates the
status accordingly.
Logical and Physical Address

The terms "logical address space" and "physical address space" are commonly used in computer
systems and operating systems to describe different aspects of memory management. Here's a brief
explanation of each:

Logical Address Space:


The logical address space refers to the virtual memory addresses that a process or program uses. It
is the range of addresses that a program can reference or access, without considering the underlying
physical memory organization. The logical address space provides an abstraction for the program,
allowing it to access a larger address range than what is physically available. Each process has its
own logical address space, which is typically divided into pages or segments.

Physical Address Space:


The physical address space represents the actual physical memory addresses in the computer's
memory system. It refers to the physical memory locations where data and instructions are stored.
Unlike the logical address space, the physical address space corresponds to the real physical
memory chips or storage devices.

To facilitate the mapping between the logical and physical address spaces, the operating system
uses a memory management unit (MMU) and various techniques such as paging or segmentation.
The MMU translates logical addresses into physical addresses, allowing the program to access the
corresponding physical memory locations.

There are two Memory Management Techniques: Contiguous, and Non-Contiguous.
 In the Contiguous Technique, an executing process must be loaded entirely into
main memory. In contiguous memory allocation, each process is contained in a single
contiguous block of memory. Memory is divided into several fixed-size partitions.
 Each partition contains exactly one process.
 Contiguous Technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning

Partitioning
Partitioning in memory management refers to the division of the computer's physical memory into
fixed-size or variable-size partitions. Each partition is used to allocate and manage different
processes or programs. There are two common types of partitioning in memory management:

Static partitioning and dynamic partitioning are two approaches used in memory management for
dividing the available memory into partitions to accommodate multiple processes. Here's an
explanation of each approach:

Static Partitioning:
Static partitioning, also known as fixed partitioning, involves dividing the available memory into
fixed-size partitions or regions in advance. Each partition is assigned to a specific process or job,
and the size of each partition remains constant throughout the execution of the system. Each
process is allocated a fixed partition, and it cannot exceed the allocated size.

Key characteristics of static partitioning include:

 Fixed partition sizes: The memory is divided into fixed-size partitions, which are typically
determined during system boot or initialization.

 Internal fragmentation: Since each partition has a fixed size, a process smaller
than its partition wastes the leftover space inside that partition.

 External fragmentation: Over time, as processes are loaded and removed, free memory
blocks become scattered, leading to external fragmentation. This fragmentation occurs
when the total amount of free memory is sufficient to satisfy a request but is not contiguous.

 Inefficient memory utilization: Static partitioning can lead to inefficient memory utilization
because partitions may not always be fully utilized, resulting in wasted memory space.

Static partitioning is commonly used in systems with simple memory management requirements
and a small number of fixed-size processes.

Advantages of Fixed Partitioning –


1. Easy to implement
2. Little OS overhead
Disadvantages of Fixed Partitioning –
 Internal Fragmentation
 External Fragmentation
 Limit on process size
 Limitation on Degree of Multiprogramming

Variable Partitioning Or Dynamic Partitioning:


Dynamic partitioning, also known as variable partitioning, is an approach where memory is
divided into variable-sized partitions based on the size of the processes being executed. Unlike
static partitioning, the partition sizes are not predetermined and can vary depending on the memory
requirements of the processes.

Dynamic partitioning offers better memory utilization compared to fixed partitioning but involves
more complex memory management algorithms to allocate and deallocate memory dynamically.
Some common algorithms used with dynamic partitioning include:

 First Fit: Allocates the first available partition that is large enough to hold the process.

 Best Fit: Allocates the smallest partition that is large enough to hold the process,
minimizing wasted memory.

 Worst Fit: Allocates the largest available partition, leaving behind larger unallocated spaces
for future allocations.
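These three placement strategies can be sketched in a few lines of Python. This is an illustrative sketch only; the hole sizes used below are hypothetical, not taken from the notes.

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that still fits, or None."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, if it fits, or None."""
    hole, i = max((hole, i) for i, hole in enumerate(holes))
    return i if hole >= size else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB (assumed)
print(first_fit(holes, 212))  # 1 (the 500 KB hole)
print(best_fit(holes, 212))   # 3 (the 300 KB hole)
print(worst_fit(holes, 212))  # 4 (the 600 KB hole)
```

Note how Best Fit leaves the smallest leftover (88 KB) while Worst Fit deliberately leaves the largest (388 KB) for future requests.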
Key characteristics of dynamic partitioning include:

 Variable partition sizes: Partitions are created and resized dynamically based on the
memory needs of the processes.

 Memory fragmentation: Dynamic partitioning can lead to external fragmentation,
which occurs when free memory blocks are scattered and not contiguous.
 It is a part of Contiguous allocation technique.
 It is used to alleviate the problem faced by Fixed Partitioning.
 In contrast with fixed partitioning, partitions are not made before execution or
during system configuration.
 Various features associated with variable Partitioning-
1. Initially the RAM is empty and partitions are made during run-time
according to the process's need instead of during system configuration.
2. The size of a partition is equal to the size of the incoming process.
3. The partition size varies according to the need of the process, so
internal fragmentation is avoided and RAM is utilised efficiently.
4. The number of partitions in RAM is not fixed and depends on the number of
incoming processes and the size of main memory.

Advantages of Variable Partitioning


1. No Internal Fragmentation
2. No restriction on Degree of Multiprogramming
3. No Limitation on the size of the process
Disadvantages of Variable Partitioning
1. Difficult Implementation
2. External Fragmentation

Dynamic partitioning is commonly used in modern operating systems where memory
allocation and deallocation occur frequently and memory needs of processes vary
dynamically.
Both static partitioning and dynamic partitioning have their advantages and limitations, and
the choice of approach depends on the specific requirements and characteristics of the system.

Compaction
Compaction is a memory management technique used to reduce external fragmentation in
dynamic memory allocation systems. External fragmentation occurs when free memory blocks
are scattered throughout the memory, making it difficult to allocate larger contiguous blocks
of memory to processes.

The main idea behind compaction is to rearrange the occupied and free memory blocks in
order to create larger contiguous free memory blocks. This can be achieved by moving
processes in memory to eliminate the gaps between them. The goal is to compact the memory
space and create a large enough continuous block of free memory that can be allocated to new
processes.
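The idea can be shown with a minimal sketch, assuming memory is modelled as a list of (process, size) blocks where `None` marks a hole (the layout and sizes below are hypothetical):

```python
def compact(memory):
    """Slide allocated blocks to the low end; merge all free space into one hole."""
    allocated = [blk for blk in memory if blk[0] is not None]
    free = sum(size for name, size in memory if name is None)
    return allocated + ([(None, free)] if free else [])

# Two scattered 50 KB and 120 KB holes become one 170 KB hole at the top.
mem = [("P1", 100), (None, 50), ("P2", 200), (None, 120), ("P3", 80)]
print(compact(mem))  # [('P1', 100), ('P2', 200), ('P3', 80), (None, 170)]
```

In a real system each moved process also needs its base register (or page/segment table) updated, which is what makes compaction expensive.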

Free space management techniques in Operating System


 The system keeps track of the free disk blocks for allocating space to files when
they are created.
 Also, to reuse the space released from deleting the files, free space management
becomes crucial.
 The system maintains a free space list which keeps track of the disk blocks that
are not allocated to some file or directory.
 The free space list can be implemented mainly as:
1. Bitmap or Bit vector:
 A Bitmap or Bit Vector is a series or collection of bits where each bit corresponds
to a disk block.
 Each bit can take two values, 0 and 1: 0 indicates that the block is allocated and
1 indicates a free block.
 The given instance of disk blocks on the disk in Figure 1 (where green blocks
are allocated) can be represented by a bitmap of 16 bits as: 0000111000000110.
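The bitmap scheme can be sketched in Python using the 16-bit example above (0 = allocated, 1 = free, as in the notes):

```python
bitmap = "0000111000000110"  # the 16-block example from Figure 1

def free_blocks(bitmap):
    """Return the indices of all free disk blocks."""
    return [i for i, bit in enumerate(bitmap) if bit == "1"]

def first_free_run(bitmap, n):
    """Find the start of the first run of n contiguous free blocks, or None."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == "1" else 0
        if run == n:
            return i - n + 1
    return None

print(free_blocks(bitmap))        # [4, 5, 6, 13, 14]
print(first_free_run(bitmap, 3))  # 4
```

Finding contiguous runs like this is the bitmap's main advantage; real implementations scan a machine word at a time rather than bit by bit.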

2. Linked List
In this approach, the free disk blocks are linked together i.e. a free block contains a pointer to
the next free block. The block number of the very first disk block is stored at a separate
location on disk and is also cached in memory.

In Figure-2, the free space list head points to Block 5, which points to Block 6, the next free
block, and so on. The last free block contains a null pointer indicating the end of the free list.
A drawback of this method is the I/O required to traverse the free space list.
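A sketch of the traversal: each free block stores the number of the next free block, with `None` ending the list. The head (Block 5) and its successor (Block 6) follow Figure-2; the remaining block numbers are assumed for illustration.

```python
next_free = {5: 6, 6: 9, 9: 12, 12: None}  # on-disk "pointers" (9, 12 assumed)
head = 5                                    # block number cached in memory

def walk_free_list(head, next_free):
    """Traverse the free list, returning all free block numbers in order."""
    blocks = []
    while head is not None:
        blocks.append(head)
        head = next_free[head]   # one disk I/O per step in a real system
    return blocks

print(walk_free_list(head, next_free))  # [5, 6, 9, 12]
```

The comment on the pointer-chase line is the drawback in code form: every hop is a disk read.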

Paging
 Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory.
 This scheme permits the physical address space of a process to be non-contiguous.
 Paging is a fixed-size partitioning scheme.
 In paging, secondary memory and main memory are divided into equal fixed-size
partitions.
 The partitions of secondary memory are called pages.
 The partitions of main memory are called frames.

 Each process is divided into parts where size of each part is same as page size.
 The size of the last part may be less than the page size.
 The pages of process are stored in the frames of main memory depending upon
their availability.

Address generated by the CPU is divided into:


 Page number (p): Number of bits required to represent the pages in the Logical
Address Space, i.e. the page number.
 Page offset (d): Number of bits required to represent a particular word in a page,
i.e. the word number (offset) within a page.
Physical Address is divided into:
 Frame number (f): Number of bits required to represent the frames of the Physical
Address Space, i.e. the frame number.
 Frame offset (d): Number of bits required to represent a particular word in a frame,
i.e. the word number (offset) within a frame.
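A worked sketch of this split, under assumed sizes (16-bit logical addresses and 1 KB pages, so p is the top 6 bits and d the low 10 bits; the page-table contents are also assumed):

```python
PAGE_SIZE = 1024          # 2**10 bytes per page (assumed)
OFFSET_BITS = 10

def split_logical(addr):
    p = addr >> OFFSET_BITS        # page number: high-order bits
    d = addr & (PAGE_SIZE - 1)     # page offset: low-order bits
    return p, d

def physical_address(frame, d):
    return (frame << OFFSET_BITS) | d   # frame number + same offset

p, d = split_logical(0x2ACE)       # logical address 10958
print(p, d)                        # page 10, offset 718
page_table = {10: 3}               # assume page 10 lives in frame 3
print(physical_address(page_table[p], d))  # 3*1024 + 718 = 3790
```

The offset d passes through translation unchanged; only the page number is replaced by a frame number.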

 The hardware implementation of the page table can be done by using dedicated
registers.
 But the use of registers for the page table is satisfactory only if the page table is
small.
 If the page table contains a large number of entries, then we can use a TLB
(Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
1. The TLB is associative, high-speed memory.
2. Each entry in the TLB consists of two parts: a tag and a value.
3. When this memory is used, an item is compared with all tags
simultaneously. If the item is found, the corresponding value is returned.
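A toy sketch of the lookup path (a dict stands in for the associative memory; a real TLB compares all tags in parallel in hardware and has a fixed, small capacity):

```python
def translate(page, tlb, page_table, stats):
    """Return the frame for a page, counting TLB hits and misses."""
    if page in tlb:                # tag match: value comes straight from the TLB
        stats["hits"] += 1
        return tlb[page]
    stats["misses"] += 1
    frame = page_table[page]       # slower main-memory page-table walk
    tlb[page] = frame              # cache the translation for next time
    return frame

page_table = {0: 5, 1: 2, 2: 7}    # assumed mappings
tlb, stats = {}, {"hits": 0, "misses": 0}
for page in [0, 1, 0, 2, 0, 1]:
    translate(page, tlb, page_table, stats)
print(stats)  # {'hits': 3, 'misses': 3}
```

Because programs revisit the same pages (locality of reference), even this short trace already hits half the time.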

Advantages-
The advantages of paging are-
 It allows parts of a single process to be stored in a non-contiguous fashion.
 It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of paging are-
 It suffers from internal fragmentation.
 There is an overhead of maintaining a page table for each process.
 The time taken to fetch the instruction increases since now two memory accesses
are required.
Page Table-
 A page table is a data structure.
 It maps the page number referenced by the CPU to the frame number where that
page is stored.

Characteristics-
 The page table is stored in main memory.
 Number of entries in a page table = number of pages into which the process is
divided.
 Each process has its own independent page table.
 The Page Table Base Register (PTBR) holds the base address of the page table.
 The base address of the page table is added to the page number referenced by
the CPU.
 This gives the entry of the page table containing the frame number where the
referenced page is stored.
Page Table Entry format:
 A page table entry contains several pieces of information about the page.
 The information contained in a page table entry varies from operating system to
operating system.
 The most important information in a page table entry is the frame number, along
with the Present/Absent bit, Protection bit, Reference bit, and Dirty bit (Modified bit).
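One possible entry layout can be sketched by packing the flags into the low bits and the frame number above them. The bit positions here are assumed for illustration; real layouts differ by OS and architecture.

```python
# Flag bits (positions assumed, not from any real architecture)
PRESENT, PROTECTION, REFERENCE, DIRTY = 1 << 0, 1 << 1, 1 << 2, 1 << 3

def make_pte(frame, flags):
    """Pack a frame number and flag bits into one integer entry."""
    return (frame << 4) | flags            # frame number in the high bits

def frame_of(pte):
    return pte >> 4

pte = make_pte(42, PRESENT | REFERENCE)
print(frame_of(pte))          # 42
print(bool(pte & PRESENT))    # True  (page is resident)
print(bool(pte & DIRTY))      # False (page has not been modified)
```

The hardware sets the Reference and Dirty bits automatically on access and write; the OS reads and clears them when running its replacement algorithm.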

Page Replacement: If there are no free frames available in the physical memory, the operating
system needs to free up a frame by selecting a page to be replaced. This involves swapping out a
page from the physical memory to make room for the requested page. The choice of the page
replacement algorithm determines which page will be selected for replacement.

Translation Lookaside Buffer (TLB): To speed up the translation process, a special cache called
the Translation Lookaside Buffer (TLB) is used. The TLB stores recently used page table entries,
allowing for faster address translation.

Paging provides several advantages, including:

 Simplified memory management: Pages can be easily allocated and deallocated, allowing
for efficient use of physical memory.

 Increased flexibility: Paging allows processes to have a larger logical address space than
the available physical memory.

 Protection and isolation: Each process has its own page table, providing memory protection
and isolation between processes.

 Efficient memory allocation: The use of fixed-size pages reduces external fragmentation
compared to other memory allocation techniques.

However, paging also introduces overhead due to the need for page table lookups and
potential page faults, which can impact system performance.

Demand Paging
Demand paging is a memory management technique used in operating systems to optimize the use
of physical memory by loading pages into memory only when they are required. Instead of loading
the entire process into memory at once, as in traditional paging, demand paging brings in pages
on-demand, based on the specific memory references made by the process.

Here's how demand paging works:

Initial Loading: When a process is first executed, only a small portion of it, typically the initial
set of pages needed to start execution, is loaded into memory. This initial set is often referred to as
the "working set" of the process.

Page Fault: When a process accesses a memory location that is not currently in physical memory
(i.e., a page fault occurs), the operating system intervenes to handle the fault. It locates the required
page, if it exists in secondary storage (such as the disk), and brings it into an available physical
frame in memory.

Page Replacement: If there are no available free frames in memory, the operating system selects
a page to be replaced using a page replacement algorithm. The selected page is written back to
secondary storage if it has been modified, and the requested page is brought in and mapped to a
free frame.

Page Validity: Each page has a validity bit associated with it in the page table. The validity bit
indicates whether a page is currently present in physical memory or not. On a page fault, the
validity bit is checked, and if it is set to invalid, the page is loaded into memory; otherwise, the
page is already in memory and can be accessed directly.
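The fault path described in these steps can be sketched as follows. FIFO is used as the replacement policy purely for illustration, and a real handler would also perform the disk read and any dirty-page write-back that are only noted in comments here.

```python
from collections import deque

def access(page, frames, capacity, load_order):
    """Return 'hit' or 'fault' and update the resident set."""
    if page in frames:                 # validity bit set: page is resident
        return "hit"
    # Page fault: find a frame for the requested page.
    if len(frames) >= capacity:        # no free frame: pick a victim (FIFO)
        victim = load_order.popleft()  # (write victim back to disk if dirty)
        frames.remove(victim)
    frames.add(page)                   # read the page in from disk
    load_order.append(page)
    return "fault"

frames, order = set(), deque()
results = [access(p, frames, 3, order) for p in [1, 2, 1, 3, 4, 2]]
print(results.count("fault"))  # 4
```

The repeated accesses to pages 1 and 2 hit without any disk traffic, which is exactly the saving demand paging is after.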

Demand paging provides several benefits:

 Reduced Memory Footprint: Only the pages that are actively used by a process are loaded
into memory, reducing the overall memory requirements. This allows for efficient memory
utilization, especially for processes with large address spaces.

 Faster Process Startup: By loading only the essential pages initially, the process can start
execution faster. It eliminates the need to load the entire process into memory before
execution begins.

 Efficient Memory Usage: Demand paging allows for effective use of physical memory
resources by bringing in pages only when needed. It helps in avoiding unnecessary memory
wastage.

 Increased System Responsiveness: Demand paging can improve system responsiveness by
prioritizing the loading of pages that are currently in use. This ensures that the most
frequently accessed pages are kept in memory, minimizing page faults.

However, demand paging also has some drawbacks:

 Page Fault Overhead: Page faults incur additional overhead due to the need to bring in
pages from secondary storage, resulting in increased response times and potentially
slower execution.

 Thrashing: If the demand for pages exceeds the available physical memory, and the
system spends most of its time swapping pages in and out, it can lead to a phenomenon
called thrashing. Thrashing significantly degrades system performance.

Page Replacement Algorithm


A page replacement algorithm is used in demand-paged virtual memory systems to determine
which page should be evicted from physical memory when a page fault occurs and there are
no free frames available. The goal of a page replacement algorithm is to minimize the number
of page faults and optimize the overall system performance.

Here are some commonly used page replacement algorithms:

First-In, First-Out (FIFO): This algorithm replaces the page that has been in memory the
longest. It maintains a queue of pages and evicts the page at the front of the queue when a page
fault occurs.

Least Recently Used (LRU): This algorithm replaces the page that has not been used for the
longest period of time. It requires tracking the usage history of each page and updating it each
time a page is accessed. The page with the oldest access timestamp is selected for replacement.

Optimal Page Replacement (OPT): This theoretical algorithm replaces the page that will not
be used for the longest duration in the future. It requires knowledge of future memory
references, which is usually not available in practice. OPT is used as a benchmark to measure
the performance of other algorithms.

Page Replacement Algorithms :

1. First In First Out (FIFO)


 This is the simplest page replacement algorithm.
 In this algorithm, the operating system keeps track of all pages in memory in
a queue, with the oldest page at the front of the queue.
 When a page needs to be replaced, the page at the front of the queue is selected for
removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots
—> 3 Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1
—> 1 Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3
—> 1 Page Fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 Page Fault.

Total Page Faults = 6
Hence Page Fault Ratio = 6/7 ≈ 0.86
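The FIFO policy can be replayed in code; running it on the seven-reference trace of Example-1 confirms the count of 6 faults:

```python
from collections import deque

def fifo_faults(refs, capacity):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) >= capacity:       # evict the oldest resident page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6
```

The same function answers the practice question later in this section: `fifo_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3)` gives 6.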

2. Optimal Page replacement


In this algorithm, the page that will not be used for the longest duration of time in the future
is replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots
—> 4 Page Faults.
0 is already there so —> 0 Page Fault.
When 3 comes it takes the place of 7 because 7 is not used for the longest duration of time
in the future —> 1 Page Fault.
0 is already there so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.

For the rest of the page reference string —> 0 Page Faults because the pages are already
available in memory.

Total Page Faults = 6
Hence Page Fault Ratio = 6/13 ≈ 0.46

Optimal page replacement is perfect, but not possible in practice, as the operating system
cannot know future requests. The use of Optimal Page Replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.
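Although OPT needs future knowledge, it is easy to simulate offline by scanning ahead in the reference string; replaying Example-2 confirms the 6 faults counted above:

```python
def opt_faults(refs, capacity):
    """Count page faults under Optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) >= capacity:
            # Evict the resident page whose next use is farthest away
            # (pages never used again count as infinitely far).
            def next_use(q):
                future = refs[i + 1:]
                return future.index(q) if q in future else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(opt_faults(refs, 4))  # 6
```

On the practice question below, `opt_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3)` gives the expected 5.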

3. Least Recently Used –
In this algorithm, the page that is least recently used is replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots
—> 4 Page Faults.
0 is already there so —> 0 Page Fault.
When 3 comes it takes the place of 7 because 7 is least recently used —> 1 Page Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults because the pages are already
available in memory.

Total Page Faults = 6
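LRU can be simulated by timestamping each access and evicting the page with the oldest timestamp; replaying Example-3 confirms the 6 faults:

```python
def lru_faults(refs, capacity):
    """Count page faults under Least Recently Used replacement."""
    frames, last_used, faults = set(), {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) >= capacity:
                lru = min(frames, key=last_used.get)  # oldest access time
                frames.remove(lru)
            frames.add(page)
        last_used[page] = i   # record the access, hit or fault
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))  # 6
```

On the practice question below, `lru_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3)` likewise gives 6.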
Q. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. the number of frames in the
memory is 3. Find out the number of page faults respective to:
1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. LRU Page Replacement Algorithm
Optimal Page Replacement Algorithm

Number of Page Faults in Optimal Page Replacement Algorithm = 5

LRU Page Replacement Algorithm

Number of Page Faults in LRU = 6

FIFO Page Replacement Algorithm

Number of Page Faults in FIFO = 6

Q. Given the page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.
Compare the number of page faults for the LRU, FIFO and Optimal page replacement
algorithms using 3 and 4 frames.

Thrashing

Thrashing refers to a situation in computer systems where the system spends a significant
amount of time and resources continuously swapping pages between physical memory and
disk, instead of executing useful tasks. It occurs when the system is under a high memory load
and is unable to allocate enough physical memory to meet the demands of running processes.

When thrashing occurs, the system's performance severely degrades, and the response time
for executing tasks becomes excessively long. The CPU is occupied with swapping pages in
and out of memory, resulting in minimal time spent on actual processing. This situation can be
detrimental to overall system efficiency.

Thrashing can happen due to various reasons, such as:

 Insufficient physical memory: When there is not enough physical memory to hold all
the required pages for running processes, frequent swapping occurs, leading to
thrashing.

 Poor process scheduling: If the system scheduler does not allocate sufficient CPU time
to processes, they may not be able to make progress in their execution, leading to
increased page faults and thrashing.

 Memory fragmentation: Fragmentation of physical memory can result in inefficient
memory allocation, causing excessive page swapping and thrashing.

Segmentation:
 Segmentation is a memory management technique in which each job is divided
into several segments of different sizes, one for each module that contains pieces
that perform related functions.
 Each segment is actually a different logical address space of the program.
 When a process is to be executed, its corresponding segments are loaded into
non-contiguous memory, though every segment is loaded into a contiguous block
of available memory.
 Segmentation memory management works very similarly to paging, but here
segments are of variable length, whereas in paging pages are of fixed size.
 The operating system maintains a segment map table for every process and a list
of free memory blocks along with segment numbers, their size and
corresponding memory locations in main memory.
 For each segment, the table stores the starting address of the segment and the
length of the segment.
 A reference to a memory location includes a value that identifies a segment and
an offset.

 Segment Table – It maps a two-dimensional logical address into a one-dimensional
physical address. Each table entry has:
1. Base Address: It contains the starting physical address where the segment
resides in memory.
2. Limit: It specifies the length of the segment.

Translation of Two Dimensional Logical Address to one dimensional Physical Address.

Address generated by the CPU is divided into:
 Segment number (s): Number of bits required to represent the segment.
 Segment offset (d): Number of bits required to represent the offset within the
segment.
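The (s, d) translation can be sketched with a base/limit segment table. The table values below are hypothetical, chosen only to illustrate the base-plus-offset arithmetic and the limit check:

```python
# segment number -> (base address, limit); values assumed for illustration
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Map a (segment, offset) pair to a one-dimensional physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offsets at or beyond the limit trap to the OS (a protection fault).
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4353 (= 4300 + 53)
print(translate(1, 399))  # 6699 (= 6300 + 399)
```

The limit check is what gives segmentation its protection property: a process cannot address past the end of one of its segments.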
Advantages of Segmentation –
 No Internal fragmentation.

 Segment Table consumes less space in comparison to Page table in paging.


Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory space
is broken into little pieces, causing External fragmentation.

