UNIT-3

I. Memory-Management Strategies
1. Introduction
2. Swapping
3. Contiguous memory allocation
4. Paging
5. Segmentation

II. Virtual Memory Management


1. Introduction
2. Demand paging
3. Copy on-write
4. Page replacement
5. Frame allocation
6. Thrashing
7. Memory-mapped files
8. Kernel memory allocation

Prepared by Mr. Isaac Paul P, Assoc Prof & HOD, Dept of CSE, RISE Krishna Sai Gandhi Groups
UNIT-3
I. Memory-Management Strategies
1. Introduction
 Program must be brought (from disk) into memory and placed within a
process for it to be run.
 Main memory and registers are the only storage the CPU can access directly
 The memory unit sees only a stream of “address + read” requests, or “address +
data + write” requests
 Register access takes one CPU clock cycle (or less); main memory access can take
many cycles, causing a stall
 Cache sits between main memory and CPU registers
 Protection of memory is required to ensure correct operation

[Figure: memory hierarchy – microprocessor (control unit, ALU, registers), cache, RAM, and disk]

Base and Limit Registers
 A pair of base and limit registers define the logical address space
 CPU must check every memory access generated in user mode to be sure it is
between base and limit for that user

Hardware Address Protection
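The figure for this check is not reproduced here; as a minimal sketch in C, the hardware compares every user-mode address against the two registers (the register values below are hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative base/limit values; the real ones are loaded by the kernel. */
unsigned base = 300040, limit = 120900;

/* Every user-mode access must satisfy base <= addr < base + limit. */
unsigned check_access(unsigned addr) {
    if (addr < base || addr >= base + limit) {
        fprintf(stderr, "trap to OS: addressing error at %u\n", addr);
        exit(EXIT_FAILURE);
    }
    return addr;                /* access is allowed and goes to memory */
}
```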

Address Binding
 Programs on disk, ready to be brought into memory to execute form an input
queue
o Without support, must be loaded into address 0000
 Further, addresses represented in different ways at different stages of a
program’s life
o Source code addresses usually symbolic
o Compiled code addresses bind to relocatable addresses
 i.e. “14 bytes from beginning of this module”
o Linker or loader will bind relocatable addresses to absolute addresses
 i.e. 74014
o Each binding maps one address space to another

Binding of Instructions and Data to Memory


 Address binding of instructions and data to memory addresses can happen at
three different stages
o Compile time: If memory location known a priori, absolute code can
be generated; must recompile code if starting location changes
o Load time: Must generate relocatable code if memory location is
not known at compile time
o Execution time: Binding delayed until run time if the process can be
moved during its execution from one memory segment to another
 Need hardware support for address maps (e.g., base and limit
registers)
Multistep Processing of a User Program

Logical vs. Physical Address Space

 The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management
 Logical address – generated by the CPU; also referred to as virtual
address
 Physical address – address seen by the memory unit
 Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
execution-time address-binding scheme
 Logical address space is the set of all logical addresses generated by a
program
 Physical address space is the set of all physical addresses corresponding to the
logical addresses generated by a program

Memory-Management Unit (MMU)

 Hardware device that at run time maps virtual to physical address


 To start, consider simple scheme where the value in the relocation register is
added to every address generated by a user process at the time it is sent to
memory
o Base register now called relocation register
o MS-DOS on Intel 80x86 used 4 relocation registers
 The user program deals with logical addresses; it never sees the real
physical addresses
o Execution-time binding occurs when reference is made to location
in memory
o Logical address bound to physical addresses
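A minimal sketch of this mapping in C (the relocation value 14000 and logical address 346 follow the classic example and are purely illustrative):

```c
/* MMU with a relocation register: the hardware adds the register value to
   every logical address at the moment it is sent to memory. */
unsigned relocation = 14000;          /* illustrative register value */

unsigned to_physical(unsigned logical) {
    return logical + relocation;      /* e.g. logical 346 -> physical 14346 */
}
```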

Dynamic relocation using a relocation register

[Figure: MMU adds the relocation register (14000) to the logical address (346), giving physical address 14346]

Dynamic Loading

 Routine is not loaded until it is called
 Better memory-space utilization; unused routine is never loaded
 All routines kept on disk in relocatable load format
 Useful when large amounts of code are needed to handle infrequently
occurring cases
 No special support from the operating system is required
o Implemented through program design
o OS can help by providing libraries to implement dynamic loading

Dynamic Linking
 Static linking – system libraries and program code combined by the loader
into the binary program image
 Dynamic linking – linking postponed until execution time
 Small piece of code, stub, used to locate the appropriate memory-resident
library routine
 Stub replaces itself with the address of the routine, and executes the routine
 Operating system checks if the routine is in the process’s memory address space
o If not in address space, add to address space
 Dynamic linking is particularly useful for libraries
 System also known as shared libraries
 Consider applicability to patching system libraries
o Versioning may be needed

2. Swapping

a. A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution
i. Total physical memory space of processes can exceed physical memory
b. Backing store – fast disk large enough to accommodate copies of all memory images
for all users; must provide direct access to these memory images
c. Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed
d. Major part of swap time is transfer time; total transfer time is directly proportional to
the amount of memory swapped
e. System maintains a ready queue of ready-to-run processes which have memory
images on disk

Context Switch Time including Swapping

a. If the next process to be put on the CPU is not in memory, need to swap out a process and
swap in the target process
b. Context switch time can then be very high
c. 100 MB process swapping to a hard disk with a transfer rate of 50 MB/sec
i. Swap out time = 100 MB / (50 MB/sec) = 2 sec = 2000 ms
ii. Plus swap in of same sized process
iii. Total context switch swapping component time of 4000 ms (4 seconds)
d. Other constraints as well on swapping
i. Pending I/O – can’t swap out as I/O would occur to wrong process
ii. Or always transfer I/O to kernel space, then to I/O device
 Known as double buffering, adds overhead
e. Standard swapping not used in modern operating systems
i. But modified version common
 Swap only when free memory extremely low

Advantages of Swapping :
a. It helps the CPU to manage multiple processes within a single main memory.
b. It helps to create and use virtual memory.
c. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes
do not have to wait very long before they are executed.
d. It improves the main memory utilization.

Disadvantages of Swapping :
a. If the computer system loses power during substantial swapping activity, the user may
lose all information related to the program.
b. If the swapping algorithm is not good, it can increase the number of page faults and
decrease the overall processing performance.

3. Contiguous memory allocation


 Contiguous memory allocation is a memory allocation method that allocates a single
contiguous section of memory to a process or a file.
 This method takes into account the size of the file or a process and also estimates the
maximum size, up to what the file or process can grow.
 Relocation registers used to protect user processes from each other, and from changing
operating-system code and data
o Base register contains value of smallest physical address
o Limit register contains range of logical addresses – each logical address must
be less than the limit register
o MMU maps logical address dynamically
o Can then allow actions such as kernel code being transient and kernel
changing size

Dynamic Storage-Allocation Problem

How to satisfy a request of size n from a list of free holes?

 First-fit: Allocate the first hole that is big enough

 Best-fit: Allocate the smallest hole that is big enough; must search entire list,
unless ordered by size
o Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search entire list
o Produces the largest leftover hole

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
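As a minimal sketch, first-fit can be implemented as a scan over a linked list of free holes (the structures here are hypothetical, not from the original notes):

```c
#include <stddef.h>

/* One free hole of physical memory; holes form a singly linked list. */
struct hole { size_t start, size; struct hole *next; };

/* First-fit: return the start address of an allocated block of size n,
   or (size_t)-1 if no hole is big enough. */
size_t first_fit(struct hole *free_list, size_t n) {
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {      /* first hole that is big enough */
            size_t addr = h->start;
            h->start += n;       /* shrink the hole from its front */
            h->size  -= n;       /* a real allocator would unlink empty holes */
            return addr;
        }
    }
    return (size_t)-1;           /* request cannot be satisfied */
}
```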

Fragmentation

As processes are loaded and removed from memory, the free memory space is broken into little
pieces. After some time, these pieces become too small to be allocated to processes, and so they
remain unused. This problem is known as fragmentation.

The following diagram shows how fragmentation can cause waste of memory and a compaction
technique can be used to create more free memory out of fragmented memory –

 All the memory allocation strategies suffer from external fragmentation, though
first and best fits experience the problems more so than worst fit. External
fragmentation means that the available memory is broken up into lots of little pieces,
none of which is big enough to satisfy the next memory requirement, although the
sum total could.
 The amount of memory lost to fragmentation may vary with algorithm, usage
patterns, and some design decisions such as which end of a hole to allocate and
which end to save on the free list.
 Statistical analysis of first fit, for example, shows that for N blocks of allocated
memory, another 0.5 N will be lost to fragmentation.

 Internal fragmentation also occurs with all memory allocation strategies. This is
caused by the fact that memory is allocated in blocks of a fixed size, whereas the
actual memory needed will rarely be that exact size. For a random distribution of
memory requests, on average the last allocated block will be only half full, so
roughly half a block is wasted per request.

o Note that the same effect happens with hard drives, and that modern hardware
gives us increasingly larger drives and memory at the expense of ever larger
block sizes, which translates to more memory lost to internal fragmentation.
o Some systems use variable size blocks to minimize losses due to internal
fragmentation.
 If the programs in memory are relocatable (using execution-time address binding),
then the external fragmentation problem can be reduced via compaction, i.e. moving all
processes down to one end of physical memory. This only involves updating the
relocation register for each process, as all internal work is done using logical
addresses.

4. Paging
 Paging is a memory management technique in which process address space is broken into
blocks of the same size called pages (size is power of 2, between 512 bytes and 8192
bytes). The size of the process is measured in the number of pages.
 Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have optimum
utilization of the main memory and to avoid external fragmentation.
 Divide physical memory into fixed-sized blocks called frames
a. Size is power of 2, between 512 bytes and 16 Mbytes
 Divide logical memory into blocks of same size called pages
 Keep track of all free frames
 To run a program of size N pages, need to find N free frames and load program
 Set up a page table to translate logical to physical addresses
 Backing store likewise split into pages

 Address generated by CPU is divided into:
a. Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
b. Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory
page number p (m − n bits) | page offset d (n bits)

c. For a given logical address space of size 2^m and page size 2^n
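A minimal C sketch of this split and the page-table lookup (the 4-KB page size and the table length are hypothetical):

```c
#include <stdint.h>

enum { N = 12 };                 /* page size 2^12 = 4 KB, so n = 12 */
uint32_t page_table[1024];       /* frame number for each page (example size) */

uint32_t translate(uint32_t logical) {
    uint32_t p = logical >> N;                /* page number: high m - n bits */
    uint32_t d = logical & ((1u << N) - 1);   /* page offset: low n bits */
    return (page_table[p] << N) | d;          /* frame base plus offset */
}
```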

Paging Model of Logical and Physical Memory


 Calculating internal fragmentation
o Page size = 2,048 bytes
o Process size = 72,766 bytes
o 72,766 / 2,048 = 35 pages + 1,086 bytes
o Number of total pages allotted = 36 pages
o Internal fragmentation = 2,048 − 1,086 = 962 bytes (the unused part of the last page)

5. Segmentation

 Memory-management scheme that supports user view of memory


 A program is a collection of segments
o A segment is a logical unit such as:
 main program
 procedure
 function
 method
 object
 local variables, global variables
 common block
 stack
 symbol table
 arrays

User’s View of a Program

Logical View of Segmentation

[Figure: segments 1–4 in user space mapped to noncontiguous blocks of physical memory]

Segmentation Architecture

 In Operating Systems, Segmentation is a memory management technique in
which the memory is divided into variable-size parts. Each part is
known as a segment, which can be allocated to a process.
 The details about each segment are stored in a table called a segment table.
 Logical address consists of a two tuple:
 <segment-number, offset>,
 Segment table – maps two-dimensional user-defined addresses into one-dimensional
physical addresses; each table entry has:
o base – contains the starting physical address where the segments reside
in memory
o limit – specifies the length of the segment
o Segment-table base register (STBR) points to the segment table’s
location in memory
 Segment-table length register (STLR) indicates number of segments used
by a program;
 segment number s is legal if s < STLR
 Protection
o With each entry in segment table associate:
 validation bit = 0 ⇒ illegal segment
 read/write/execute privileges
 Protection bits associated with segments; code sharing occurs at segment level
 Since segments vary in length, memory allocation is a dynamic storage-allocation
problem
 A segmentation example is shown in the following diagram
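The diagram itself is not reproduced here; a minimal C sketch of the translation, with bounds checks, is below (the segment-table values follow the classic example and are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

struct seg_entry { unsigned base, limit; };

/* Illustrative segment table: <base, limit> per segment; STLR = entry count. */
struct seg_entry seg_table[] = {{1400, 1000}, {6300, 400}, {4300, 400}};
unsigned STLR = 3;

unsigned translate(unsigned s, unsigned d) {   /* logical address <s, d> */
    if (s >= STLR || d >= seg_table[s].limit) {
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;              /* physical address */
}
```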

Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.

Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable sized partition.
3. Costly memory management algorithms.

II. Virtual Memory Management
1. Introduction

Virtual Memory is a storage scheme that provides the user with the illusion of having a very big
main memory. This is done by treating a part of secondary memory as if it were main memory.

A computer can address more memory than the amount physically installed on the system.
This extra memory is actually called virtual memory and it is a section of a hard disk.

In this scheme, the user can load processes bigger than the available main memory, under the
illusion that enough memory is available to load the whole process.

Instead of loading one big process in the main memory, the Operating System loads the
different parts of more than one process in the main memory.

By doing this, the degree of multiprogramming will be increased and therefore, the CPU
utilization will also be increased

Virtual Memory That is Larger Than Physical Memory

Virtual address space – logical view of how process is stored in memory
o Usually start at address 0, contiguous addresses until end of space
o Meanwhile, physical memory organized in page frames
o MMU must map logical to physical

 Virtual memory can be implemented via:


o Demand paging
o Demand segmentation

Shared Library Using Virtual Memory

2. Demand paging

The process of loading the page into memory on demand (whenever page fault occurs) is
known as demand paging.
The process includes the following steps :

1. If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the
OS must bring the required page into the memory.
3. The OS will locate the required page on the backing store (secondary storage).
4. The required page will be brought from the backing store into a free frame in physical
memory. Page replacement algorithms are used to decide which page in physical
memory to replace when no free frame is available.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place
the process back into the ready state.

Valid-Invalid Bit

 With each page table entry a valid–invalid bit is associated


(v ⇒ in-memory, i.e. memory resident; i ⇒ not-in-memory)
 Initially valid–invalid bit is set to i on all entries
 During MMU address translation, if valid–invalid bit in page table entry is i ⇒
page fault
 Example of a page table snapshot:

Page Fault
A page fault is a trap that occurs when a program tries to access a page that is not currently in
physical memory (main memory). The fault requires the operating system to locate the page
through its virtual memory management structures and bring it from secondary storage, such
as a hard disk, into primary memory.
When a page fault occurs:
 Operating system looks at another table to decide:
o Invalid reference ⇒ abort
o Just not in memory
 Find free frame
 Swap page into frame via scheduled disk operation
 Reset tables to indicate page now in memory
Set validation bit = v
 Restart the instruction that caused the page fault

The procedure of page fault handling in the OS:
1. Firstly, an internal table for this process is checked to assess whether the reference
was a valid or an invalid memory access.
2. If the reference was invalid, the process is terminated.
Otherwise, the page will be paged in.
3. After that, a free frame is located using the free-frame list.
4. Now, the disk operation would be scheduled to get the required page from the disk.
5. When the I/O operation is completed, the process's page table will be updated with
a new frame number, and the invalid bit will be changed. Now, it is a valid page
reference.
6. Finally, the instruction interrupted by the page fault is restarted. If another page
fault occurs, these steps are repeated.

Performance of Demand Paging

 Three major activities


o Service the interrupt – careful coding means just several hundred instructions
needed
o Read the page – lots of time
o Restart the process – again just a small amount of time
 Page Fault Rate: 0 ≤ p ≤ 1
o if p = 0, no page faults
o if p = 1, every reference is a fault
 Effective Access Time (EAT)
EAT = (1 – p) x memory access
+ p (page fault overhead
+ swap page out
+ swap page in )

Example
 Memory access time = 200 nanoseconds
 Average page-fault service time = 8 milliseconds
 EAT = (1 – p) x 200 + p (8 milliseconds)
= 200 – p x 200 + p x 8,000,000
= 200 + p x 7,999,800
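For instance, if one access in 1,000 causes a page fault (p = 0.001), then EAT = 200 + 0.001 x 7,999,800 = 8,199.8 nanoseconds, i.e. about 8.2 microseconds, a slowdown by a factor of roughly 40 compared with the 200-nanosecond memory access.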

3. Copy on-write

Copy-on-Write (COW) allows both parent and child processes to initially share the
same pages in memory
o If either process modifies a shared page, only then is the page copied
COW allows more efficient process creation as only modified pages are copied

 Copy-on-Write (CoW) is mainly a resource management technique that allows
the parent and child process to share the same pages of memory initially. If any
process, either parent or child, modifies a shared page, only then is the page copied.

The CoW is basically a technique of efficiently copying the data resources in the computer
system. In this case, if a unit of data is copied but is not modified then "copy" can mainly
exist as a reference to the original data.

But when the copied data is modified, a real copy is created at that time (new bytes are
actually written), as suggested by the name of the technique. The main use of this
technique is in the implementation of the fork() system call, which shares the virtual
memory pages of the operating system between parent and child.

Recall in the UNIX(OS), the fork() system call is used to create a duplicate process of the
parent process which is known as the child process.

The CoW technique is used by several Operating systems like Linux, Solaris, and Windows
XP. Let us take an example where Process A creates a new process that is Process B, initially
both these processes will share the same pages of the memory.

Now, let us assume that process A wants to modify a page in the memory. When the Copy-
on-write(CoW) technique is used, only those pages that are modified by either process are
copied; all the unmodified pages can be easily shared by the parent and child process.
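A minimal POSIX sketch of this behavior (illustrative only; the copy-on-write sharing itself happens inside the kernel and is invisible to the program):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int shared_value = 42;       /* after fork(), this page is shared copy-on-write */

int main(void) {
    if (fork() == 0) {                              /* child process */
        shared_value = 99;   /* first write: kernel copies just this page */
        printf("child sees %d\n", shared_value);    /* prints 99 */
    } else {                                        /* parent process */
        wait(NULL);
        printf("parent sees %d\n", shared_value);   /* still prints 42 */
    }
    return 0;
}
```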

Whenever it is determined that a page is going to be duplicated using the copy-on-write
technique, then it is important to note the location from where the free pages will be
allocated. There is a pool of free pages for such requests; provided by many operating
systems.

And these free pages are allocated typically when the stack/heap for a process must expand or
when there are copy-on-write pages to manage.

These pages are typically allocated using a technique known as zero-fill-on-demand: the
pages are zeroed out before being allocated, thus erasing their previous content.

4. Page replacement

 The page replacement algorithm decides which memory page is to be replaced.


 The process of replacement is sometimes called swap out or write to disk.
 Page replacement is done when the requested page is not found in the main memory
(page fault).

1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to select a
victim frame
- Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page and
frame tables
4. Continue the process by restarting the instruction that caused the trap

Note now potentially 2 page transfers for page fault – increasing EAT

Page and Frame Replacement Algorithms
 Frame-allocation algorithm determines
o How many frames to give each process
o Which frames to replace
 Page-replacement algorithm
o Want lowest page-fault rate on both first access and re-access
 Evaluate algorithm by running it on a particular string of memory
references (reference string) and computing the number of page faults on
that string
o String is just page numbers, not full addresses
o Repeated access to the same page does not cause a page fault
o Results depend on number of frames available
 In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1

First-In-First-Out (FIFO) Algorithm
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process); this reference string
produces 15 page faults
 Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5


o Adding more frames can cause more page faults!
 Belady’s Anomaly
 How to track ages of pages?
o Just use a FIFO queue
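As a minimal sketch, the following C program simulates FIFO replacement with 3 frames on the reference string above and counts the 15 faults:

```c
#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frames[3] = {-1, -1, -1};        /* three empty frames */
    int next = 0, faults = 0;            /* next = oldest (FIFO victim) slot */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == ref[i]) { hit = 1; break; }
        if (!hit) {                      /* page fault: replace oldest page */
            frames[next] = ref[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults: %d\n", faults); /* prints 15 for this string */
    return 0;
}
```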

Least Recently Used (LRU) Algorithm

 Use past knowledge rather than future


 Replace page that has not been used in the most amount of time
 Associate time of last use with each page

 12 faults – better than FIFO but worse than OPT


 Generally good algorithm and frequently used

Page-Buffering Algorithms

 Keep a pool of free frames, always


o Then frame available when needed, not found at fault time
o Read page into free frame and select victim to evict and add to
free pool
o When convenient, evict victim
 Possibly, keep list of modified pages
o When backing store otherwise idle, write pages there and set to
non-dirty
 Possibly, keep free frame contents intact and note what is in them
o If referenced again before reused, no need to load contents again
from disk
o Generally useful to reduce penalty if wrong victim frame selected

5. Frame allocation
The main memory of the operating system is divided into frames. A process’s pages are
stored in these frames, and once the pages are loaded into frames, the CPU may run the
process. The operating system must therefore set aside enough frames for each process,
and it uses various algorithms to assign them.

Demand paging is used to implement virtual memory, an essential operating system feature.
It requires the development of a page replacement mechanism and a frame allocation system.
If OS is running multiple processes, the frame allocation techniques are utilized to define
how many frames to allot to each one.

A number of factors constrain the strategies for allocating frames:


1. OS cannot assign more frames than the total number of frames available.
2. A specific number of frames should be assigned to each process. This limitation is
due to two factors.
 The first is that when the number of frames assigned drops, the page fault ratio
grows, decreasing the process's execution performance.
 Second, there should be sufficient frames to hold all the multiple pages that
any instruction may reference.

Fixed Allocation

1. Equal allocation – For example, if there are 100 frames (after allocating frames for the
OS) and 5 processes, give each process 20 frames

 Keep some as free frame buffer pool

2. Proportional allocation – Allocate according to the size of process

 Dynamic as degree of multiprogramming, process sizes change


There are mainly five ways of frame allocation algorithms in the OS. These are as follows:

1. Equal Frame Allocation

For example, if there are 100 frames (after allocating frames for the OS) and 5
processes, give each process 20 frames

1. Keep some as free frame buffer pool

2. Proportional Frame Allocation


For a process p_i of size s_i, the number of allocated frames is a_i = (s_i / S) * m, where S is
the sum of the sizes of all the processes and m is the number of frames in the system.

For instance, in a system with 62 frames, if there is a process of 10KB and another
process of 127KB, then the first process will be allocated (10/137)*62 = 4 frames and
the other process will get (127/137)*62 = 57 frames.
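A minimal C sketch of this calculation (the sizes 10 KB and 127 KB and m = 62 come from the example above; integer division gives the truncated frame counts):

```c
#include <stdio.h>

int main(void) {
    int size[] = {10, 127};                 /* process sizes in KB */
    int m = 62, S = 0;                      /* m = frames in the system */
    int n = sizeof size / sizeof size[0];

    for (int i = 0; i < n; i++) S += size[i];   /* S = 137 */
    for (int i = 0; i < n; i++)
        printf("process %d gets %d frames\n", i,
               size[i] * m / S);            /* prints 4 and 57 */
    return 0;
}
```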

3. Priority Frame Allocation


 Use a proportional allocation scheme using priorities rather than size
 If process Pi generates a page fault,
o select for replacement one of its frames
o select for replacement a frame from a process with lower priority number
4. Global Replacement Allocation
Process selects a replacement frame from the set of all frames; one process can take a
frame from another
1. But then process execution time can vary greatly
2. But greater throughput so more common

5. Local Replacement Allocation


Each process selects from only its own set of allocated frames
1. More consistent per-process performance
2. But possibly underutilized memory

6. Thrashing

Thrashing is when page faults and swapping happen very frequently, at a high rate, so that
the operating system has to spend most of its time swapping pages back and forth. This
state in the operating system is known as thrashing. Because of thrashing, CPU utilization
becomes very low or negligible.

In computer science, thrashing is the poor performance of a virtual memory (or paging) system
when the same pages are being loaded repeatedly due to a lack of main memory to keep them
in memory. Depending on the configuration and algorithm, the actual throughput of a system
can degrade by multiple orders of magnitude.

In computer science, thrashing occurs when a computer's virtual memory resources are
overused, leading to a constant state of paging and page faults, inhibiting most application-
level processing. It causes the performance of the computer to degrade or collapse. The
situation can continue indefinitely until the user closes some running applications or the active
processes free up additional virtual memory resources.

To know more clearly about thrashing, first, we need to know about page fault and swapping.

o Page fault: We know every program is divided into some pages. A page fault occurs
when a program attempts to access data or code in its address space that is not
currently located in the system RAM.
o Swapping: Whenever a page fault happens, the operating system will try to fetch that
page from secondary memory and try to swap it with one of the pages in RAM. This
process is called swapping.

Algorithms during Thrashing

Whenever thrashing starts, the operating system tries to apply either the Global page
replacement Algorithm or the Local page replacement algorithm.

1. Global Page Replacement


Since global page replacement can bring any page, it tries to bring more pages whenever
thrashing is found. But what actually will happen is that no process gets enough frames, and
as a result, the thrashing will increase more and more. Therefore, the global page replacement
algorithm is not suitable when thrashing happens.
2. Local Page Replacement
Unlike the global page replacement algorithm, local page replacement will select pages
which only belong to that process. So there is a chance to reduce the thrashing. But it is
proven that there are many disadvantages if we use local page replacement. Therefore, local
page replacement is just an alternative to global page replacement in a thrashing scenario.
Causes of Thrashing
Programs or workloads may cause thrashing, and it results in severe performance
problems, such as:
o If CPU utilization is too low, we increase the degree of multiprogramming by
introducing a new process to the system. A global page replacement algorithm is used.
The CPU scheduler sees the decreasing CPU utilization and increases the degree of
multiprogramming.
o CPU utilization is plotted against the degree of multiprogramming.
o As the degree of multiprogramming increases, CPU utilization also increases.
o If the degree of multiprogramming is increased further, thrashing sets in, and CPU
utilization drops sharply.
o So, at this point, to increase CPU utilization and to stop thrashing, we must decrease
the degree of multiprogramming
How to Eliminate Thrashing
Thrashing has some negative impacts on hard drive health and system performance.
Therefore, it is necessary to take some actions to avoid it. To resolve the problem of
thrashing, here are the following methods, such as:
o Adjust the swap file size: If the system swap file is not configured correctly, disk
thrashing can also happen.
o Increase the amount of RAM: As insufficient memory can cause disk thrashing, one
solution is to add more RAM to the machine. With more memory, the computer can
handle tasks easily and does not have to work excessively. Generally, this is the best
long-term solution.
o Decrease the number of applications running on the computer: Too many
applications running in the background consume a lot of system resources, and the
scarce remaining resources can result in thrashing. Closing some applications releases
their resources and helps avoid thrashing to some extent.
o Replace programs: Replace memory-heavy programs with equivalents that use less
memory.

7. Memory-mapped files

Consider a sequential read of a file on disk using the standard system calls open(), read(),
and write(): each file access requires a system call and a disk access. Alternatively, memory
mapping a file allows a part of the virtual address space to be logically associated with the
file. Memory mapping a file can significantly improve the performance of file I/O.

 A file is initially read using demand paging


o A page-sized portion of the file is read from the file system into a physical page
o Subsequent reads/writes to/from the file are treated as ordinary memory accesses
 Simplifies and speeds file access by driving file I/O through memory rather than read()
and write() system calls
 Also allows several processes to map the same file allowing the pages in memory to be
shared

Memory-Mapped File Technique for all I/O


 Some OSes use memory-mapped files for standard I/O
 Process can explicitly request memory mapping a file via mmap() system call
o Now file mapped into process address space
 For standard I/O (open(), read(), write(), close()), mmap anyway
o But map file into kernel address space
o Process still does read() and write()
 Copies data to and from kernel space and user space
o Uses efficient memory management subsystem
 Avoids needing separate subsystem
 Copy on Write(COW) can be used for read/write non-shared pages
 Memory mapped files can be used for shared memory (although again via separate
system calls)
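A minimal POSIX sketch of mmap() (the file name data.txt is hypothetical and error handling is abbreviated):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);   /* hypothetical input file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) return 1;

    /* Map the whole file; reads of p[] are now ordinary memory accesses. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) return 1;

    fwrite(p, 1, st.st_size, stdout);      /* file contents, no read() calls */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```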

Memory Mapped Files

Shared Memory via Memory-Mapped I/O

Shared Memory in Windows API

 First create a file mapping for file to be mapped


o Then establish a view of the mapped file in process’s virtual address space
 Consider producer / consumer
o Producer create shared-memory object using memory mapping features
o Open file via CreateFile(), returning a HANDLE
o Create mapping via CreateFileMapping() creating a named shared-memory object
o Create view via MapViewOfFile()
 Sample code in Textbook
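The textbook sample is not reproduced here; the following is a hedged sketch of the producer side using those calls (the file name, object name, and 4-KB mapping size are illustrative):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE hFile = CreateFile(TEXT("temp.txt"), GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    /* Named shared-memory object backed by the file. */
    HANDLE hMap = CreateFileMapping(hFile, NULL, PAGE_READWRITE,
                                    0, 4096, TEXT("SharedObject"));
    LPVOID view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    sprintf((char *)view, "Shared memory message");  /* producer writes */

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}
```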

8. Kernel memory allocation

 When a process running in user mode requests additional memory, pages are allocated
from the list of free page frames maintained by the kernel.
 This list is typically populated using a page-replacement algorithm and most likely
contains free pages scattered throughout physical memory.
 If a user process requests a single byte of memory, internal fragmentation will result, as
the process will be granted an entire page frame.
 Kernel memory is often allocated from a free-memory pool different from the list used to
satisfy ordinary user-mode processes.
 There are two primary reasons for this:
1. The kernel requests memory for data structures of varying sizes, some of which are less
than a page in size.
2. Pages allocated to user-mode processes do not necessarily have to be in contiguous
physical memory.
Buddy System
 The buddy system allocates memory from a fixed-size segment consisting of physically
contiguous pages.
 Memory is allocated from this segment using a power-of-2 allocator, which satisfies
requests in units sized as a power of 2 (4 KB, 8 KB, 16 KB, and so forth). A request in
units not appropriately sized is rounded up to the next highest power of 2.
 For example, a request for 11 KB is satisfied with a 16-KB segment.
 Let’s consider a simple example. Assume the size of a memory segment is initially 256
KB and the kernel requests 21 KB of memory.
 The segment is initially divided into two buddies—which we will call AL and AR — each
128 KB in size.
 One of these buddies is further divided into two 64-KB buddies BL and BR. However, the
next-highest power of 2 from 21 KB is 32 KB so either
 BL or BR is again divided into two 32-KB buddies, CL and CR. One of these buddies is used
to satisfy the 21-KB request.

This scheme is illustrated in the following diagram:

[Figure: a 256-KB segment split into buddies AL and AR (128 KB each); AL split into BL and BR (64 KB each); BL split into CL and CR (32 KB each), one of which satisfies the 21-KB request]
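A minimal C sketch of the rounding step, reproducing the 11-KB and 21-KB examples above:

```c
#include <stdio.h>

/* Round a request up to the next power of 2, as the buddy allocator does. */
unsigned next_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void) {
    printf("11 KB request -> %u-KB block\n", next_pow2(11));  /* 16 */
    printf("21 KB request -> %u-KB block\n", next_pow2(21));  /* 32 */
    return 0;
}
```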

Slab Allocator
 An alternate strategy is the slab allocator.
 Slab is one or more physically contiguous pages
 Cache consists of one or more slabs
 Single cache for each unique kernel data structure
o Each cache filled with objects – instantiations of the data structure
 When cache created, filled with objects marked as free
 When structures stored, objects marked as used
 If slab is full of used objects, next object allocated from empty slab
o If no empty slabs, new slab allocated
 Benefits include no fragmentation, fast memory request satisfaction
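As a minimal illustration (not the real kernel interface), a fixed-size object cache over a single slab might look like this; the object type and slab length are hypothetical:

```c
#include <stddef.h>

#define NOBJS 64
struct kobj { char data[128]; };      /* hypothetical kernel data structure */

static struct kobj slab[NOBJS];       /* one slab of pre-instantiated objects */
static int used[NOBJS];               /* 0 = free, 1 = used */

struct kobj *cache_alloc(void) {
    for (int i = 0; i < NOBJS; i++)
        if (!used[i]) { used[i] = 1; return &slab[i]; }
    return NULL;                      /* slab full: allocate a new slab here */
}

void cache_free(struct kobj *o) {
    used[o - slab] = 0;               /* object is simply marked free again */
}
```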
