Java Programming - Unit - 4


Unit-4

Memory management
Virtual memory
• Memory management is the functionality of an operating
system which handles or manages primary memory and
moves processes back and forth between main memory
and disk during execution.
• Memory management keeps track of each and every
memory location, regardless of whether it is allocated to
some process or free.
• It checks how much memory is to be allocated to
processes. It decides which process will get memory at
what time.
• It tracks whenever some memory gets freed or
unallocated and correspondingly it updates the status.
Logical and Physical Address in Operating System

• A Logical Address is generated by the CPU while a program is running.
Because it does not exist physically, the logical address is also
known as a Virtual Address.
• The CPU uses this address as a reference to access a physical memory
location. The term Logical Address Space refers to the set of all
logical addresses generated from a program's perspective.
• A hardware device called the Memory-Management Unit (MMU) maps each
logical address to its corresponding physical address.
• A Physical Address identifies a physical location of the required
data in memory. The user never deals with the physical address
directly, but accesses it through the corresponding logical address.
• The user program generates logical addresses and behaves as if it
were running in this logical address space, but the program needs
physical memory for its execution; therefore, logical addresses must
be mapped to physical addresses by the MMU before they are used.
• The term Physical Address Space refers to the set of all physical
addresses corresponding to the logical addresses in a logical address
space.
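The mapping above can be sketched as a small Java model, a minimal sketch assuming a single relocation (base) register and limit register per process; the class name, method name, and the numbers in `main` are illustrative, not a real OS interface.

```java
// Minimal model of logical-to-physical address translation via a
// relocation-register scheme (illustrative names, not a real OS API).
class Mmu {
    private final int relocationRegister; // smallest physical address of the process
    private final int limitRegister;      // range of valid logical addresses

    Mmu(int base, int limit) {
        this.relocationRegister = base;
        this.limitRegister = limit;
    }

    // Translate a logical (virtual) address to a physical address.
    // An out-of-range address models a trap to the operating system.
    int translate(int logicalAddress) {
        if (logicalAddress < 0 || logicalAddress >= limitRegister) {
            throw new IllegalArgumentException("trap: address out of range");
        }
        return relocationRegister + logicalAddress;
    }

    public static void main(String[] args) {
        Mmu mmu = new Mmu(14000, 3000); // process loaded at 14000, size 3000
        System.out.println(mmu.translate(346)); // 14000 + 346 = 14346
    }
}
```

Note that every logical address below the limit maps to a distinct physical address, which is why the user program can remain unaware of where it was actually loaded.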
Differences Between Logical and Physical Address in Operating System
Swapping

• Swapping is a mechanism in which a process can be swapped (moved)
temporarily out of main memory to secondary storage (disk), making
that memory available to other processes. At some later time, the
system swaps the process back from secondary storage into main
memory.
• Though performance is usually affected by the swapping process, it
helps in running multiple large processes in parallel, and for that
reason swapping is also known as a technique for memory compaction.
• The total time taken by the swapping process includes the time it
takes to move the entire process to secondary disk and then copy it
back to memory, as well as the time the process takes to regain main
memory.
• Let us assume that the user process is of size 2048 KB and that the
standard hard disk where swapping takes place has a data transfer
rate of around 1 MB (1024 KB) per second. The actual transfer of the
2048 KB process to or from memory will take
• 2048 KB / 1024 KB per second
• = 2 seconds
• = 2000 milliseconds
• Now, considering both swap-out and swap-in time, it will take a full
4000 milliseconds plus other overhead while the process competes to
regain main memory.
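The arithmetic above can be written out as a small helper, an illustrative sketch only (the method name is our own, and latency and contention overheads are ignored):

```java
// Round-trip swap time for a process, given disk transfer rate.
class SwapTime {
    // Returns total swap time in milliseconds: swap out plus swap in,
    // ignoring seek/latency overheads.
    static long swapTimeMillis(long processKB, long transferRateKBps) {
        long oneWayMs = processKB * 1000 / transferRateKBps;
        return 2 * oneWayMs; // out + back in
    }

    public static void main(String[] args) {
        // 2048 KB process at 1024 KB/s: 2000 ms each way, 4000 ms round trip.
        System.out.println(swapTimeMillis(2048, 1024)); // 4000
    }
}
```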
Memory Allocation
• Main memory usually has two partitions −
• Low Memory − Operating system resides in this memory.
• High Memory − User processes are held in high memory.
• The operating system uses the following memory allocation mechanisms.
• 1.Single-partition allocation
• In this type of allocation, relocation-register scheme is used to
protect user processes from each other, and from changing
operating-system code and data. Relocation register contains value
of smallest physical address whereas limit register contains range of
logical addresses. Each logical address must be less than the limit
register.
• 2.Multiple-partition allocation
• In this type of allocation, main memory is divided into a number of
fixed-sized partitions where each partition should contain only one
process. When a partition is free, a process is selected from the
input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process.
• 1. First Fit: The process is allocated to the first partition that
is large enough, searching from the top of main memory.
• 2. Best Fit: The process is allocated to the smallest partition that
is large enough among the free available partitions.
• 3. Worst Fit: The process is allocated to the largest partition that
is large enough among the freely available partitions in main memory.
• 4. Next Fit: Next fit is similar to first fit, but it searches for
the first sufficient partition starting from the last allocation
point.
• Consider the requests from processes in given order 300K, 25K, 125K
and 50K. Let there be two blocks of memory available of size 150K
followed by a block size 350K.
Which of the following partition allocation schemes can satisfy
above requests?
A) Best fit but not first fit. B) First fit but not best fit.
C) Both First fit & Best fit. D) neither first fit nor best fit.
• Solution: Let us try both strategies.
Best Fit:
300K is allocated from the 350K block; 50K is left in the block.
25K is allocated from the remaining 50K block; 25K is left in the block.
125K is allocated from the 150K block; 25K is left in this block as well.
50K cannot be allocated even though 25K + 25K space is available,
because the free space is not in a single block.
• First Fit:
The 300K request is allocated from the 350K block; 50K is left over.
25K is allocated from the 150K block; 125K is left over.
Then 125K and 50K are allocated to the remaining leftover partitions.
So first fit can handle all the requests.
• So option B is the correct choice.
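The worked answer can be double-checked by simulating both strategies, a minimal sketch in which block and request sizes are in KB and the method names are our own:

```java
// Simulate first-fit and best-fit allocation over a list of free blocks.
class FitDemo {
    // Place each request into the first block large enough (first fit).
    // Returns true only if every request can be satisfied.
    static boolean firstFit(int[] blocks, int[] requests) {
        int[] free = blocks.clone();
        for (int req : requests) {
            int chosen = -1;
            for (int i = 0; i < free.length; i++) {
                if (free[i] >= req) { chosen = i; break; } // first sufficient block
            }
            if (chosen == -1) return false;
            free[chosen] -= req;
        }
        return true;
    }

    // Place each request into the smallest block large enough (best fit).
    static boolean bestFit(int[] blocks, int[] requests) {
        int[] free = blocks.clone();
        for (int req : requests) {
            int chosen = -1;
            for (int i = 0; i < free.length; i++) {
                if (free[i] >= req && (chosen == -1 || free[i] < free[chosen])) {
                    chosen = i; // smallest sufficient block so far
                }
            }
            if (chosen == -1) return false;
            free[chosen] -= req;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] blocks = {150, 350};           // KB, in memory order
        int[] requests = {300, 25, 125, 50}; // KB, in arrival order
        System.out.println("first fit: " + firstFit(blocks, requests)); // true
        System.out.println("best fit:  " + bestFit(blocks, requests));  // false
    }
}
```

Running it confirms option B: first fit satisfies all four requests while best fit strands the last 50K request across two 25K fragments.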
Contiguous memory allocation

• There are two memory management techniques: Contiguous and
Non-Contiguous.
• In the Contiguous technique, an executing process must be loaded
entirely into main memory.
• The Contiguous technique can be divided into:
• Fixed (or static) partitioning
• Variable (or dynamic) partitioning
Fixed Partitioning

• This is the oldest and simplest technique used to put more than one
process in main memory.
• In this partitioning, the number of (non-overlapping) partitions in
RAM is fixed, but the sizes of the partitions may or may not be the
same.
• As this is contiguous allocation, no spanning is allowed. Partitions
are made before execution or during system configuration.
• As illustrated in above figure, first process is only
consuming 1MB out of 4MB in the main memory.
Hence, Internal Fragmentation in first block is (4-1) =
3MB.
Sum of Internal Fragmentation in every block =
(4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 = 7MB.
• Suppose a process P5 of size 7MB arrives. This process cannot be
accommodated in spite of the available free space, because of
contiguous allocation (spanning is not allowed). Hence, 7MB becomes
part of External Fragmentation.
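The internal-fragmentation sum above can be computed mechanically. This is an illustrative helper (not from any real API); a 0 entry means an empty partition, which counts as free space rather than internal fragmentation.

```java
// Sum internal fragmentation across fixed partitions.
class Fragmentation {
    // partitionMB[i] is the size of partition i; processMB[i] is the size of
    // the process loaded into it (0 = empty, contributes no internal waste).
    static int internalFragmentation(int[] partitionMB, int[] processMB) {
        int total = 0;
        for (int i = 0; i < partitionMB.length; i++) {
            if (processMB[i] > 0) {
                total += partitionMB[i] - processMB[i]; // unused space inside the partition
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Partitions of 4, 8, 8, 16 MB holding processes of 1, 7, 7, 14 MB:
        // (4-1)+(8-7)+(8-7)+(16-14) = 7 MB
        System.out.println(internalFragmentation(new int[]{4, 8, 8, 16},
                                                 new int[]{1, 7, 7, 14})); // 7
    }
}
```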
• There are some advantages and disadvantages of fixed
partitioning.
Advantages of Fixed Partitioning –
• Easy to implement:
Algorithms needed to implement Fixed Partitioning are
easy to implement. It simply requires putting a process
into certain partition without focusing on the emergence
of Internal and External Fragmentation.
• Little OS overhead:
Fixed Partitioning requires little extra or indirect computational
work.
Disadvantages of Fixed Partitioning –
• Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small,
occupies an entire partition. This can cause internal fragmentation.
• External Fragmentation:
The total unused space (as stated above) of the various partitions
cannot be used to load a process even though space is available,
because it is not in contiguous form (spanning is not allowed).
• Limit on process size:
A process larger than the largest partition in main memory cannot be
accommodated. The partition size cannot be varied according to the
size of the incoming process. Hence, a process of size 32MB in the
above example is invalid.
• Limitation on degree of multiprogramming:
Partitions in main memory are made before execution or during system
configuration, so main memory is divided into a fixed number of
partitions. The number of processes in memory can never exceed the
number of partitions in RAM; exceeding it is invalid in Fixed
Partitioning.
Variable (or dynamic) partitioning
• Variable Partitioning –
It is a part of Contiguous allocation technique. It is used to alleviate
the problem faced by Fixed Partitioning. In contrast with fixed
partitioning, partitions are not made before the execution or during
system configure.
Various features associated with variable partitioning:
• Initially RAM is empty, and partitions are made during run time
according to the processes' needs instead of during system
configuration.
• The size of a partition is equal to the size of the incoming process.
• The partition size varies according to the need of the process, so
internal fragmentation can be avoided and RAM used efficiently.
• The number of partitions in RAM is not fixed and depends on the
number of incoming processes and main memory's size.
Advantages of Variable Partitioning –
• No Internal Fragmentation:
In variable Partitioning, space in main memory is allocated strictly
according to the need of process, hence there is no case of internal
fragmentation. There will be no unused space left in the partition.
• No restriction on Degree of Multiprogramming:
More processes can be accommodated due to the absence of internal
fragmentation. Processes can be loaded until main memory is full.
• No Limitation on the size of the process:
In fixed partitioning, a process larger than the largest partition
could not be loaded, and a process cannot be divided, as that is
invalid in the contiguous allocation technique. In variable
partitioning, the process size is not restricted, since the partition
size is decided according to the process size.
Disadvantages of Variable Partitioning –

• Difficult Implementation:
Implementing variable partitioning is difficult compared to fixed
partitioning, as it involves allocation of memory at run time rather
than at system configuration.
• External Fragmentation:
There will be external fragmentation in spite of the absence of
internal fragmentation. For example, suppose in the example above,
process P1 (2MB) and process P3 (1MB) complete their execution,
leaving two free spaces of 2MB and 1MB. Now suppose a process P5 of
size 3MB arrives. The empty space in memory cannot be allocated to
it, as no spanning is allowed in contiguous allocation: the rule says
that a process must be present contiguously in main memory to be
executed. Hence this results in external fragmentation.
• Thus P5 of size 3 MB cannot be accommodated in spite of the required
space being available, because in contiguous allocation no spanning
is allowed.
Compaction
• We have seen that dynamic partitioning suffers from external
fragmentation, which can cause serious problems.
• One way to avoid external fragmentation is to change the rule which
says that a process can't be stored in different places in memory.
• We can also use compaction to minimize the probability
of external fragmentation. In compaction, all the free
partitions are made contiguous and all the loaded
partitions are brought together.
• By applying this technique, we can store the bigger
processes in the memory. The free partitions are merged
which can now be allocated according to the needs of
new processes. This technique is also called
defragmentation.
• As shown in the image above, the process P5, which
could not be loaded into the memory due to the lack of
contiguous space, can be loaded now in the memory
since the free partitions are made contiguous.
Problem with Compaction
• The efficiency of the system decreases during compaction, due to the
fact that all the free spaces must be transferred from several places
to a single place.
• A huge amount of time is invested in this procedure, and the CPU
remains idle for all that time. Despite the fact that compaction
avoids external fragmentation, it makes the system inefficient.
• Let us assume that the OS needs 6 ns to copy 1 byte from one place
to another.
• 1 B transfer needs 6 ns
• 256 MB transfer needs 256 x 2^20 x 6 x 10^-9 seconds, which is about
1.6 seconds
• Hence, transferring a large region of memory takes a substantial
amount of time, on the order of seconds.
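The copy-time estimate works out as follows (an illustrative helper; the method name is our own):

```java
// Estimate the time to copy a region of memory byte by byte.
class CompactionCost {
    // Seconds to copy `megabytes` MB at `nsPerByte` nanoseconds per byte.
    static double copyTimeSeconds(long megabytes, long nsPerByte) {
        long bytes = megabytes * 1024L * 1024L; // MB -> bytes (2^20 bytes per MB)
        return bytes * nsPerByte / 1e9;          // ns -> seconds
    }

    public static void main(String[] args) {
        // 256 MB at 6 ns/byte: 256 x 2^20 x 6 x 10^-9 ≈ 1.61 s of pure copying,
        // during which the CPU does no useful work for user processes.
        System.out.println(copyTimeSeconds(256, 6)); // 1.610612736
    }
}
```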
Paging with Example
• In Operating Systems, Paging is a storage mechanism used to
retrieve processes from the secondary storage into the main memory
in the form of pages.
• The main idea behind the paging is to divide each process in the form
of pages. The main memory will also be divided in the form of frames.
• One page of the process is to be stored in one of the frames of the
memory. The pages can be stored at the different locations of the
memory but the priority is always to find the contiguous frames or
holes.
• Pages of the process are brought into the main memory only when
they are required otherwise they reside in the secondary storage.
• Different operating systems define different frame sizes. All
frames must be of equal size. Considering that pages are mapped to
frames in paging, the page size needs to be the same as the frame
size.
Example
• Let us consider a main memory of size 16 KB and a frame size of
1 KB; the main memory will then be divided into a collection of 16
frames of 1 KB each.
• There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB
each. Each process is divided into pages of 1 KB each, so that one
page can be stored in one frame.
• Initially, all the frames are empty, so the pages of the processes
are stored contiguously.
• The frames, the pages and the mapping between the two are shown in
the image below.

• Let us consider that P2 and P4 are moved to the waiting state after
some time. Now 8 frames become empty, and other pages can therefore
be loaded into that empty space. A process P5 of size 8 KB (8 pages)
is waiting in the ready queue.
• We have 8 non-contiguous frames available in memory, and paging
provides the flexibility of storing a process in different places.
Therefore, we can load the pages of process P5 in place of P2 and P4.
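The example can be simulated with a tiny frame-table model. This is an illustrative sketch; `load` places a process's pages into any free frames, which is exactly the non-contiguous flexibility that paging provides.

```java
// Model main memory as an array of frames, each holding one page.
class PagingDemo {
    // frames[i] holds the id of the process whose page occupies frame i,
    // or null if the frame is free.
    private final String[] frames;

    PagingDemo(int frameCount) { frames = new String[frameCount]; }

    // Load a process of `pages` pages into any free frames; the frames
    // need not be contiguous. Returns false if too few frames are free.
    boolean load(String pid, int pages) {
        int free = 0;
        for (String f : frames) if (f == null) free++;
        if (free < pages) return false;
        for (int i = 0; i < frames.length && pages > 0; i++) {
            if (frames[i] == null) { frames[i] = pid; pages--; }
        }
        return true;
    }

    // Free every frame belonging to the given process.
    void unload(String pid) {
        for (int i = 0; i < frames.length; i++) {
            if (pid.equals(frames[i])) frames[i] = null;
        }
    }

    public static void main(String[] args) {
        PagingDemo mem = new PagingDemo(16); // 16 KB memory, 1 KB frames
        for (String p : new String[]{"P1", "P2", "P3", "P4"}) mem.load(p, 4);
        mem.unload("P2");
        mem.unload("P4");                     // 8 scattered frames are now free
        System.out.println(mem.load("P5", 8)); // true: pages may scatter
    }
}
```

Note that under contiguous allocation the same request would fail, since the 8 free frames are split into two separate 4-frame runs.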
Why Segmentation is required?
• In an operating system, an important issue in memory management is
the separation between the user's view of memory and the actual
physical memory. Paging provides a mapping between these two, but it
does not reflect the user's view.
• The user's view is mapped onto physical storage. This mapping
permits differentiation between logical memory and physical memory.
• The operating system may divide the same function across different
pages, and those pages may or may not be loaded into memory at the
same time; paging does not care about the user's view of the process.
This decreases the efficiency of the system.
• Segmentation is better in this respect, because it divides the
process into segments that match the user's view.
What is Segmentation?
• Segmentation is a memory management technique which supports the
user's view of memory. In this technique, a computer's primary
memory is divided into sections called segments.
• Types of Segmentation
• Virtual memory segmentation
Each process is divided into several segments, and it is not
essential for all of them to be resident in memory at any one point
in time.
• Simple segmentation
Each process is divided into many segments, and all segments are
loaded into memory at run time, but not necessarily contiguously.
Basic method for Segmentation
• In a computer system using segmentation, a logical address space
can be viewed as a collection of segments. A segment may grow or
shrink; that is, it is of variable length.
• Each segment has a name and a length. An address specifies both the
segment name and the displacement within the segment. The user
therefore specifies each address by two quantities: a segment name
and an offset.
• Normally, segments are numbered and referred to by a segment number
in place of a segment name. Thus a logical address consists of a
two-tuple:
• < segment-number, offset >
• Segment number (s) – identifies the segment; its field width is the
number of bits required to represent the segment number.
• Segment offset (d) – the displacement within the segment; its field
width is the number of bits required to represent the maximum
segment size.
Hardware support for segmentation
• In a program, the user refers to objects by a two-dimensional
address, but the actual physical memory is still, of course, a
one-dimensional sequence of bytes. Thus we have to define an
implementation to map two-dimensional user-defined addresses into
one-dimensional physical addresses.
• This mapping is effected by a segment table. Each entry in the
segment table has a segment base and a segment limit.
• Segment Base – contains the starting physical address where the
segment is kept in memory.
• Segment Limit – specifies the length of the segment.
Segmentation Hardware
• The logical address consists of two parts: a segment number (s) and
an offset (d) into that segment.
• The segment number is used as an index into the segment table.
• The offset d of the logical address must be between 0 and the
segment limit.
• If the offset is beyond the end of the segment, we trap to the
operating system.
• If the offset is within the limit, it is added to the segment base
to produce the address in physical memory. The segment table is thus
essentially an array of base-limit register pairs.
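The translation rule above can be sketched as follows. The base and limit values in `main` are illustrative, in the style of common textbook segment-table examples; the class and method names are our own.

```java
// Model a segment table as parallel base and limit arrays.
class SegmentTable {
    private final int[] base;  // starting physical address of each segment
    private final int[] limit; // length of each segment

    SegmentTable(int[] base, int[] limit) {
        this.base = base;
        this.limit = limit;
    }

    // Translate <segment s, offset d> to a physical address.
    // An offset beyond the segment limit models a trap to the OS.
    int translate(int s, int d) {
        if (d < 0 || d >= limit[s]) {
            throw new IllegalArgumentException("trap: offset beyond segment limit");
        }
        return base[s] + d;
    }

    public static void main(String[] args) {
        // Segment 0 at base 1400 (length 1000); segment 1 at base 6300 (length 400).
        SegmentTable st = new SegmentTable(new int[]{1400, 6300},
                                           new int[]{1000, 400});
        System.out.println(st.translate(1, 53)); // 6300 + 53 = 6353
    }
}
```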
Advantages of Segmentation
• There is no internal fragmentation.
• Segment Table is used to record the segments and it
consumes less space in comparison to the Page table in
paging.
Disadvantage of Segmentation
• At the time of swapping, processes are loaded into and removed from
main memory; the free memory space is then broken into small pieces,
which causes external fragmentation.
