OS(UNIT-4) (1)
o Memory manager is used to keep track of the status of memory locations, whether they are
free or allocated. It addresses primary memory by providing abstractions so that software
perceives a large memory as allocated to it.
o Memory manager permits computers with a small amount of main memory to execute
programs larger than the available main memory. It does this by moving information back
and forth between primary memory and secondary memory using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process
from being corrupted by another process. If this is not ensured, then the system may exhibit
unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.
Differences Between Logical and Physical Address in Operating System
1. The basic difference between a logical and a physical address is that a logical address is
generated by the CPU from the perspective of a program, whereas a physical address is a
location that exists in the memory unit.
2. Logical Address Space is the set of all logical addresses generated by the CPU for a program,
whereas the set of all physical addresses mapped to the corresponding logical addresses is
called Physical Address Space.
3. A logical address does not exist physically in the memory, whereas a physical address is a
location in the memory that can be accessed physically.
4. Compile-time and load-time address binding methods generate identical logical and physical
addresses, whereas they differ from each other in the run-time address binding method.
5. The logical address is generated by the CPU while the program is running, whereas the
physical address is computed by the Memory Management Unit (MMU).
Comparison Chart:
• Address Space: Logical Address Space is the set of all logical addresses generated by the
CPU in reference to a program, whereas Physical Address Space is the set of all physical
addresses mapped to the corresponding logical addresses.
• Visibility: The user can view the logical address of a program but can never view the
physical address of the program.
• Access: The user uses the logical address to access the physical address; the physical
address can only be accessed indirectly.
• Editable: A logical address can change, whereas the physical address will not change.
Swapping
Swapping is a memory management scheme in which any process can be temporarily swapped
from main memory to secondary memory so that the main memory can be made available for other
processes. It is used to improve main memory utilization. In secondary memory, the place where
the swapped-out process is stored is called swap space. The purpose of swapping in an operating
system is to access the data present in the hard disk and bring it into RAM so that the application
programs can use it. The thing to remember is that swapping is used only when the data is not
present in RAM.
Although the process of swapping affects the performance of the system, it helps to run larger
processes and more than one process. This is the reason why swapping is also referred to as a
technique of memory compaction.
The concept of swapping is divided into two further concepts: swap-in and swap-out.
o Swap-out is a method of removing a process from RAM and adding it to the hard disk.
o Swap-in is a method of bringing a process back from the hard disk into the main
memory or RAM.
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to switch between multiple processes. Therefore, processes
do not have to wait very long before they are executed.
4. It improves the main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power during a period of substantial swapping activity, the
user may lose all information related to the program.
2. If the swapping algorithm is not good, it can increase the number of page faults and
decrease overall processing performance.
The Memory management Techniques can be classified into following main categories:
Contiguous Memory Allocation-
Techniques-
There are two popular techniques used for contiguous memory allocation-
1. Static Partitioning
2. Dynamic Partitioning
The earliest and one of the simplest techniques that can be used to load more than one process
into the main memory is fixed partitioning, a form of contiguous memory allocation.
In this technique, the main memory is divided into partitions of equal or different sizes. The
operating system always resides in the first partition while the other partitions can be used to store
user processes. The memory is assigned to the processes in a contiguous way.
Fixed partitioning suffers from the following drawbacks-
1. Internal Fragmentation
If the size of the process is less than the total size of the partition, then some part of the partition
gets wasted and remains unused. This wasted memory is called internal fragmentation. For
example, if a 4 MB partition is used to load only a 3 MB process, the remaining 1 MB is wasted.
2. External Fragmentation
The total unused space of various partitions cannot be used to load a process even though space
is available, because it is not contiguous. For example, the remaining 1 MB of each partition
cannot be combined to store a 4 MB process. Despite the fact that sufficient total space is
available, the process cannot be loaded.
3. Limitation on Process Size
If the process size is larger than the size of the largest partition, then that process cannot be
loaded into the memory. Therefore, a limitation is imposed on the process size: it cannot be
larger than the largest partition.
4. Low Degree of Multiprogramming
By degree of multiprogramming, we simply mean the maximum number of processes that can be
loaded into the memory at the same time. In fixed partitioning, the degree of multiprogramming
is fixed and low because the partition size cannot be varied according to the size of the processes.
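As a sketch of how internal fragmentation arises, the following hypothetical example loads each process into the first free fixed partition that fits (the partition and process sizes are assumptions, not taken from the notes):

```python
def internal_fragmentation(partitions, processes):
    """Assign each process to the first free partition it fits in and
    return the total internal fragmentation (wasted space inside the
    allocated partitions), in the same units as the inputs."""
    free = [True] * len(partitions)
    wasted = 0
    for proc in processes:
        for i, size in enumerate(partitions):
            if free[i] and proc <= size:
                free[i] = False
                wasted += size - proc   # unused space inside this partition
                break
    return wasted

# Three fixed 4 MB partitions loaded with three 3 MB processes:
# 1 MB is wasted inside each partition.
print(internal_fragmentation([4, 4, 4], [3, 3, 3]))  # 3 (MB)
```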
Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process
loading. The first partition is reserved for the operating system. The remaining space is divided into
parts. The size of each partition will be equal to the size of the process. The partition size varies
according to the need of the process so that the internal fragmentation can be avoided.
Advantages of Dynamic Partitioning over fixed partitioning
1. No Internal Fragmentation
Given that the partitions in dynamic partitioning are created according to the need of the
process, it is clear that there will not be any internal fragmentation, because there will not be
any unused space remaining in the partition.
2. No Limitation on the Size of the Process
In fixed partitioning, a process with a size greater than the size of the largest partition could not
be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, the
process size is not restricted, since the partition size is decided according to the process size.
3. Degree of Multiprogramming is Dynamic
Due to the absence of internal fragmentation, there is no unused space in any partition, hence
more processes can be loaded into the memory at the same time.
Disadvantages of Dynamic Partitioning
External Fragmentation
The absence of internal fragmentation does not mean that there will be no external fragmentation.
Let us consider three processes P1 (1 MB), P2 (3 MB), and P3 (1 MB) being loaded into
respective partitions of the main memory.
After some time, P1 and P3 complete and their assigned space is freed. Now there are two
unused partitions (1 MB and 1 MB) available in the main memory, but they cannot be used to
load a 2 MB process since they are not contiguous.
The rule says that the process must be contiguously present in the main memory to get executed.
We need to change this rule to avoid external fragmentation.
7
Complex Memory Allocation
In fixed partitioning, the list of partitions is made once and never changes, but in dynamic
partitioning, allocation and deallocation are very complex since the partition size varies every
time it is assigned to a new process. The OS has to keep track of all the partitions.
Because allocation and deallocation are done very frequently in dynamic memory allocation,
and the partition size changes each time, it is very difficult for the OS to manage everything.
Compaction
We have seen that dynamic partitioning suffers from external fragmentation, which can cause
serious problems. To avoid external fragmentation, we could change the rule that says a process
cannot be stored in different places in the memory. Alternatively, we can use compaction to
minimize the probability of external fragmentation. In compaction, all the free partitions are
made contiguous and all the loaded partitions are brought together.
By applying this technique, we can store the bigger processes in the memory. The free partitions
are merged which can now be allocated according to the needs of new processes. This technique
is also called defragmentation.
For example, a process P5 which could not be loaded into the memory due to the lack of
contiguous space can be loaded once the free partitions are made contiguous.
The efficiency of the system is decreased in the case of compaction due to the fact that all the free
spaces will be transferred from several places to a single place.
A huge amount of time is invested in this procedure, and the CPU remains idle for all this time.
Despite the fact that compaction avoids external fragmentation, it makes the system inefficient.
Let us consider that the OS needs 6 ns to copy 1 byte from one place to another.
1. Transferring 1 B needs 6 ns
2. Transferring 256 MB needs 256 × 2^20 × 6 × 10^-9 s ≈ 1.61 s
Hence, to some extent, this shows that larger memory transfers need a huge amount of time, on
the order of seconds.
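The arithmetic above can be checked directly; this snippet just evaluates the 256 MB figure from the example:

```python
ns_per_byte = 6e-9                      # 6 ns to copy one byte (from the example)
bytes_to_move = 256 * 2**20             # 256 MB
seconds = bytes_to_move * ns_per_byte   # total copying time during compaction
print(round(seconds, 2))                # 1.61
```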
Partitioning Algorithms
There are various algorithms implemented by the operating system in order to find a suitable
hole in the list of free partitions and allocate it to a process.
Popular algorithms used for allocating the partitions to the arriving processes are-
1. First Fit Algorithm
2. Best Fit Algorithm
3. Worst Fit Algorithm
The following strategies are used to select a hole from the set of available holes.
1. First Fit Allocation
According to this strategy, allocate the first hole or free partition that is big enough for the
process. The search can start either from the beginning of the set of holes or from the location
where the previous first-fit search ended.
Searching can be stopped as soon as we find a free hole that is large enough.
For example, if a process P1 of size 10 KB arrives, then the first hole that is big enough, say of
size 20 KB, is chosen and allocated to the process.
2. Best Fit Allocation
With this strategy, the smallest free partition/ hole that is big enough and meets the requirements
of the process is allocated to the process. This strategy searches the entire list of free
partitions/holes in order to find a hole whose size is either greater than or equal to the size of the
process.
For example, if a process P1 of size 10 KB arrives, then the smallest hole that meets the
requirement, of size 10 KB, is chosen and allocated to the process.
3. Worst Fit Allocation
With this strategy, the largest free partition/hole that meets the requirements of the process is
allocated to the process. It is done so that the portion left over is big enough to be useful. This
strategy is just the opposite of Best Fit.
This strategy searches the entire list of holes in order to find the largest hole and then allocate the
largest hole to process.
For example, if a process P1 of size 10 KB arrives, then the largest hole, say of size 80 KB, is
chosen and allocated to the process.
4. Next Fit Allocation
This strategy is a modified version of First Fit: memory is searched for empty spaces in the
same way as in the first-fit allocation scheme, but when called the next time, the search starts
from where it left off, not from the beginning.
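The three main strategies can be sketched as functions that pick a hole index from a list of hole sizes; the hole sizes below are hypothetical, chosen only to illustrate how the strategies differ:

```python
def first_fit(holes, size):
    """Return the index of the first hole that can hold `size`, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that can hold `size`, or None."""
    candidates = [i for i, hole in enumerate(holes) if hole >= size]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole that can hold `size`, or None."""
    candidates = [i for i, hole in enumerate(holes) if hole >= size]
    return max(candidates, key=lambda i: holes[i]) if candidates else None

holes = [20, 10, 40, 80]      # hole sizes in KB (hypothetical)
print(first_fit(holes, 10))   # 0 -> the 20 KB hole (first big enough)
print(best_fit(holes, 10))    # 1 -> the 10 KB hole (smallest big enough)
print(worst_fit(holes, 10))   # 3 -> the 80 KB hole (largest big enough)
```

Each function returns `None` when no hole is large enough, which corresponds to the case where the process cannot be allocated.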
Following steps are followed to translate logical address into physical address-
Step-01:
• The translation scheme uses two registers that are under the control of the operating system.
• During context switching, the values corresponding to the process being loaded are set in the
registers.
• Relocation Register stores the base address or starting address of the process in the main
memory.
• Limit Register stores the size or length of the process.
Step-02:
• CPU generates a logical address containing the address of the instruction that it wants to read.
Step-03:
• The logical address generated by the CPU is compared with the limit of the process.
• Now, two cases are possible-
Case-01: Generated Address >= Limit
• If the generated address is greater than or equal to the limit, the process is trying to access an
invalid address, so a trap is generated and the process is terminated.
Case-02: Generated Address < Limit
• If the generated address is less than the limit, the request is valid. The address is then added
to the value of the relocation register to obtain the physical address.
Diagram-
The following diagram illustrates the above steps of translating logical address into physical
address-
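The steps above can be sketched as a small function; the base and limit values in the usage line are assumed, not from the notes:

```python
def translate(logical_address, limit_register, relocation_register):
    """Translate a logical address using the limit and relocation registers.

    The logical address is first compared with the limit; an address at
    or beyond the limit traps, otherwise the relocation (base) value is
    added to obtain the physical address."""
    if logical_address >= limit_register:
        raise ValueError("Trap: logical address exceeds process limit")
    return relocation_register + logical_address

# A process of size 3000 loaded at base address 14000 (assumed values)
print(translate(346, 3000, 14000))   # 14346
```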
Advantages-
The advantages of static partitioning are-
• It is simple to implement.
• The overhead on the operating system is low, since the partitions are decided once and
never change.
Disadvantages-
The disadvantages of static partitioning are-
• It suffers from both internal fragmentation and external fragmentation.
• It utilizes memory inefficiently.
• The degree of multiprogramming is limited, being at most equal to the number of partitions.
• There is a limitation on the size of process since processes with size greater than the size of
largest partition can’t be stored and executed.
Techniques-
There are two popular techniques used for non-contiguous memory allocation-
1. Paging
2. Segmentation
Paging-
• Each process is divided into parts where size of each part is same as page size.
• The size of the last part may be less than the page size.
• The pages of process are stored in the frames of main memory depending upon their
availability.
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage
into the main memory in the form of pages. In the paging method, the main memory is divided
into small fixed-size blocks of physical memory called frames. The size of a frame is kept the
same as that of a page to have maximum utilization of the main memory and to avoid external
fragmentation. Paging is used for faster access to data, and it is a logical concept.
Example of Paging in OS
For example, suppose the main memory size is 16 KB and the frame size is 1 KB. The main
memory will be divided into a collection of 16 frames of 1 KB each.
There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each. All the
processes are divided into pages of 1 KB each so that the operating system can store one page
in one frame.
At the beginning, all the frames are empty, so all the pages of the processes are stored in a
contiguous way.
Suppose that A2 and A4 move to the waiting state after some time. Then eight frames become
empty, and other pages can be loaded into those empty blocks. A process A5 of size 8 pages
(8 KB) is waiting in the ready queue.
In this example, you can see that there are eight non-contiguous frames available in the
memory, and paging offers the flexibility of storing the process at different places. This allows
us to load the pages of process A5 in place of A2 and A4.
Advantages of Paging
Here are the advantages of using the paging method:
• It is an easy-to-use memory management mechanism.
• There is no external fragmentation.
• Swapping is easy between equal-sized pages and page frames.
Disadvantages of Paging
Here are the drawbacks/cons of paging:
• It may cause internal fragmentation.
• Page tables consume additional memory.
• Multi-level paging may lead to memory reference overhead.
Following steps are followed to translate logical address into physical address-
Step-01:
• CPU generates a logical address consisting of two parts: Page Number and Page Offset.
• Page Number specifies the specific page of the process from which the CPU wants to read
the data.
• Page Offset specifies the specific word on that page that the CPU wants to read.
Step-02:
• For the page number generated by the CPU, the corresponding entry is located in the page
table. This entry gives the frame number of the frame in which that page is stored.
Step-03:
• The frame number combined with the page offset forms the required physical address.
• Frame number specifies the specific frame where the required page is stored.
• Page Offset specifies the specific word that has to be read from that page.
Diagram-
The following diagram illustrates the above steps of translating logical address into physical
address-
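The steps above can be sketched as follows; the page table contents and the 1 KB page size are hypothetical:

```python
def paging_translate(logical_address, page_table, page_size):
    """Split a logical address into page number and offset, then map the
    page number to a frame number through the page table."""
    page_number = logical_address // page_size
    page_offset = logical_address % page_size
    frame_number = page_table[page_number]     # page table lookup
    return frame_number * page_size + page_offset

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2
page_table = {0: 5, 1: 2}
print(paging_translate(1025, page_table, 1024))  # page 1, offset 1 -> 2*1024+1 = 2049
```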
Page Table-
Page table is a data structure that maps the page number referenced by the CPU to the frame
number where that page is stored.
Characteristics-
• Page table is stored in the main memory.
• The number of entries in a page table equals the number of pages into which the process is
divided.
• Each entry stores the frame number corresponding to a page.
Working-
• Page Table Base Register (PTBR) provides the base address of the page table.
• The base address of the page table is added with the page number referenced by the CPU.
• It gives the entry of the page table containing the frame number where the referenced page is
stored.
Segmentation-
Segmentation is a non-contiguous memory allocation technique in which a process is divided
into variable-sized parts called segments.
Characteristics-
• Segmentation is a variable size partitioning scheme.
• In segmentation, secondary memory and main memory are divided into partitions of unequal
size.
• The size of the partitions depends on the length of the modules.
• The partitions of secondary memory are called as segments.
Example-
Segment Table-
• Segment table is a table that stores the information about each segment of the process.
• It has two columns.
• First column stores the size or length of the segment.
• Second column stores the base address or starting address of the segment in the main memory.
• Segment table is stored as a separate segment in the main memory.
• Segment table base register (STBR) stores the base address of the segment table.
For the above illustration, consider the following segment table-
Here,
• Limit indicates the length or size of the segment.
• Base indicates the base address or starting address of the segment in the main memory.
In accordance with the above segment table, the segments are stored in the main memory as-
• CPU always generates a logical address.
• A physical address is needed to access the main memory.
Following steps are followed to translate logical address into physical address-
Step-01:
• Segment Number specifies the specific segment of the process from which CPU wants to read
the data.
• Segment Offset specifies the specific word in the segment that CPU wants to read.
Step-02:
• For the generated segment number, corresponding entry is located in the segment table.
• Then, segment offset is compared with the limit (size) of the segment.
• If segment offset is found to be greater than or equal to the limit, a trap is generated.
• If segment offset is found to be smaller than the limit, then request is treated as a valid request.
23
• The segment offset must always lie in the range [0, limit-1].
• The segment offset is then added to the base address of the segment.
• The result obtained after the addition is the address of the memory location storing the
required word.
Diagram-
The following diagram illustrates the above steps of translating logical address into physical
address-
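The steps above can be sketched with a segment table of (limit, base) pairs; the table values are hypothetical:

```python
def segment_translate(segment_number, offset, segment_table):
    """Translate a (segment number, offset) pair to a physical address.

    segment_table maps a segment number to (limit, base). The offset is
    compared with the limit; an offset at or beyond the limit traps,
    otherwise the base address of the segment is added."""
    limit, base = segment_table[segment_number]
    if offset >= limit:
        raise ValueError("Trap: segment offset outside segment limit")
    return base + offset

# Hypothetical segment table: (limit, base) for each segment
table = {0: (1000, 1400), 1: (400, 6300)}
print(segment_translate(1, 53, table))   # 6300 + 53 = 6353
```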
Advantages-
The advantages of segmentation are-
• It divides the program into modules, which provides better visualization.
• The segment table consumes less space compared to the page table in paging.
• There is no internal fragmentation.
Disadvantages-
The disadvantages of segmentation are-
• It suffers from external fragmentation.
• There is an overhead of maintaining a segment table for each process.
Segmented Paging-
Pure segmentation is not very popular and is not used in many operating systems. However,
segmentation can be combined with paging to get the best features of both techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further
divided into fixed size pages.
Each page table contains the information about every page of the segment, and the segment
table contains the information about every segment. Each segment table entry points to a page
table, and every page table entry is mapped to one of the pages within the segment.
Translation of logical address to physical address
The CPU generates a logical address which is divided into two parts: segment number and
segment offset. The segment offset must be less than the segment limit. The offset is further
divided into a page number and a page offset. To locate the exact entry in the page table, the
page number is added to the page table base.
The actual frame number with the page offset is mapped to the main memory to get the desired
word in the page of the certain segment of the process.
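The two-level lookup can be sketched as follows; the segment table, page tables, and page size are all hypothetical:

```python
def seg_page_translate(seg, page, offset, segment_table, page_size):
    """Segmented-paging translation: segment table, then the segment's
    page table, then frame number * page_size + page offset.

    segment_table maps a segment number to (limit_in_pages, page_table);
    the page number is checked against the segment's limit before the
    page table lookup."""
    limit_in_pages, page_table = segment_table[seg]
    if page >= limit_in_pages:
        raise ValueError("Trap: page number outside segment limit")
    frame = page_table[page]
    return frame * page_size + offset

# Hypothetical segment 0 with two pages mapped to frames 7 and 3
table = {0: (2, {0: 7, 1: 3})}
print(seg_page_translate(0, 1, 10, table, 1024))   # 3*1024 + 10 = 3082
```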
Advantages of Segmented Paging
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. There is no external fragmentation.
5. It simplifies memory allocation.
Demand Paging
Each process has a multiple pages. But, it not enough to insert all pages of that process into primary
memory because, RAM size is limited as well. Therefore, while getting to execute all process then
loading the pages as per the requirement. It may be probability that any application does not require
all its pages for executing the applications.
How Does Demand Paging Work?
A demand paging system depends heavily on the page table implementation, because the page
table maps logical memory to physical memory. A bit in each page table entry indicates whether
a page is valid or invalid. All valid pages exist in primary memory, while invalid pages exist in
secondary memory.
Every process in virtual memory contains lots of pages, and in some cases it might not be
efficient to swap in all the pages of a process at once, because the program may need only
certain pages to run. For example, suppose there is a 500 MB application that may need as little
as 100 MB of its pages to be swapped in; in this case, there is no need to swap in all the pages
at once.
The demand paging system is similar to a paging system with swapping, where processes mainly
reside in secondary memory (usually the hard disk). Demand paging solves the above problem
by swapping in pages only on demand. This is also known as lazy swapping (a page is never
swapped into memory unless it is needed). A swapper that deals with the individual pages of a
process is referred to as a pager.
Demand Paging is a technique in which a page is usually brought into the main memory only
when it is needed or demanded by the CPU. Initially, only those pages are loaded that are
required by the process immediately. Those pages that are never accessed are thus never loaded
into the physical memory.
Valid-Invalid Bit
Some form of hardware support is used to distinguish between the pages that are in the memory
and the pages that are on the disk. Thus for this purpose Valid-Invalid scheme is used:
• With each page table entry, a valid-invalid bit is associated( where 1 indicates in the
memory and 0 indicates not in the memory)
• Initially, the valid-invalid bit is set to 0 for all table entries.
1. If the bit is set to "valid", then the associated page is both legal and is in memory.
2. If the bit is set to "invalid", then it indicates that the page is either not valid (not in the
logical address space of the process) or is valid but currently on the disk rather than in memory.
• For the pages that are brought into the memory, the page table is set as usual.
• But for the pages that are not currently in the memory, the page table is either simply
marked as invalid or it contains the address of the page on the disk.
During the translation of address, if the valid-invalid bit in the page table entry is 0 then it leads
to page fault.
The figure above indicates the page table when some pages are not in the main memory.
First of all, the components that are involved in the demand paging process are as follows:
• Main Memory
• CPU
• Secondary Memory
• Interrupt
• Physical Address space
• Logical Address space
• Operating System
• Page Table
1. If a page needed by a process in its active state is not available in the main memory, a
request is made to the CPU for that page, and for this purpose an interrupt is generated.
2. After that, the Operating system moves the process to the blocked state as an interrupt has
occurred.
3. Then after this, the Operating system searches the given page in the Logical address
space.
4. And Finally with the help of the page replacement algorithms, replacements are made in
the physical address space. Page tables are updated simultaneously.
5. After that, the CPU is informed about that update and then asked to go ahead with the
execution and the process gets back into its ready state.
When the process requires any of the pages that are not loaded into the memory, a page fault trap
is triggered and the following steps are followed,
1. The memory address which is requested by the process is first checked, to verify the
request made by the process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a free-
frame list, where the required page will be moved.
4. A new operation is scheduled to move the necessary page from the disk to the specified
memory location. ( This will usually block the process on an I/O wait, allowing some
other process to use the CPU in the meantime. )
5. When the I/O operation is complete, the process's page table is updated with the new
frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the beginning.
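The fault-handling sequence above can be sketched with a toy page table carrying a valid-invalid bit; the data structures here are assumptions chosen only to mirror the steps:

```python
def access_page(page, page_table, memory, disk, free_frames):
    """Sketch of page-fault handling with a valid-invalid bit.

    page_table[page] is (valid, frame); invalid pages still live in
    'disk'. On a fault, the page is copied from disk into a free frame,
    the entry is marked valid, and the access is restarted."""
    valid, frame = page_table[page]
    if valid:
        return memory[frame]              # page already resident: no fault
    frame = free_frames.pop()             # locate a free frame
    memory[frame] = disk[page]            # move the page from disk to memory
    page_table[page] = (True, frame)      # update the table, mark the bit valid
    return memory[frame]                  # restart the faulting access

disk = {0: "page0-data", 1: "page1-data"}
page_table = {0: (False, None), 1: (False, None)}
memory, free_frames = {}, [1, 0]
print(access_page(0, page_table, memory, disk, free_frames))  # faults, then "page0-data"
print(access_page(0, page_table, memory, disk, free_frames))  # hit: no fault this time
```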
Pure Demand Paging
In some cases when initially no pages are loaded into the memory, pages in such cases are only
loaded when are demanded by the process by generating page faults. It is then referred to
as Pure Demand Paging.
• In the case of pure demand paging, there is not even a single page that is loaded into the
memory initially. Thus pure demand paging causes the page fault.
• When the execution of the process starts with no pages in the memory, then the operating
system sets the instruction pointer to the first instruction of the process and that is on a
non-memory resident page and then in this case the process immediately faults for the
page.
• After this page is brought into the memory, the process continues its execution, faulting
as necessary until every page that it needs is in the memory.
• At that point, it can execute with no more faults.
• This scheme is referred to as Pure Demand Paging: means never bring a page into the
memory until it is required.
Page Fault-
• When a page referenced by the CPU is not found in the main memory, it is called a page
fault.
• When a page fault occurs, the required page has to be fetched from the secondary memory into
the main memory.
Page Replacement-
Page replacement is the process of swapping out an existing page from a frame of the main
memory and replacing it with the required page.
Page Replacement Algorithms-
Page replacement algorithms help to decide which page must be swapped out from the main
memory to create a room for the incoming page.
A good page replacement algorithm is one that minimizes the number of page faults.
FIFO Page Replacement Algorithm-
• As the name suggests, this algorithm works on the principle of "First In First Out".
• It replaces the oldest page, the one that has been present in the main memory for the
longest time.
• It is implemented by keeping track of all the pages in a queue.
Problem:
A system uses 3 page frames for storing process pages in main memory. It uses the First in First
out (FIFO) page replacement policy. Assume that all the page frames are initially empty. What is
the total number of page faults that will occur while processing the page reference string given
below-
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
Also calculate the hit ratio and miss ratio.
Solution-
From here,
Total number of page faults occurred = 6
Total number of page hits = 10 – 6 = 4
Thus, Hit ratio
= Total number of page hits / Total number of references
= 4 / 10
= 0.4 or 40%
Miss ratio
= Total number of page misses / Total number of references
= 6 / 10
= 0.6 or 60%
Alternatively,
Miss ratio
= 1 – Hit ratio
= 1 – 0.4
= 0.6 or 60%
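The FIFO fault count can be checked with a short simulation (a sketch, not the notes' own tabular working):

```python
from collections import deque

def fifo_page_faults(refs, num_frames):
    """Simulate FIFO page replacement and count page faults."""
    frames = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue              # hit: FIFO order does not change
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()      # evict the oldest page
        frames.append(page)
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
faults = fifo_page_faults(refs, 3)
print(faults, faults / len(refs))   # 6 faults, miss ratio 0.6
```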
LRU Page Replacement Algorithm-
• As the name suggests, this algorithm works on the principle of "Least Recently Used".
• It replaces the page that has not been referred to by the CPU for the longest time.
Problem:
A system uses 3 page frames for storing process pages in main memory. It uses the Least
Recently Used (LRU) page replacement policy. Assume that all the page frames are initially
empty. What is the total number of page faults that will occur while processing the page
reference string given below-
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
Also calculate the hit ratio and miss ratio.
Solution-
From here,
Total number of page faults occurred = 6
In the same manner as above-
• Hit ratio = 0.4 or 40%
• Miss ratio = 0.6 or 60%
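The LRU count can likewise be checked by simulation (a sketch using a simple recency list):

```python
def lru_page_faults(refs, num_frames):
    """Simulate LRU page replacement and count page faults."""
    frames = []                   # least recently used page at index 0
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)   # hit: recency is refreshed below
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)       # page is now the most recently used
    return faults

print(lru_page_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))   # 6
```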
Optimal Page Replacement Algorithm-
• This algorithm replaces the page that will not be referred to by the CPU for the longest time
in the future.
• It is practically impossible to implement this algorithm, because the pages that will not be
used for the longest time in the future cannot be predicted.
• However, it is the best algorithm known and gives the least number of page faults.
• Hence, it is used as a performance measure criterion for other algorithms.
Problem:
A system uses 3 page frames for storing process pages in main memory. It uses the Optimal page
replacement policy. Assume that all the page frames are initially empty. What is the total number
of page faults that will occur while processing the page reference string given below-
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
Also calculate the hit ratio and miss ratio.
Solution-
From here,
Total number of page faults occurred = 5
In the similar manner as above-
• Hit ratio = 0.5 or 50%
• Miss ratio = 0.5 or 50%
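The Optimal count can also be checked by simulation; since the whole reference string is known here, the "future" is simply the rest of the list:

```python
def optimal_page_faults(refs, num_frames):
    """Simulate Optimal page replacement and count page faults."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                       # hit
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = refs[i + 1:]
        # Evict the page whose next use is farthest away; pages never
        # used again get a distance past the end of the reference string.
        def next_use(p):
            return future.index(p) if p in future else len(future)
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))   # 5
```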
Belady’s Anomaly-
Belady’s Anomaly is the phenomenon in which the number of page faults increases when the
number of frames in the main memory is increased.
• An algorithm suffers from Belady’s Anomaly if and only if it does not follow the stack
property.
• Algorithms that follow the stack property are called stack-based algorithms.
• Stack-based algorithms do not suffer from Belady’s Anomaly, because these algorithms
assign a replacement priority to a page that is independent of the number of frames in the
main memory.
Examples-
1. LRU Page Replacement Algorithm
2. Optimal Page Replacement Algorithm
Hence, they do not suffer from Belady’s Anomaly.
Stack Property-
Consider-
• Initially, we had ‘m’ number of frames in the main memory.
• Now, the number of frames in the main memory is increased to ‘m+1’.
Case-02: When frame size = 4
Number of page faults = 8
Number of page faults = 10
NOTE-
In the above illustration,
• FIFO Page Replacement Algorithm suffers from Belady’s Anomaly.
• It would be wrong to say that “the FIFO Page Replacement Algorithm always suffers from
Belady’s Anomaly”.
Random Page Replacement Algorithm-