
UNIT-4 8 HOURS

Main Memory: Background, Contiguous Memory Allocation, Paging, Structure of the Page
Table, Segmentation.
Virtual Memory: Background, Demand Paging, Page Replacement, Allocation of Frames,
Introduction to Thrashing.



MAIN MEMORY

MEMORY HARDWARE:
Memory consists of a large array of bytes, each with its own address. The CPU fetches instructions from memory according
to the value of the program counter. These instructions may cause additional loading from and storing to specific memory
addresses.
A typical instruction-execution cycle first fetches an instruction from memory. The instruction is then decoded
and may cause operands to be fetched from memory. After the instruction has been executed on the operands, results may
be stored back in memory.
Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access
directly.
Registers that are built into the CPU are generally accessible within one cycle of the CPU clock. A memory access may take
many cycles of the CPU clock. In such cases, the processor normally needs to stall, since it does not have the data required to
complete the instruction.
The remedy is to add fast memory between the CPU and main memory, typically on the CPU chip for fast access, called a cache.



MEMORY PROTECTION:
• For proper system operation we must protect the operating system from access by user processes.
• Each process has a separate memory space. Separate per-process memory space protects the processes from each other.
• The hardware protection of memory is provided by two registers:
 Base Register
 Limit Register
• The base register holds the smallest legal physical memory address, called the starting address of the process.
• The limit register specifies the size of the range of the process.
• If the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (up to, but not including, base + limit = 420940).



• Protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with these registers.
• Any attempt by a program executing in user mode to access operating-system memory or other users' memory results in a trap to the operating system, which treats it as an addressing error.
• The base and limit registers can be loaded only by the operating system into the CPU hardware (kernel mode).
• This scheme prevents a user program from modifying the code or data structures of either the operating system or other users.
• The address generated by the CPU for a process should lie between the base address of the process and base + limit of the process; otherwise, the hardware sends a trap to the OS.
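This check can be sketched in C (a minimal illustrative model using the register values from the example above; real hardware performs the comparison on every user-mode access):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical register values from the example above. */
    static const unsigned base  = 300040;   /* smallest legal address  */
    static const unsigned limit = 120900;   /* size of the legal range */

    /* Returns true if a CPU-generated address is legal for this process;
     * an illegal address would cause a trap to the operating system. */
    static bool address_is_legal(unsigned addr) {
        return addr >= base && addr < base + limit;
    }

    int main(void) {
        printf("%d\n", address_is_legal(300040));  /* 1: first legal address   */
        printf("%d\n", address_is_legal(420939));  /* 1: last legal address    */
        printf("%d\n", address_is_legal(420940));  /* 0: base + limit, illegal */
        return 0;
    }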



ADDRESS BINDING:
• Address binding is the process of mapping the program's logical or virtual addresses to
corresponding physical or main memory addresses.
• Addresses in the source program are generally symbolic.
• A compiler typically binds these symbolic addresses to relocatable addresses.
• The linkage editor or loader in turn binds the relocatable addresses to absolute addresses.
• Each binding is a mapping from one address space to another.

The binding of instructions and data to memory addresses can be done in three ways.
1) Compile time: If you know at compile time where the process will reside in memory, then
absolute code can be generated.
2) Load time: If it is not known at compile time where the process will reside in memory,
then the compiler must generate relocatable code.
3) Execution time: If the process can be moved during its execution from one memory
segment to another, then binding must be delayed until run time.



LOGICAL VERSUS PHYSICAL ADDRESS SPACE:
• Logical address: An address generated by the CPU is commonly referred to as a logical address, which is also called a virtual address.
• The set of all logical addresses generated by a program is a logical address space.
• Physical address: The address seen by the memory unit, that is, the one loaded into the memory-address register of the memory, is commonly referred to as a physical address.
• The set of all physical addresses corresponding to these logical
addresses is a physical address space.
• The run-time mapping from virtual to physical addresses is done by a
device called the memory-management unit (MMU).
• The base register is also called the relocation register.
• The value in the relocation register is added to every address
generated by a user process at the time the address is sent to
memory.
• For example, if the base is at 14000, then an attempt by the user to
address location 0 is dynamically relocated to location 14000; an
access to location 346 is mapped to location 14346



DYNAMIC LOADING, DYNAMIC LINKING AND SHARED LIBRARIES

DYNAMIC LOADING:
• Dynamic loading is the process of loading a routine only when it is called or needed during runtime.
• Initially all routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed.
• When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If it has not, the relocatable linking loader is called to load the desired routine into memory.
Advantages of dynamic loading:
• A routine is loaded only when it is needed.
• This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.

DYNAMIC LINKING AND SHARED LIBRARIES:
• Dynamically linked libraries are system libraries that are linked to user programs when the programs are in execution.
• In dynamic linking, the linking of system libraries is postponed until execution time.
• Static linking combines the system libraries with the user program at the time of compilation.
• Dynamic linking saves both the disk space and the main memory space.
• The libraries can be replaced by a new version, and all the programs that reference the library will use the new version. Libraries shared in this way through dynamic linking are called shared libraries.



MEMORY ALLOCATION
Memory-management techniques are the basic techniques used for managing memory in an operating system. They are broadly classified into two categories: contiguous and non-contiguous allocation.

CONTIGUOUS MEMORY ALLOCATION
The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. Multiprogramming requires several user processes to reside in memory at the same time, so the OS needs to decide how to allocate available memory to the processes in the input queue waiting to be brought into memory.
In contiguous memory allocation, each process is contained in a single section of memory that is contiguous to the section containing the next process. With this memory-management technique, whenever a user process requests memory, a single section of a contiguous memory block is given to that process according to its requirement.

[Figure: a memory block with the OS partition followed by contiguously allocated processes P0 (5 KB), P1 (5 KB), P2 (3 KB), and P3 (16 KB)]
MEMORY PROTECTION:
We can prevent a process from accessing the memory of another process if we have a system with a relocation register together with a limit register.
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses.
The MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch.
Every address generated by the CPU is checked against these registers, so we can protect both the operating system and the other users' programs.

FIXED PARTITION SCHEME: One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process.
Thus, the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.
Example cases to consider for a fixed-partition memory:
Case 1: P with 4 MB
Case 2: P with 2 MB
Case 3: P with 7 MB
Case 4: P with 18 MB
Case 5: P1 = 7 MB, P2 = 7 MB and P3 = 14 MB



Advantages: A fixed-size partition system has the following benefits:
• This strategy is easy to employ because each block is the same size. Now all that is left to do is allocate
processes to the fixed memory blocks that have been divided up.
• It is simple to keep track of how many memory blocks are still available, which determines how many
further processes can be allocated memory.
• This approach can be used in a system that requires multiprogramming since numerous processes can
be maintained in memory at once.
Disadvantages: Although the fixed-size partitioning strategy offers numerous benefits, there are a few
drawbacks as well:
• We won't be able to allocate space to a process whose size exceeds the block since the size of the
blocks is fixed.
• The amount of multiprogramming is determined by block size, and only as many processes can run
simultaneously in memory as there are available blocks.
• If the block's size is larger than that of the process, the process must still be assigned the whole block; this leaves a lot of unused space inside the block, space that could otherwise have served another process.



VARIABLE PARTITION SCHEME: In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied.
Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
When a process is allocated space, it is loaded into memory, and it can then compete for CPU time. When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue.
The OS will have a list of available block sizes and an input queue. The operating system can order the input queue according to a scheduling algorithm.
Memory is allocated to processes until, finally, the memory requirements of the next process cannot be satisfied; that is, no available block of memory (or hole) is large enough to hold that process.
When a process arrives and needs memory, the system searches the set for a hole that is large enough for this process. If the hole is too large, it is split into two parts: one part is allocated to the arriving process; the other is returned to the set of holes.
When a process terminates, it releases its block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole.
At that point, the system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.



Advantages: A variable-size partitioning system has the following benefits:
• There is no internal fragmentation because the processes are given blocks of space according to
their needs. Therefore, this technique does not waste RAM.
• The number of processes that can run simultaneously is not fixed: it depends on how many processes are in memory at once and how much space they take up, so it varies dynamically with the situation.
• Even a large process can be given space because there are no blocks with set sizes.

Disadvantages: Despite the variable-size partition scheme's many benefits, there are a few
drawbacks as well:
• Because allocation is dynamic, a variable-size partition scheme is challenging to implement.
• It is challenging to keep track of the processes and the available memory space.



FRAGMENTATION: Memory space in the system constantly goes through the loading and releasing of processes and their resources, so the total memory space gets broken into many small pieces. This creates small unutilized fragments of memory, so small that normal processes cannot fit into them, and those memory spaces are not utilized at all. This is called memory fragmentation.

INTERNAL FRAGMENTATION: In fixed-size partitions, each process is allocated a partition, irrespective of its size. The allocated memory for a process may be slightly larger than the requested memory; this memory that is wasted internal to a partition is called internal fragmentation.

Selecting a hole for an arriving process is a particular instance of the general dynamic storage-allocation problem. There are many solutions to this problem:
• First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
• Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
• Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
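A minimal C sketch of the three placement strategies over a list of free-hole sizes (the hole sizes in KB and the 212 KB request are illustrative numbers, not from the slides):

    #include <stdio.h>

    /* Each function returns the index of the chosen hole, or -1 if no
     * hole is large enough. A real allocator would keep a linked list
     * of holes with base addresses, not a plain array of sizes. */
    static int first_fit(const int holes[], int n, int request) {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request) return i;   /* first big-enough hole */
        return -1;
    }

    static int best_fit(const int holes[], int n, int request) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
                best = i;                        /* smallest big-enough hole */
        return best;
    }

    static int worst_fit(const int holes[], int n, int request) {
        int worst = -1;
        for (int i = 0; i < n; i++)
            if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
                worst = i;                       /* largest available hole */
        return worst;
    }

    int main(void) {
        int holes[] = {100, 500, 200, 300, 600};
        printf("first: %d\n", first_fit(holes, 5, 212));  /* hole 1 (500 KB) */
        printf("best:  %d\n", best_fit(holes, 5, 212));   /* hole 3 (300 KB) */
        printf("worst: %d\n", worst_fit(holes, 5, 212));  /* hole 4 (600 KB) */
        return 0;
    }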



EXTERNAL FRAGMENTATION: As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous, so the memory cannot be allocated to the process.
Both the first-fit and best-fit strategies suffer from external fragmentation.

SOLUTIONS:
COMPACTION: One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block.
NON-CONTIGUOUS MEMORY ALLOCATION: Another solution to the external-fragmentation problem is to permit the logical address space of the processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever memory is available. Two complementary techniques achieve this solution:
 segmentation
 paging



Contiguous vs. non-contiguous memory allocation:
1. Contiguous memory allocation allocates consecutive blocks of memory to a file/process; non-contiguous memory allocation allocates separate blocks.
2. Contiguous allocation is faster in execution; non-contiguous allocation is slower.
3. Contiguous allocation is easier for the OS to control; non-contiguous allocation is more difficult.
4. Contiguous allocation has minimal overhead, since few address translations are needed while executing a process; non-contiguous allocation has more overhead from more address translations.
5. Contiguous allocation wastes memory; non-contiguous allocation has no memory wastage.
6. Contiguous allocation is of two types: fixed (static) partitioning and dynamic partitioning. Non-contiguous allocation is of five types: paging, multilevel paging, inverted paging, segmentation, and segmented paging.
7. With contiguous allocation the degree of multiprogramming is fixed (by the fixed partitions); with non-contiguous allocation it is not fixed.
VINUTHA M S,Dr AIT, DEPT OF CSE 15


PAGING
Paging involves breaking physical memory into fixed-size blocks called frames and breaking logical memory (belonging to the process) into blocks of the same size called pages.
Paging avoids external fragmentation and the need for compaction. When a process is to be executed, its pages are loaded into any available memory frames.

[Figure: physical memory as a collection of frames; logical memory as a collection of pages]



PAGE TABLE
• The data structure used by the virtual memory system in the operating system of a computer to store the mapping between physical and logical addresses is commonly known as the page table.
• As noted earlier, the logical address generated by the CPU is translated into the physical address with the help of the page table. The page table thus provides the corresponding frame number (base address of the frame) where that page is stored in main memory.

Advantages:
1.Efficient use of memory: Virtual memory allows the operating system to allocate only the necessary
amount of physical memory needed by a process, which reduces memory waste and increases overall
system performance.

2.Protection: Page Tables allow the operating system to control access to memory and protect sensitive
data from unauthorized access. Each PTE can be configured with access permissions, such as read-only or
no access, to prevent accidental or malicious modification of memory.

3.Flexibility: Virtual memory allows multiple processes to share the same physical memory space, which
increases system flexibility and allows for better resource utilization.

4.Address translation: Page Tables provide the mechanism for translating virtual addresses used by a
process into physical addresses in memory, which allows for efficient use of memory and simplifies memory
management.

5.Hierarchical design: Some systems use hierarchical page tables, which provide a more efficient method
for managing large virtual address spaces. Hierarchical page tables divide the Page Table into smaller tables,
each pointing to a larger table, which allows for faster access to Page Table Entries and reduces the overall
size of the Page Table.
PAGING HARDWARE:
Every address generated by the CPU is divided into two parts:
 a page number (p)
 a page offset (d)
The following outlines the steps taken by the MMU to translate a
logical address generated by the CPU to a physical address:
1. Extract the page number p and use it as an index into the page table.
2. Extract the corresponding frame number f from the page table.
3. Replace the page number p in the logical address with the frame
number f.
The page size is defined by the hardware. The size of a page is a power
of 2, varying between 512 bytes and 1 GB per page.
If the size of the logical address space is 2^m, and the page size is 2^n bytes, then the high-order m−n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
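A small C sketch of this split, assuming for illustration a 32-bit logical address (m = 32) and a 4 KB page size (n = 12):

    #include <stdio.h>

    #define OFFSET_BITS 12                      /* n: page size = 2^12 bytes */
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    int main(void) {
        unsigned logical = 0x12345678;
        unsigned page    = logical >> OFFSET_BITS;     /* high-order m-n bits */
        unsigned offset  = logical & (PAGE_SIZE - 1);  /* low-order n bits    */
        printf("page = 0x%x, offset = 0x%x\n", page, offset);
        /* Given frame f from the page table, the physical address would
         * be (f << OFFSET_BITS) | offset. */
        return 0;
    }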



PAGING MODEL EXAMPLE
[Figure: logical memory pages P0-P3 mapped through the page table into physical frames F0-F7; each page holds 4 bytes of data]
• Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the programmer's view of memory can be mapped into physical memory.
• PHYSICAL ADDRESS = (FRAME × PAGE SIZE) + OFFSET
• Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 [= (5 × 4) + 0].
• Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
• Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 [= (6 × 4) + 0].
• Logical address 13 is page 3, offset 1; page 3 is in frame 2, so logical address 13 maps to physical address 9 [= (2 × 4) + 1].



FREE FRAME LIST
When a process arrives in the system to be executed, its size, expressed in pages,
is examined. Each page of the process needs one frame. Thus, if the process
requires n pages, at least n frames must be available in memory.
If n frames are available, they are allocated to this arriving process. The first
page of the process is loaded into one of the allocated frames, and the frame
number is put in the page table for this process. The next page is loaded into
another frame, its frame number is put into the page table, and so on.
An important aspect of paging is the clear separation between the
programmer’s view of memory and the actual physical memory. The
programmer views memory as one single space, containing only this one
program. In fact, the user program is scattered throughout physical memory,
which also holds other programs.
The difference between the programmer’s view of memory and the actual
physical memory is reconciled by the address-translation hardware. The logical
addresses are translated into physical addresses. This mapping is hidden from
the programmer and is controlled by the operating system.
Since the operating system is managing physical memory, it must be aware of
the allocation details of physical memory—which frames are allocated, which
frames are available, how many total frames there are, and so on. This
information is generally kept in a single, system-wide data structure called a
frame table.



HARDWARE SUPPORT
The hardware implementation of the page table can be done in several ways.
• Case 1: If the page table is reasonably small, it is implemented as a set of dedicated registers.
• Case 2: If the page table is large, it is kept in main memory and a page-table base register (PTBR) is used to point to the page table.
When the page table is kept in main memory then two memory accesses are
required to access a byte. One for accessing the page table entry, another one for
accessing the byte.
Thus the overhead of accessing the main memory increases. The standard solution to
this problem is to use a special, small, fast lookup hardware cache called a translation
look-aside buffer (TLB).
• Each entry in the TLB consists of two parts: a key (or tag) and a value. The TLB
contains only a few of the page-table entries. When a logical address is generated by
the CPU, its page number is presented to the TLB.
If the page number is found (TLB hit), its frame number is immediately available and is used to access memory.
If the page number is not in the TLB (TLB miss), a memory reference to the page table must be made. When the frame number is obtained, we can use it to access memory.
The percentage of times that the page number of interest is found in the TLB is called the hit ratio.
The access time of a byte is said to be effective when the TLB hit ratio is high. The effective access time is given by
Effective access time = TLB hit ratio × memory access time + TLB miss ratio × (2 × memory access time)
For example, the formula can be evaluated for an 80-percent and for a 99-percent hit ratio, as in the sketch below.
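Evaluating the formula for the two hit ratios mentioned above, with an assumed memory-access time of 100 ns (an illustrative figure, not from the slides):

    #include <stdio.h>

    /* Effective access time per the formula above: a hit costs one memory
     * access, a miss costs two (page-table entry + byte). The TLB lookup
     * time itself is ignored, as in the slide's formula. */
    static double eat(double hit_ratio, double mem_ns) {
        return hit_ratio * mem_ns + (1.0 - hit_ratio) * (2.0 * mem_ns);
    }

    int main(void) {
        printf("80%% hit ratio: %.0f ns\n", eat(0.80, 100.0));  /* 120 ns */
        printf("99%% hit ratio: %.0f ns\n", eat(0.99, 100.0));  /* 101 ns */
        return 0;
    }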
PROTECTION
Memory protection in a paged environment is accomplished
by protection bits associated with each frame.
One bit can define a page to be read–write or read-only.
When the physical address is being computed, the
protection bits can be checked to verify that no writes are
being made to a read-only page
One additional bit is generally attached to each entry in the page table: a valid–invalid bit.
 When this bit is set to valid, the associated page is in the process's logical address space and is thus a legal page.
 When the bit is set to invalid, the page is not in the process's logical address space.
The page-table length register (PTLR) is used to indicate the size
of the page table. This value is checked against every logical
address to verify that the address is in the valid range for
the process



SHARED PAGES
• An advantage of paging is the possibility of sharing common code.
• If the code is reentrant code (or pure code), it can be shared.
• Reentrant code is non-self-modifying code: it never changes during execution. Thus, two or more processes can execute the same code at the same time.
• EXAMPLE: Consider three processes that share an editor consisting of three pages. Each process has its own data page. Only one copy of the editor need be kept in physical memory.
• Each user’s page table maps onto the same physical copy
of the editor, but data pages are mapped onto different
frames.



STRUCTURE OF PAGE TABLE
The structure of the page table includes
 Hierarchical paging
 Hashed page table
 Inverted page table.
HIERARCHICAL PAGING:
• Most modern computer systems support a large logical address space (2^32 to 2^64). In such an environment, the page table itself becomes excessively large.
• For example, consider a system with a 32-bit logical address space. If the page size in such a system is 4 KB (2^12), then a page table may consist of over 1 million entries (2^20 = 2^32 / 2^12).
• Assuming that each entry consists of 4 bytes, each process may need up to 4 MB (4 × 1 million bytes) of physical address space for the page table alone.
• A logical address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits:
 Page number (20 bits) | Page offset (12 bits)
• The page number is further divided into a 10-bit page number and a 10-bit page offset.
• Here p1 is an index into the outer page table and p2 is the displacement within the page of the inner page table.
• Address translation works from the outer page table inward, so this scheme is also known as a forward-mapped page table.
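A small C sketch of the two-level decomposition, using the 10/10/12 split described above (the sample address is arbitrary):

    #include <stdio.h>

    int main(void) {
        unsigned addr = 0x12345678;
        unsigned p1 = (addr >> 22) & 0x3FF;  /* index into the outer page table   */
        unsigned p2 = (addr >> 12) & 0x3FF;  /* index within the inner page table */
        unsigned d  = addr & 0xFFF;          /* offset within the page            */
        printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
        return 0;
    }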



HASHED PAGE TABLES
A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being the virtual page number.
Each entry in the hash table contains a linked list of elements that hash to the same location.
Each element consists of three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element in the linked list.
The virtual page number in the virtual address is hashed into the hash table, and the virtual page number is compared with field 1 in the first element in the linked list.
If there is a match, the corresponding page frame (field 2) is used to form the desired physical address.
If there is no match, subsequent entries in the linked list are searched for a matching virtual page number.
A variation of this scheme that is useful for 64-bit address spaces uses clustered page tables, which are similar to hashed page tables except that each entry in the hash table refers to several pages (such as 16) rather than a single page. Therefore, a single page-table entry can store the mappings for multiple physical-page frames. Clustered page tables are particularly useful for sparse address spaces, where memory references are noncontiguous and scattered throughout the address space.
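A minimal hashed-page-table sketch in C with the three fields described above; the table size, hash function, and names are illustrative choices, not a real MMU interface:

    #include <stdio.h>
    #include <stdlib.h>

    #define TABLE_SIZE 1024

    struct entry {
        unsigned long vpn;     /* field 1: virtual page number (the key)  */
        unsigned long frame;   /* field 2: value of the mapped page frame */
        struct entry *next;    /* field 3: next element in the chain      */
    };

    static struct entry *table[TABLE_SIZE];

    static unsigned hash_vpn(unsigned long vpn) { return vpn % TABLE_SIZE; }

    static void insert(unsigned long vpn, unsigned long frame) {
        struct entry *e = malloc(sizeof *e);
        e->vpn = vpn;
        e->frame = frame;
        e->next = table[hash_vpn(vpn)];      /* push onto the bucket's chain */
        table[hash_vpn(vpn)] = e;
    }

    /* Walk the chain comparing field 1; return the frame, or -1 on a miss. */
    static long lookup(unsigned long vpn) {
        for (struct entry *e = table[hash_vpn(vpn)]; e; e = e->next)
            if (e->vpn == vpn) return (long)e->frame;
        return -1;
    }

    int main(void) {
        insert(0xABCDE, 42);
        printf("frame = %ld\n", lookup(0xABCDE));  /* 42 */
        return 0;
    }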



INVERTED PAGE TABLES

• The inverted page table is a global page table maintained by the operating system for all processes. In an inverted page table, the number of entries is equal to the number of frames in main memory. It can be used to overcome the drawbacks of the per-process page table.
• In a per-process page table, space is always reserved for a page regardless of whether it is present in main memory; this is simply a waste of memory if the page is not present.
• We can save this wastage by inverting the page table: we save the details only for the pages that are present in main memory. Frames are the indices, and the information saved inside each entry is the process ID and page number.



INVERTED PAGE TABLES
Conventionally, each process has an associated page table with one entry for each page that the process is using. The table is sorted by virtual address, so the operating system can calculate where in the table the associated physical-address entry is located and use that value directly. One of the drawbacks of this method is that each page table may consist of millions of entries.

To solve this problem, we can use an inverted page table. An inverted page table has one entry for each frame of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page. Thus, only one page table is in the system, and it has only one entry for each frame of physical memory.
Each logical address in the system consists of a triple:
 <process-id, page-number, offset>
Each inverted page-table entry is a pair <process-id, page-number>.
When a memory reference occurs, the part of the virtual address consisting of <process-id, page-number> is presented to the memory subsystem. The inverted page table is then searched for a match. If a match is found, say at entry i, then the physical address <i, offset> is generated.
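A sketch of that search in C; the table size and the linear scan are illustrative (real implementations usually add a hash table to limit the search):

    #include <stdio.h>

    #define NFRAMES 8

    /* One entry per physical frame: the <process-id, page-number> pair
     * of the page currently stored in that frame. */
    struct ipt_entry { int pid; int page; };
    static struct ipt_entry ipt[NFRAMES];

    /* Returns frame number i such that ipt[i] matches <pid, page>, or -1. */
    static int ipt_search(int pid, int page) {
        for (int i = 0; i < NFRAMES; i++)
            if (ipt[i].pid == pid && ipt[i].page == page)
                return i;
        return -1;   /* no match: the page is not in physical memory */
    }

    int main(void) {
        ipt[3] = (struct ipt_entry){ .pid = 7, .page = 2 };
        int offset = 100;
        int i = ipt_search(7, 2);
        if (i >= 0)
            printf("physical address = <%d, %d>\n", i, offset);  /* <3, 100> */
        return 0;
    }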



SEGMENTATION
• Segmentation is a memory-management scheme that supports the programmer's view of memory. A logical address space is a collection of segments. Each segment has a name and a length.
• The logical addresses specify both the segment name and the offset within the segment. Each address is specified by two quantities: a segment name and an offset.
• A compiler might create separate segments for the following:
1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library

SEGMENTATION HARDWARE:
The programmer can refer to objects in the program by a two-dimensional address (segment number and offset); the actual physical memory is a one-dimensional sequence of bytes.
The two-dimensional user-defined addresses must therefore be mapped into one-dimensional physical addresses.
The mapping of logical address to physical address is done by a table called the segment table. Each entry in the segment table has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory; the segment limit specifies the length of the segment.
VINUTHA M S,Dr AIT, DEPT OF CSE 28


A logical address consists of two parts: a segment number,
s, and an offset into that segment, d.
• The segment number is used as an index to the segment
table. The offset d of the logical address must be
between 0 and the segment limit.
• If it is not between 0 and the limit, the hardware traps to the operating system (logical addressing attempt beyond the end of the segment).
• When an offset is legal, it is added to the segment base to
produce the address in physical memory of the desired
byte.
• The segment table is an array of base–limit register pairs.
• Segmentation can be combined with paging.
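A small C sketch of the translation; the (base, limit) pairs are illustrative values, and s is assumed to be a valid index into the segment table:

    #include <stdio.h>

    struct seg { unsigned base, limit; };

    static struct seg seg_table[] = {
        {1400, 1000},   /* segment 0: bytes 1400..2399 */
        {6300,  400},   /* segment 1: bytes 6300..6699 */
    };

    /* Returns the physical address, or -1 to stand in for the trap. */
    static long translate(unsigned s, unsigned d) {
        if (d >= seg_table[s].limit)
            return -1;                 /* trap: offset beyond end of segment */
        return (long)seg_table[s].base + d;
    }

    int main(void) {
        printf("%ld\n", translate(1, 53));   /* 6353 */
        printf("%ld\n", translate(1, 500));  /* -1: 500 >= limit of 400 */
        return 0;
    }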





VIRTUAL MEMORY
• Virtual memory is a memory-management technique that allows the execution of processes that are not completely in memory. In some cases the entire program is not needed during execution, for example code for error conditions or rarely used menu-selection options.
• The virtual address space of a process refers to the logical view of how a process is stored in
memory. The heap will grow upward in memory as it is used for dynamic memory allocation.
The stack will grow downward in memory through successive function calls . The large blank
space (or hole) between the heap and the stack is part of the virtual address space but will
require actual physical pages only if the heap or stack grows.
• Virtual address spaces that include holes are known as sparse address spaces. A sparse address space can be filled as the stack or heap segments grow or if we wish to dynamically link libraries. Virtual memory also allows files and memory to be shared by two or more processes through page sharing.
ADVANTAGES:
• One major advantage of this scheme is that programs can be larger than physical memory
• Virtual memory also allows processes to share files easily and to implement shared
memory.
• Increase in CPU utilization and throughput.
• Less I/O would be needed to load or swap user programs into memory



DEMAND PAGING
• Demand paging is the process of loading the pages only when they are
demanded by the process during execution. Pages that are never
accessed are thus never loaded into physical memory.
• A demand-paging system is similar to a paging system with swapping
where processes reside in secondary memory
• When we want to execute a process, we swap it into memory. Rather than
swapping the entire process into memory we use a lazy swapper that
never swaps a page into memory unless that page will be needed.
• In demand paging, the lazy swapper is termed a pager. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those pages into memory.
• OS need the hardware support to distinguish between the pages that
are in memory and the pages that are on the disk. The valid–invalid bit
scheme can be used for this purpose.
• If the bit is set to "valid", the associated page is both legal and in memory.
• If the bit is set to "invalid", the page either is not valid or is valid but is currently on the disk.



PAGE FAULT
• If the process tries to access a page that was not brought into
memory, then it is called as a page fault. Access to a page marked
invalid causes a page fault. The paging hardware, will notice that the
invalid bit is set, causing a trap to the operating system.
PROCEDURE FOR HANDLING THE PAGE FAULT:
1. Check an internal table (usually kept with the process control block) for
this process to determine whether the reference was a valid or an invalid
memory access.
2. If the reference was invalid, we terminate the process. If it was valid but
we have not yet brought in that page, we now page it in.
3. Find a free frame
4. Schedule a disk operation to read the desired page into the newly
allocated frame.
5. When the disk read is complete, we modify the internal table kept with
the process and the page table to indicate that the page is now in
memory.
6. Restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.



PURE DEMAND PAGING
The process of executing a program with no pages in main memory is called pure demand paging. This never brings a page into memory until it is required.
The hardware to support demand paging is the same as the hardware for paging and swapping:
Page table: This table has the ability to mark an entry invalid through a valid–invalid bit or a special value of protection bits.
Secondary memory: This memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space.

PERFORMANCE OF DEMAND PAGING:
Demand paging can affect the performance of a computer system. If a page fault occurs, we must first read the relevant page from disk and then access the desired word.
There are three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
The effective access time for a demand-paged memory is given by
 effective access time = (1 − p) × ma + p × page-fault time
where ma is the memory-access time, which typically ranges from 10 to 200 nanoseconds, and p is the probability of a page fault. If there is no page fault, the effective access time is equal to the memory access time.
With an average page-fault service time of 8 milliseconds and a memory-access time of 200 nanoseconds, the effective access time in nanoseconds is
 Effective access time = (1 − p) × 200 + p × 8,000,000 = 200 + 7,999,800 × p
Effective access time is thus directly proportional to the page-fault rate.
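The same arithmetic in C, using the slide's figures (ma = 200 ns, page-fault service time = 8 ms):

    #include <stdio.h>

    /* Effective access time in nanoseconds: (1 - p)*200 + p*8,000,000,
     * which simplifies to 200 + 7,999,800 * p. */
    static double eat_ns(double p) {
        return (1.0 - p) * 200.0 + p * 8000000.0;
    }

    int main(void) {
        printf("p = 0:     %.1f ns\n", eat_ns(0.0));    /* 200.0 ns  */
        /* Even one fault per 1,000 accesses slows memory down ~40x. */
        printf("p = 0.001: %.1f ns\n", eat_ns(0.001));  /* 8199.8 ns */
        return 0;
    }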



PAGE REPLACEMENT
NEED FOR PAGE REPLACEMENT:
Page replacement is basic to demand paging. If a page requested by a process is in memory, then the process can access it. If the requested page is not in main memory, a page fault occurs.
When there is a page fault the OS decides to load the pages from
the secondary memory to the main memory.
It looks for the free frame. If there is no free frame then the
pages that are not currently in use will be swapped out of the
main memory, and the desired page will be swapped into the
main memory.
The process of swapping a page out of main memory to the
swap space and swapping in the desired page into the main
memory for execution is called as Page Replacement.



• STEPS IN PAGE REPLACEMENT:
1. Find the location of the desired page on the disk.
2. Find a free frame:
 If there is a free frame, use it.
 If there is no free frame, use a page-replacement algorithm to select a victim frame.
 Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Continue the user process from where the page fault occurred

If no frames are free, two page transfers (one out and one in) are required. This situation
effectively doubles the page-fault service time and increases the effective access time
accordingly.
This overhead can be reduced by using a modify bit (or dirty bit).
When this scheme is used, each page or frame has a modify bit associated with it in the
hardware.

MODIFY BIT: The modify bit for a page is set by the hardware whenever any byte in the
page is written into, indicating that the page has been modified.
When we select a page for replacement, we examine its modify bit. If the bit is set, we
know that the page has been modified since it was read in from the disk. In this case, we
must write the page to the disk.
If the modify bit is not set, however, the page has not been modified since it was read into
memory. In this case, we need not write the memory page to the disk: it is already there.



PAGE REPLACEMENT ALGORITHMS:
If we have multiple processes in memory, we must decide how many frames to allocate to each process; and when page
replacement is required, we must select the frames that are to be replaced.
The string of memory references made by a process is called a reference string. There are many different page-replacement
algorithms that includes
• FIFO page Replacement
• Optimal Page Replacement
• LRU Page Replacement
• LRU Approximation page Replacement algorithm
• Counting Based Page Replacement Algorithm
• Page Buffering Algorithm



FIFO PAGE REPLACEMENT:
The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A FIFO replacement algorithm replaces the oldest page that was brought into main memory.
EXAMPLE: Consider the reference string 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 for a memory with four frames.
• Initially, all 4 slots are empty, so when 1, 2, 3, 4 come they are allocated to the empty slots in order of their arrival. These are page faults, as 1, 2, 3, 4 are not in memory.
• When 5 comes, it is not in memory, so a page fault occurs and it replaces the oldest page in memory, i.e., 1.
• When 1 comes, it is not in memory, so a page fault occurs and it replaces the oldest page in memory, i.e., 2.
• When 3 and then 1 come, they are in memory (page hits), so no replacement occurs.
• When 6 comes, it is not in memory, so a page fault occurs and it replaces the oldest page in memory, i.e., 3.
• When 3 comes, it is not in memory, so a page fault occurs and it replaces the oldest page in memory, i.e., 4.
• When 2 comes, it is not in memory, so a page fault occurs and it replaces the oldest page in memory, i.e., 5.
• When 3 comes, it is in memory (page hit), so no replacement occurs.
• Page-fault ratio = page faults / (hits + misses) = 9/12, i.e., total misses / total references.
• Hits = 3
• Page faults = misses = 9



EXAMPLE: Consider the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a memory with three frames.
The three frames are empty initially.
The first three references (7, 0, 1) cause page faults and are brought into these empty frames.
Page 0 is the next reference, and 0 is already in memory, so we have no fault for this reference.
The first reference to 3 results in replacement of page 0, since it is now first in line. Because of this replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0. The process continues until all the pages are referenced. In total, the algorithm has 15 faults.
Advantages: The FIFO page-replacement algorithm is easy to understand and program.
Disadvantages: The performance is not always good, and it suffers from Belady's anomaly.
BELADY'S ANOMALY: For some reference strings, the number of page faults increases as the number of allocated memory frames increases. This unexpected result is called Belady's anomaly.
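A small C simulation of FIFO replacement on the four-frame example above, using that example's reference string:

    #include <stdio.h>

    #define NFRAMES 4

    int main(void) {
        int refs[] = {1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3};
        int n = sizeof refs / sizeof refs[0];
        int frames[NFRAMES], oldest = 0, used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;
            faults++;
            if (used < NFRAMES) {
                frames[used++] = refs[i];          /* fill an empty slot     */
            } else {
                frames[oldest] = refs[i];          /* evict the oldest page  */
                oldest = (oldest + 1) % NFRAMES;   /* advance the FIFO hand  */
            }
        }
        printf("faults = %d/%d\n", faults, n);     /* 9/12, as in the example */
        return 0;
    }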



LRU PAGE REPLACEMENT:
• The least recently used (LRU) page-replacement algorithm keeps track of page usage over a short period of time. It works on the idea that the pages that have been most heavily used in the past are most likely to be used heavily in the future too.
• In LRU, whenever page replacement happens, the page which has not been used for the longest amount of time is replaced.
EXAMPLE: Consider the same reference string 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with four frames.
Initially, all 4 slots are empty, so when 1, 2, 3, 4 come they are allocated to the empty slots in order of their arrival. These are page faults, as 1, 2, 3, 4 are not in memory.
When 5 comes, it is not in memory, so a page fault occurs and it replaces 1, which is the least recently used page.
When 1 comes, it is not in memory, so a page fault occurs and it replaces 2.
When 3 and then 1 come, they are in memory (page hits), so no replacement occurs.
When 6 comes, it is not in memory, so a page fault occurs and it replaces 4.
When 3 comes, it is in memory (page hit), so no replacement occurs.
When 2 comes, it is not in memory, so a page fault occurs and it replaces 5.
When 3 comes, it is in memory (page hit), so no replacement occurs.
Page-fault ratio = 8/12.

Advantages
•Efficient.
•Doesn't suffer from Belady’s Anomaly.
Disadvantages
•Complex Implementation.
•Expensive.
•Requires hardware support.
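The same simulation adapted to LRU: each frame records the time of its page's last use, and the victim is the frame with the smallest timestamp.

    #include <stdio.h>

    #define NFRAMES 4

    int main(void) {
        int refs[] = {1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3};
        int n = sizeof refs / sizeof refs[0];
        int page[NFRAMES], last[NFRAMES], used = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int j, hit = -1;
            for (j = 0; j < used; j++)
                if (page[j] == refs[t]) { hit = j; break; }
            if (hit >= 0) { last[hit] = t; continue; }   /* refresh recency */
            faults++;
            if (used < NFRAMES) {
                j = used++;                /* fill an empty slot            */
            } else {
                j = 0;                     /* evict the least recently used */
                for (int k = 1; k < NFRAMES; k++)
                    if (last[k] < last[j]) j = k;
            }
            page[j] = refs[t];
            last[j] = t;
        }
        printf("faults = %d/%d\n", faults, n);   /* 8/12, as in the example */
        return 0;
    }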



STACK ALGORITHM: A stack algorithm is an algorithm for which it can be shown that the set of pages in memory for n
frames is always a subset of the set of pages that would be in memory with n + 1 frames.



OPTIMAL PAGE REPLACEMENT
• The optimal page-replacement algorithm is the best page-replacement algorithm, as it gives the least number of page faults. It is also known as OPT, the clairvoyant replacement algorithm, or Belady's optimal page-replacement policy.
• In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future; i.e., the page in memory that is going to be referenced farthest in the future is replaced.
• This algorithm was introduced long ago and is difficult to implement because it requires future knowledge of the program's behavior. However, it is possible to implement optimal page replacement on a second run by using the page-reference information collected on the first run.
EXAMPLE: Consider the same reference string 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with four frames.
Initially, all 4 slots are empty, so when 1, 2, 3, 4 come they are allocated to the empty slots in order of their arrival. These are page faults, as 1, 2, 3, 4 are not in memory.
When 5 comes, it is not in memory, so a page fault occurs and it replaces 4, which is going to be used farthest in the future among 1, 2, 3, 4.
When 1, 3, 1 come, they are in memory (page hits), so no replacement occurs.
When 6 comes, it is not in memory, so a page fault occurs and it replaces 1.
When 3, 2, 3 come, they are in memory (page hits), so no replacement occurs.
Total page faults = 6. Page-fault ratio = 6/12.

Advantages
•Easy to implement in simulation, when the full reference string is known.
•Simple data structures can be used.
•Highly efficient, giving the fewest possible page faults.

Disadvantages
•Requires future knowledge of the program.
•Time-consuming.
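A sketch of OPT on the same reference string: the victim is the resident page whose next use lies farthest in the future (or that is never used again).

    #include <stdio.h>

    #define NFRAMES 4

    /* Position of the next use of page at or after index from;
     * n + 1 means the page is never used again. */
    static int next_use(const int *refs, int n, int from, int page) {
        for (int i = from; i < n; i++)
            if (refs[i] == page) return i;
        return n + 1;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3};
        int n = sizeof refs / sizeof refs[0];
        int frames[NFRAMES], used = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[t]) { hit = 1; break; }
            if (hit) continue;
            faults++;
            if (used < NFRAMES) { frames[used++] = refs[t]; continue; }
            int victim = 0;                        /* farthest next use wins */
            for (int j = 1; j < NFRAMES; j++)
                if (next_use(refs, n, t + 1, frames[j]) >
                    next_use(refs, n, t + 1, frames[victim]))
                    victim = j;
            frames[victim] = refs[t];
        }
        printf("faults = %d/%d\n", faults, n);     /* 6/12, as in the example */
        return 0;
    }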



ALLOCATION OF FRAMES:



d) Non-Uniform Memory Access
• In some systems, a given CPU can access some sections of main memory faster than it can access others. These performance differences are caused by how CPUs and memory are interconnected in the system.
• Such a system is made up of several system boards, each containing multiple CPUs and some memory. The CPUs on a particular board can access the memory on that board with less delay than they can access memory on other boards in the system.
• Systems in which the memory access time is uniform are called uniform memory access (UMA) systems.
• Systems in which memory access times vary significantly are known collectively as non-uniform memory access (NUMA) systems, and they are slower than systems in which memory and CPUs are located on the same motherboard.
• Managing which page frames are stored at which locations can significantly affect performance in NUMA systems. If we treat memory as uniform in such a system, CPUs may wait significantly longer for memory access.
• The goal is to have memory frames allocated "as close as possible" to the CPU on which the process is running, so that memory access can be faster.
• In NUMA systems the scheduler tracks the last CPU on which each process ran. If the scheduler tries to schedule each process onto its previous CPU, and the memory-management system tries to allocate frames for the process close to the CPU on which it is being scheduled, then improved cache hits and decreased memory access times will result.


THRASHING:
If the process does not have the number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it must
replace some page. If all its pages are in active use, it must replace a page that will be needed again right away. So it quickly faults again,
and again, and again, replacing pages that it must bring back in immediately. This high paging activity is called thrashing. A process is
thrashing if it is spending more time paging than executing.

Causes of Thrashing:
The operating system monitors CPU utilization. If CPU utilization is too low, we increase the degree of multiprogramming by introducing a new process to the system.
Now suppose that a process enters a new phase in its execution and needs more frames. It starts faulting and taking frames away from
other processes.
A global page-replacement algorithm is used; it replaces pages without regard to the process to which they belong.
These processes need those pages, however, and so they also fault, taking frames from other processes. These faulting processes must
use the paging device to swap pages in and out. As processes wait for the paging device, CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming as a result. The new process tries to
get started by taking frames from running processes, causing more page faults and a longer queue for the paging device.
As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree of multiprogramming even more.
Thrashing has occurred, and system throughput plunges.



• At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of Multi programming.
• We can limit the effects of thrashing by using a local replacement algorithm. With local replacement, if one process starts
thrashing, it cannot steal frames from another process, so the page fault of one process does not affect the other process.
• To prevent thrashing, we must provide a process with as many frames as it needs. The OS needs to know how many frames are required by the process.
• The working-set strategy starts by looking at how many frames a process is actually using. This approach defines the locality
model of process execution.
• A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which
may overlap.
• Suppose we allocate enough frames to a process to accommodate its current locality. It will fault for the pages in its locality
until all these pages are in memory; then, it will not fault again until it changes localities.
• If we do not allocate enough frames to accommodate the size of the current locality, the process will thrash, since it cannot
keep in memory all the pages that it is actively using.
Working-Set Model
• The working-set model is based on the assumption of locality.
• This model uses a parameter Δ to define the working-set window.
• The idea is to examine the most recent Δ page references. The set of pages in the most recent Δ page references is the working
set.
• If a page is in active use, it will be in the working set.
• If it is no longer being used, it will drop from the working set Δ time units after its last reference.
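A small C sketch that prints the working set over a sliding window; the reference string and Δ = 4 are made-up values for illustration:

    #include <stdio.h>

    #define DELTA 4   /* working-set window, in page references */

    int main(void) {
        int refs[] = {1, 2, 1, 3, 4, 4, 4, 3, 3, 3};
        int n = sizeof refs / sizeof refs[0];

        for (int t = DELTA - 1; t < n; t++) {
            printf("t = %d, working set = {", t);
            for (int i = t - DELTA + 1; i <= t; i++) {
                int seen = 0;                /* print each distinct page once */
                for (int j = t - DELTA + 1; j < i; j++)
                    if (refs[j] == refs[i]) { seen = 1; break; }
                if (!seen) printf(" %d", refs[i]);
            }
            printf(" }\n");
        }
        return 0;
    }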
