UNIT-3 What Does It Mean by Memory Management Unit?
• Compiler and Assembler generate an object file (containing code and data
segments) from each source file.
• Linker combines all the object files for a program into a single executable object file,
which is complete and self-sufficient.
• Loader (part of OS) loads an executable object file into memory at locations
determined by the operating system.
FUNCTIONS OF MMU:
ADDRESS BINDING:
• The Address Binding refers to the mapping of computer instructions and data to
physical memory locations.
• Both logical and physical addresses are used in computer memory.
• It assigns a physical memory region to a logical pointer by mapping a physical address
to a logical address known as a virtual address.
• It is also a component of computer memory management that the OS performs on
behalf of applications that require memory access.
Compile time Address Binding:
• If the compiler is responsible for performing address binding, then it is called
compile-time address binding.
• It is done before the program is loaded into memory.
• The compiler interacts with the OS memory manager to perform compile-time
address binding.
Load time Address Binding:
• It is done at the time the program is loaded into memory.
• This type of address binding is performed by the OS memory manager, i.e., the loader.
• To keep in memory only those instructions and data that are needed at any given
time, two techniques are used (a small sketch follows this list):
o Dynamic linking
o Dynamic loading
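A minimal sketch of dynamic loading in Python, assuming the json module is just an example
of a module a program may not need on every run: the module is brought into memory the
first time it is used rather than at program start, so only code that is actually needed is resident.

    import importlib

    _json = None  # the module stays out of memory until it is first needed

    def parse_config(text):
        """Load the json module on demand (dynamic loading), then use it."""
        global _json
        if _json is None:
            _json = importlib.import_module("json")  # loaded only on first use
        return _json.loads(text)

    print(parse_config('{"pages": 4}'))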
DYNAMIC LINKING:
SWAPPING:
• Standard swapping involves moving processes between main memory and a backing
store.
• The backing store is commonly a fast disk.
• The system maintains a ready queue consisting of all processes whose memory
images are on the backing store or in memory and are ready to run.
• The actual transfer of the 100-MB process to or from main memory takes
100 MB / 50 MB per second = 2 seconds.
• The swap time is therefore 2,000 milliseconds in each direction. Since we must swap both
out and in, the total swap time is about 4,000 milliseconds (worked out in the short
sketch after this list).
• Standard swapping is not used in modern operating systems. It requires too much
swapping time and provides too little execution time to be a reasonable memory-
management solution.
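The swap-time arithmetic from the bullets above, written out as a small sketch (the 100-MB
process size and 50 MB-per-second transfer rate are the figures given in the example):

    # Swap-time arithmetic: a 100-MB process and a backing store that
    # transfers 50 MB per second.
    process_size_mb = 100
    transfer_rate_mb_per_s = 50

    one_way_s = process_size_mb / transfer_rate_mb_per_s   # 2 seconds per direction
    total_ms = 2 * one_way_s * 1000                        # swap out + swap in

    print(f"one-way: {one_way_s * 1000:.0f} ms, total swap: {total_ms:.0f} ms")
    # one-way: 2000 ms, total swap: 4000 ms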
FUNCTIONS OF MEMORY MANAGEMENT:
• The concurrent residency of more than one program in main memory is referred to
as multiprogramming.
• Since multiple programs are resident in memory, as soon as the currently executing
program finishes its execution, the next program is dispatched for execution.
• The main objective of multiprogramming is:
o Maximum CPU utilization
o Efficient management of the main memory
CONTIGUOUS MEMORY:
• In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected.
• To choose a particular partition, a partition allocation method is needed.
• When it is time to load a process into the main memory and if there is more than one
free block of memory of sufficient size then the OS decides which free block to
allocate.
DYNAMIC STORAGE ALLOCATION:
Dynamic Storage-Allocation methods:
• FIRST FIT
• BEST FIT
• WORST FIT
FIRST FIT ALGORITHM:
• This method keeps the free/busy list of jobs organized by memory location, from
low-ordered to high-ordered memory.
• In this method, a job claims the first available memory block whose size is greater
than or equal to its own.
• The operating system does not search for the most appropriate partition; it simply
allocates the job to the first memory partition of sufficient size.
• Memory partitions: 150 KB, 220 KB, 500 KB, 350 KB, 700 KB.
• Processes: p1 - 200 KB, p2 - 160 KB, p3 - 450 KB, p4 - 500 KB.
• A first-fit allocation sketch for this example follows at the end of this section.
• Advantages of First-Fit Memory Allocation:
• It is fast in processing.
• As the allocator assigns the first available memory partition of sufficient size to the
job, it is very fast in execution.
• Disadvantages of First-Fit Memory Allocation:
• It wastes a lot of memory.
• The allocator does not check whether the partition assigned to the job is much larger
than the job itself; it simply allocates the memory.
• As a result, a lot of memory is wasted and many jobs may not get space in the
memory, and would have to wait for another job to complete.
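A minimal first-fit sketch in Python, using the partition and process sizes from the example
above (sizes in KB, and each fixed partition is assumed to hold at most one process):

    def first_fit(partitions, processes):
        """Assign each process to the first free partition large enough for it."""
        allocation = {}
        free = list(partitions)                  # fixed partitions, one process each
        for name, size in processes:
            for i, part in enumerate(free):
                if part >= size:                 # first block of sufficient size
                    allocation[name] = part
                    free.pop(i)
                    break
            else:
                allocation[name] = None          # no partition found; the job waits
        return allocation

    partitions = [150, 220, 500, 350, 700]       # KB, low to high memory
    processes = [("p1", 200), ("p2", 160), ("p3", 450), ("p4", 500)]
    print(first_fit(partitions, processes))
    # {'p1': 220, 'p2': 500, 'p3': 700, 'p4': None} -> p4 must wait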
BEST FIT ALGORITHM:
• Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size.
• This strategy produces the smallest leftover hole.
• Memory partitions: 150 KB, 220 KB, 500 KB, 350 KB, 700 KB.
• Processes: p1 - 200 KB, p2 - 160 KB, p3 - 450 KB, p4 - 500 KB.
• All the processes are allocated in this scenario (a best-fit sketch for this example
follows at the end of this section).
Advantages of Best-Fit Allocation:
• It is memory efficient.
• The operating system allocates the job the minimum possible space in memory,
making memory management very efficient.
• It is the best method for keeping memory from being wasted.
Disadvantages of Best-Fit Allocation:
• It is a slow process.
• Searching the whole free list for each job slows the operating system down.
• It takes a lot of time to complete the allocation.
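A minimal best-fit sketch for the same example (sizes in KB, one process per partition as
before); with this strategy every process finds a partition:

    def best_fit(partitions, processes):
        """Assign each process to the smallest free partition that can hold it."""
        allocation = {}
        free = list(partitions)
        for name, size in processes:
            candidates = [part for part in free if part >= size]
            if candidates:
                best = min(candidates)           # smallest hole that is big enough
                allocation[name] = best
                free.remove(best)
            else:
                allocation[name] = None
        return allocation

    partitions = [150, 220, 500, 350, 700]       # KB
    processes = [("p1", 200), ("p2", 160), ("p3", 450), ("p4", 500)]
    print(best_fit(partitions, processes))
    # {'p1': 220, 'p2': 350, 'p3': 500, 'p4': 700} -> all processes fit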
WORST FIT:
• Allocate the largest available hole; the entire free list must be searched unless it is
sorted by size.
• This strategy produces the largest leftover hole, which may be more useful than the
small leftover hole produced by best fit.
FRAGMENTATION:
• As processes are loaded and removed from memory, the free memory space is
broken into little pieces.
• After some time, processes cannot be allocated to these memory blocks because the
blocks are too small, and the blocks remain unused. This problem is known as
fragmentation.
• Types of Fragmentation:
o Internal Fragmentation
o External Fragmentation
Internal fragmentation: The memory block assigned to a process is bigger than the process
requires. Some portion of the block is left unused, and it cannot be used by another process
(a worked example follows these definitions).
External fragmentation: The total memory space is enough to satisfy a request or to hold a
process, but it is not contiguous, so it cannot be used.
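A small worked example of internal fragmentation, reusing the best-fit allocation above
(partition and process sizes in KB): the space left over inside each allocated partition is
internal fragmentation.

    # (partition size, process size) pairs taken from the best-fit example above
    allocation = {"p1": (220, 200), "p2": (350, 160), "p3": (500, 450), "p4": (700, 500)}

    for name, (partition, process) in allocation.items():
        print(f"{name}: partition {partition} KB, process {process} KB, "
              f"internal fragmentation {partition - process} KB")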
Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation.
One solution to the problem of external fragmentation is COMPACTION.
The goal is to shuffle the memory contents so as to place all free memory together in one
large block. (Costly).
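A minimal sketch of compaction, using a hypothetical memory map (the process names,
addresses, and sizes are made up for illustration): allocated blocks are slid toward low
memory so that all free space merges into one large hole, which is costly because the data
must actually be moved.

    def compact(blocks):
        """Relocate every allocated block toward low memory and merge all
        free space into a single large hole at the high end."""
        allocated = [b for b in blocks if b["owner"] is not None]
        free_total = sum(b["size"] for b in blocks if b["owner"] is None)
        address = 0
        for b in allocated:                      # move each block downward
            b["start"] = address
            address += b["size"]
        allocated.append({"owner": None, "start": address, "size": free_total})
        return allocated

    memory = [
        {"owner": "p1", "start": 0,   "size": 200},
        {"owner": None, "start": 200, "size": 100},   # small hole
        {"owner": "p2", "start": 300, "size": 160},
        {"owner": None, "start": 460, "size": 140},   # another small hole
    ]
    print(compact(memory))   # ends with one free block of size 240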
Another possible solution to the external-fragmentation problem is to permit the logical
address space of the processes to be non-contiguous, thus allowing a process to be allocated
physical memory wherever such memory is available.
SEGMENTATION:
PAGING:
• A computer can address more memory than the amount physically installed on the
system.
• This extra memory is actually called virtual memory, and it is a section of a hard disk
that is set up to emulate the computer's RAM.
• Paging technique plays an important role in implementing virtual memory.
• Paging is a storage mechanism that allows the OS to retrieve processes from
secondary storage into main memory in the form of pages.
• Paging is a memory management technique in which process address space is
broken into blocks of the same size called pages.
• The size of the process is measured in the number of pages.
• The Memory Management Unit (MMU) is responsible for converting logical
addresses to physical addresses.
• The physical address refers to the actual address of a frame in which each page will
be stored, whereas the logical address refers to the address that is generated by the
CPU for each page.
• When the CPU accesses a page using its logical address, the OS must first obtain the
corresponding physical address in order to access that page physically. There are two
elements to the logical address:
▪ Page number
▪ Offset
• The OS’s memory management unit must convert the page numbers to the frame
numbers.
• The address generated by the CPU (Logical Address) is divided into the following:
Page offset (d): the number of bits needed to represent a particular word on a page,
i.e., the page size of the Logical Address Space.
Page number (p): the number of bits needed to represent a page of the Logical
Address Space.
• The Physical Address is divided into the following:
Frame offset (d): the number of bits needed to represent a particular word in a frame,
i.e., the frame size of the Physical Address Space.
Frame number (f): the number of bits needed to represent a frame of the Physical
Address Space.
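A minimal address-translation sketch. The 1-KB page size and the page-table contents are
assumed values chosen for illustration; the logical address is split into (p, d) and the physical
address is rebuilt as f * frame size + d.

    PAGE_SIZE = 1024                      # assumed page size (bytes); frames are the same size
    page_table = {0: 5, 1: 2, 2: 7}       # hypothetical page-number -> frame-number mapping

    def translate(logical_address):
        """Split a logical address into (page number, offset) and rebuild the
        physical address from the frame number and the same offset."""
        page_number = logical_address // PAGE_SIZE      # p
        offset = logical_address % PAGE_SIZE            # d
        frame_number = page_table[page_number]          # f, looked up by the MMU
        return frame_number * PAGE_SIZE + offset

    print(translate(2 * PAGE_SIZE + 37))  # page 2, offset 37 -> frame 7 -> 7205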
STRUCTURE OF PAGE TABLE:
• A common approach for handling address spaces larger than 32 bits is to use a
hashed page table, with the hash value being the virtual page number.
• Each element consists of three fields:
(1) the virtual page number,
(2) the value of the mapped page frame, and
(3) a pointer to the next element in the linked list.
The algorithm works as follows: the virtual page number in the virtual address is hashed into
the hash table. It is compared with field 1 of the first element in the linked list; if they match,
the corresponding page frame (field 2) is used to form the physical address, otherwise the
remaining entries in the list are searched for a matching virtual page number.
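A small sketch of a hashed page-table lookup. A Python list stands in for each bucket's chain
(so field 3, the next-pointer, is implicit), and the bucket count and inserted mapping are
assumed values.

    NUM_BUCKETS = 8
    hash_table = [[] for _ in range(NUM_BUCKETS)]   # each bucket is a chain of entries

    def insert(vpn, frame):
        """Add an element (virtual page number, mapped frame) to the chain."""
        hash_table[vpn % NUM_BUCKETS].append((vpn, frame))

    def lookup(vpn):
        """Hash the virtual page number, then walk the chain until field 1 matches;
        field 2 (the mapped frame) is then used to form the physical address."""
        for entry_vpn, frame in hash_table[vpn % NUM_BUCKETS]:
            if entry_vpn == vpn:
                return frame
        return None                                 # no match -> page fault

    insert(0x12345, 42)
    print(lookup(0x12345))   # 42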
INVERTED PAGE TABLE:
• An inverted page table has one entry for each real page (or frame) of memory.
• Each entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns the page.
• Thus, only one page table is in the system, and it has only one entry for each page of
physical memory.
• IBM was the first major company to use inverted page tables, starting with the IBM
System/38 and continuing through the RS/6000 and the current IBM Power CPUs.
• For the IBM RT, each virtual address in the system consists of a triple:
<process-id, page-number, offset>
• Each inverted page-table entry is a pair <process-id, page-number>, where the
process-id assumes the role of the address-space identifier.
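A minimal inverted page-table sketch, with a hypothetical table of four frames and an assumed
4-KB page size: the frame number is simply the index at which the matching
<process-id, page-number> pair is found.

    # One entry per physical frame; each entry records which <process-id, page-number>
    # currently occupies that frame (None means the frame is free).
    inverted_table = [("P1", 0), ("P2", 3), None, ("P1", 7)]

    PAGE_SIZE = 4096                                 # assumed page/frame size in bytes

    def translate(pid, page_number, offset):
        """Search the table for <pid, page-number>; the index where it is found
        is the frame number, which combines with the offset to give the
        physical address."""
        for frame, entry in enumerate(inverted_table):
            if entry == (pid, page_number):
                return frame * PAGE_SIZE + offset
        raise LookupError(f"page fault: <{pid}, {page_number}> not resident")

    print(translate("P2", 3, 100))   # frame 1 -> 1*4096 + 100 = 4196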
DEMAND PAGING:
• We are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
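A toy sketch of these three steps, using a dictionary as the set of resident pages and another
dictionary as a stand-in backing store (both hypothetical); on a fault the page is read in and
the faulting access is restarted.

    resident = {}                                   # page-number -> frame-number
    backing_store = {n: f"contents of page {n}" for n in range(8)}   # toy disk image

    def access(page_number):
        if page_number in resident:                 # no fault: the page is in memory
            return resident[page_number]
        # 1. Service the page-fault interrupt: choose a frame (a toy choice here).
        frame = len(resident)
        # 2. Read in the page from the backing store.
        _data = backing_store[page_number]
        resident[page_number] = frame
        # 3. Restart the process: re-issue the access, which now hits.
        return access(page_number)

    print(access(3), access(3))   # the first call faults and loads the page; the second hits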