
CHAPTER 6: MEMORY MANAGEMENT

WHAT IS MAIN MEMORY?


• Main memory is central to the operation of the OS.
• Main memory is where programs and data are kept while the processor is actively
using them.
• Main memory is also known as RAM, and it is volatile.

FUNCTIONS OF MEMORY MANAGEMENT:


• The memory manager keeps track of the status of memory locations, whether they are
free or allocated. It provides abstractions over primary memory so that software
perceives that a large memory is allocated to it.
• The memory manager permits computers with a small amount of main memory to execute
programs larger than the amount of available memory. It does this by moving information
back and forth between primary memory and secondary memory, using the concept of
swapping.
• The memory manager is responsible for protecting the memory allocated to each
process from being corrupted by another process. If this is not ensured, the system
may exhibit unpredictable behavior.
• The memory manager should enable sharing of memory space between processes. Thus,
two programs can reside at the same memory location, although at different times.

BASIC HARDWARE FOR MEMORY MANAGEMENT:


• The CPU can only access its registers and main memory. It cannot, for example, make
direct access to the hard drive, so any data stored there must first be transferred into the
main memory chips before the CPU can work with it.
• User processes must be restricted so that they only access memory locations that
"belong" to that particular process. This is usually implemented using a base register
and a limit register for each process.
• Every memory access made by a user process is checked against these two registers,
and if a memory access is attempted outside the valid range, then a fatal error is
generated, as sketched below.
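
The following C sketch illustrates this base/limit check conceptually. The register values and the software check are illustrative assumptions only; in a real system the comparison is performed by hardware on every memory reference.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative values only: in a real system these live in CPU registers
       that the OS kernel loads on every context switch. */
    static unsigned base_register  = 300040;  /* first address the process may use   */
    static unsigned limit_register = 120900;  /* size of the process's address range */

    /* Returns true if the access is legal; real hardware would trap to the OS otherwise. */
    bool access_is_legal(unsigned address) {
        return address >= base_register &&
               address <  base_register + limit_register;
    }

    int main(void) {
        unsigned addr = 350000;
        if (access_is_legal(addr))
            printf("Access to %u permitted\n", addr);
        else
            printf("Access to %u trapped: addressing error\n", addr);
        return 0;
    }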



• The OS obviously has access to all existing memory locations, as this is necessary to
swap users' code and data in and out of memory. It should also be obvious that changing
the contents of the base and limit registers is a privileged activity, allowed only to the
OS kernel.

ADDRESS BINDING:
• User programs typically refer to memory addresses with symbolic names such as "i",
"count", and "temp". These symbolic names must be mapped, or bound, to physical
memory addresses, which typically occurs in several stages:



• Compile Time - If it is known at compile time where a program will reside in physical
memory, then absolute code can be generated by the compiler, containing actual
physical addresses. However, if the load address changes at some later time, then the
program will have to be recompiled. DOS .COM programs use compile-time binding.
• Load Time - If the location at which a program will be loaded is not known at compile
time, then the compiler must generate re-locatable code, which references addresses
relative to the start of the program. If that starting address changes, then the program
must be reloaded but not recompiled.
• Execution Time - If a program can be moved around in memory during the course of
its execution, then binding must be delayed until execution time. This requires special
hardware, and is the method implemented by most modern operating systems.

LOGICAL VERSUS PHYSICAL ADDRESS SPACE:
• The address generated by the CPU is a logical address, whereas the address actually
seen by the memory hardware is a physical address.



• Addresses bound at compile time or load time have identical logical and physical
addresses.
• Addresses created at execution time, however, have different logical and physical
addresses.
 In this case the logical address is also known as a virtual address.
 The set of all logical addresses used by a program composes the logical address
space, and the set of all corresponding physical addresses composes the physical
address space.
• The run-time mapping of logical to physical addresses is handled by the
memory-management unit (MMU). The MMU can take on many forms; in the simplest, the
base register is termed a relocation register, whose value is added to every memory
request at the hardware level.
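
A minimal C sketch of this relocation-register mapping follows; the register value and the logical address are assumed example numbers, and the addition shown here in software is performed by the MMU hardware on every memory request.

    #include <stdio.h>

    /* Assumed example value of the relocation (base) register, set by the OS. */
    static const unsigned relocation_register = 14000;

    /* The MMU adds the relocation register to every logical address generated
       by the CPU to obtain the physical address placed on the memory bus. */
    unsigned mmu_translate(unsigned logical_address) {
        return logical_address + relocation_register;
    }

    int main(void) {
        unsigned logical = 346;
        printf("logical %u -> physical %u\n", logical, mmu_translate(logical));  /* 14346 */
        return 0;
    }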


STATIC & DYNAMIC LOADING:


 With static loading, the absolute program (and its data) is loaded into memory at load
time so that execution can start.
 With dynamic loading, routines of a library are kept on disk in relocatable form and are
loaded into memory only when they are needed by the program.
STATIC & DYNAMIC LINKING:
 When static linking is used, the linker combines all other modules needed by a program
into a single executable program to avoid any runtime dependency.



 When dynamic linking is used, it is not necessary to link the actual module or library
with the program; instead, a reference to the dynamic module is provided at the time of
compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared
Objects in Unix are good examples of dynamic libraries.
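
As a concrete illustration of dynamic loading and linking on a Unix-like system, the C sketch below loads a shared object at run time and resolves a symbol from it. The library name "libm.so.6" is an assumption and varies by platform.

    #include <dlfcn.h>   /* POSIX dynamic-loading interface */
    #include <stdio.h>

    int main(void) {
        /* Load the shared object only when it is needed (name is platform-dependent). */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Resolve the symbol "cos" from the loaded library at run time. */
        double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);  /* Unload the library when it is no longer needed. */
        return 0;
    }

On many Linux systems this compiles with something like cc example.c -ldl.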

SWAPPING:
• A process must be loaded into memory in order to execute.
• Swapping is a technique in which a process is temporarily moved from main memory
to a backing store and later brought back into memory for continued execution.
• The backing store is a hard disk or some other secondary storage device that must be
large enough to accommodate copies of all memory images for all users, and it must
provide direct access to these memory images.
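
The C sketch below is a purely conceptual illustration of swapping, not how an OS actually implements it: a buffer standing in for a process image is written out to a file acting as the backing store and later read back in.

    #include <stdio.h>
    #include <string.h>

    #define IMAGE_SIZE 4096   /* assumed size of the toy "process image" */

    /* Conceptual swap-out: copy the memory image to the backing store (a file here). */
    int swap_out(const char *backing_store, const unsigned char *image) {
        FILE *f = fopen(backing_store, "wb");
        if (!f) return -1;
        fwrite(image, 1, IMAGE_SIZE, f);
        fclose(f);
        return 0;
    }

    /* Conceptual swap-in: bring the memory image back for continued execution. */
    int swap_in(const char *backing_store, unsigned char *image) {
        FILE *f = fopen(backing_store, "rb");
        if (!f) return -1;
        if (fread(image, 1, IMAGE_SIZE, f) != IMAGE_SIZE) { fclose(f); return -1; }
        fclose(f);
        return 0;
    }

    int main(void) {
        unsigned char image[IMAGE_SIZE];
        memset(image, 0xAB, IMAGE_SIZE);        /* pretend this is a process image   */
        swap_out("backing_store.bin", image);   /* process is swapped out of memory  */
        memset(image, 0, IMAGE_SIZE);           /* its memory is reused meanwhile    */
        swap_in("backing_store.bin", image);    /* later swapped back in to continue */
        printf("first byte after swap-in: 0x%02X\n", image[0]);
        return 0;
    }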

Benefits of Swapping
Here are the major benefits of swapping:

• It offers a higher degree of multiprogramming.


• Allows dynamic relocation. For example, if address binding at execution time is being
used, then processes can be swapped into different locations. With compile-time or
load-time binding, processes must be moved back to the same location.
• It helps to achieve better utilization of memory.
• Minimal wastage of CPU time on completion, so it can easily be applied to a priority-
based scheduling method to improve its performance.

What is Memory allocation?



Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions:

 Low Memory – The operating system resides in this type of memory.

 High Memory – User processes are held in high memory.

CONTIGUOUS MEMORY ALLOCATION:


• In a Contiguous memory management scheme, each program occupies a single
contiguous block of storage locations, i.e., a set of memory locations with consecutive
addresses.
• Single contiguous memory management scheme: In this scheme, the main memory
is divided into two contiguous areas or partitions. The operating system resides
permanently in one partition, generally at the lower memory, and a user process is
loaded into the other partition.
• Multiple Partitioning: The single contiguous memory management scheme is
inefficient, as it limits the computer to executing only one program at a time, resulting in
wasted memory space and CPU time. To switch between two processes, the
operating system would need both processes loaded into main memory at once. The
operating system therefore divides the available main memory into multiple parts so
that multiple processes can reside in main memory simultaneously.
• The multiple partitioning schemes can be of two types:
 Fixed size Partitioning
 Dynamic size Partitioning



Fixed Partitioning

In a fixed partitioning (or static partitioning) scheme, the main memory is divided into several
fixed-sized partitions. These partitions can be of the same size or of different sizes. Each
partition can hold a single process. The number of partitions determines the degree of
multiprogramming, i.e., the maximum number of processes in memory. These partitions are
made at the time of system generation and remain fixed after that.

Dynamic Partitioning

Dynamic partitioning was designed to overcome the problems of the fixed partitioning
scheme. In a dynamic partitioning scheme, each process occupies only as much memory as
it requires when it is loaded for processing. Requesting processes are allocated memory until
the entire physical memory is exhausted or the remaining space is insufficient to hold the
requesting process. In this scheme the partitions are of variable size, and the number of
partitions is not defined at system generation time.

STRATEGIES USED FOR CONTIGUOUS MEMORY ALLOCATION


• First Fit: The process is allocated the first free partition, searching from the beginning
of main memory, that is large enough to hold it.
• Best Fit: The process is allocated the smallest free partition that is still large enough to
hold it.
• Worst Fit: The process is allocated the largest sufficiently large free partition available
in main memory. (All three strategies are sketched in code after this list.)
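
The following C sketch contrasts the three placement strategies over a hypothetical list of free block sizes; the block sizes, the request size, and the array-based free list are assumptions made purely for illustration.

    #include <stdio.h>

    #define NBLOCKS 5

    /* Hypothetical free-block sizes (in KB), in memory order. */
    static int free_blocks[NBLOCKS] = {100, 500, 200, 300, 600};

    /* First fit: first block from the start that is large enough. */
    int first_fit(int request) {
        for (int i = 0; i < NBLOCKS; i++)
            if (free_blocks[i] >= request) return i;
        return -1;
    }

    /* Best fit: smallest block that is still large enough. */
    int best_fit(int request) {
        int best = -1;
        for (int i = 0; i < NBLOCKS; i++)
            if (free_blocks[i] >= request &&
                (best == -1 || free_blocks[i] < free_blocks[best]))
                best = i;
        return best;
    }

    /* Worst fit: largest block that is large enough. */
    int worst_fit(int request) {
        int worst = -1;
        for (int i = 0; i < NBLOCKS; i++)
            if (free_blocks[i] >= request &&
                (worst == -1 || free_blocks[i] > free_blocks[worst]))
                worst = i;
        return worst;
    }

    int main(void) {
        int request = 212;  /* a process asking for 212 KB */
        printf("first fit -> block %d\n", first_fit(request));  /* block 1 (500 KB) */
        printf("best  fit -> block %d\n", best_fit(request));   /* block 3 (300 KB) */
        printf("worst fit -> block %d\n", worst_fit(request));  /* block 4 (600 KB) */
        return 0;
    }
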
FRAGMENTATION:
When processes are moved to and from main memory, the available free space in primary
memory is broken into smaller pieces. Fragmentation occurs when memory cannot be allocated
to a process because each available piece is smaller than the amount of memory the process
requires, so such blocks of memory stay unused.

Fragmentation is of the following two types:

1. External Fragmentation:

The total amount of free primary memory is sufficient to hold a process, but it cannot be used
because it is non-contiguous. External fragmentation can be reduced by compaction, i.e.,
shuffling the data in memory so that all free memory blocks are placed together, forming one
large free memory block.



For example, suppose a total of 55 KB of memory is free, which would be sufficient to execute
a process P7 that requires 50 KB, but the free space is not contiguous. To make use of such
scattered free space, one can apply compaction, paging, or segmentation.
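
A toy C sketch of compaction follows, under assumptions made for illustration only: memory is modeled as a fixed array of allocation units, where 0 marks a free unit and any other value names the owning process. Allocated units are slid toward low memory so the free space coalesces into one block.

    #include <stdio.h>

    #define UNITS 12

    /* Toy memory map: each cell is one allocation unit; 0 = free, otherwise the
       owning process id. The values are assumptions for illustration only. */
    static int memory[UNITS] = {1, 1, 0, 2, 0, 0, 3, 3, 0, 0, 4, 0};

    /* Compaction: slide all allocated units toward low memory so that the free
       units coalesce into one contiguous block at the high end. A real OS would
       also have to update each process's relocation register after the move. */
    void compact(void) {
        int dst = 0;
        for (int src = 0; src < UNITS; src++)
            if (memory[src] != 0)
                memory[dst++] = memory[src];
        while (dst < UNITS)
            memory[dst++] = 0;
    }

    void dump(const char *label) {
        printf("%s:", label);
        for (int i = 0; i < UNITS; i++) printf(" %d", memory[i]);
        printf("\n");
    }

    int main(void) {
        dump("before compaction");
        compact();
        dump("after  compaction");
        return 0;
    }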

2. Internal Fragmentation:

Internal fragmentation occurs when the memory block assigned to a process is larger than
the amount of memory required by the process. In such a situation, part of the block is left
unutilized, because it cannot be used by any other process. Internal fragmentation can be
reduced by assigning a process the smallest free partition that is still large enough for it.



Let us look at another example. Assume that memory allocation in RAM is done using fixed
partitioning, i.e., the memory blocks have fixed sizes. Here, the available sizes are 8 MB, 4 MB,
4 MB, and 2 MB. The OS uses a portion of this RAM.

Now suppose a process P1 of size 3 MB arrives and is given a 4 MB memory block. As a
result, the 1 MB of free space left in that block stays unused and cannot be used to allocate
memory to any other process. This is called internal fragmentation.
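
A small C sketch of this arithmetic, using the fixed partition sizes assumed above and best-fit placement (which matches the outcome described in the example):

    #include <stdio.h>

    int main(void) {
        /* Fixed partition sizes in MB, as assumed in the example above. */
        int partitions[] = {8, 4, 4, 2};
        int nparts = sizeof partitions / sizeof partitions[0];
        int request = 3;  /* process P1 needs 3 MB */

        /* Best fit: choose the smallest partition that can still hold P1. */
        int chosen = -1;
        for (int i = 0; i < nparts; i++)
            if (partitions[i] >= request &&
                (chosen == -1 || partitions[i] < partitions[chosen]))
                chosen = i;

        if (chosen != -1)
            printf("P1 (%d MB) placed in a %d MB partition; "
                   "internal fragmentation = %d MB\n",
                   request, partitions[chosen], partitions[chosen] - request);
        return 0;
    }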

NON- CONTIGUOUS MEMORY ALLOCATION:

PAGING:
• Paging is a fixed size partitioning scheme.



• In paging, secondary memory and main memory are divided into equal fixed-size
partitions.
• The partitions of secondary memory are called pages.
• The partitions of main memory are called frames.
• Each process is divided into parts, where the size of each part is the same as the page
size.
• The size of the last part may be less than the page size.
• The pages of a process are stored in the frames of main memory depending upon their
availability.

• CPU generates a logical address consisting of two parts-


 Page Number: specifies the page of the process from which the CPU wants to read
data.
 Page Offset: specifies the word on that page that the CPU wants to read.

• For the page number generated by the CPU, the page table provides the corresponding
frame number (base address of the frame) where that page is stored in main
memory. The frame number combined with the page offset forms the required physical
address.



 Frame number specifies the specific frame where the required page is stored.
 Page Offset specifies the specific word that has to be read from that page.
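
The following C sketch shows this translation mechanically, assuming a 1 KB page size and a small illustrative page table; both the size and the table contents are assumptions for the example.

    #include <stdio.h>

    #define PAGE_SIZE 1024   /* assumed page size: 1 KB            */
    #define NUM_PAGES 4      /* assumed number of pages in process */

    /* Illustrative page table: page_table[p] is the frame holding page p. */
    static unsigned page_table[NUM_PAGES] = {5, 2, 7, 0};

    /* Split the logical address into page number and offset, look up the frame,
       and combine the frame number with the offset to get the physical address. */
    unsigned translate(unsigned logical_address) {
        unsigned page_number  = logical_address / PAGE_SIZE;
        unsigned page_offset  = logical_address % PAGE_SIZE;
        unsigned frame_number = page_table[page_number];
        return frame_number * PAGE_SIZE + page_offset;
    }

    int main(void) {
        unsigned logical = 2 * PAGE_SIZE + 100;   /* page 2, offset 100 */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        /* Expected: frame 7 * 1024 + 100 = 7268 */
        return 0;
    }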

SEGMENTATION:
• Like Paging, Segmentation is another non-contiguous memory allocation technique.
• In segmentation, the process is not divided blindly into fixed-size pages.
• Rather, the process is divided into modules for better visualization.

Characteristics-

• Segmentation is a variable size partitioning scheme.


• In segmentation, secondary memory and main memory are divided into partitions of
unequal size.
• The size of each partition depends on the length of the corresponding module.
• The partitions of secondary memory are called segments.



Consider a program that is divided into 5 segments.

Segment Table: Segment table is a table that stores the information about each segment
of the process. It has two columns.
 First column stores the size or length of the segment.
 Second column stores the base address or starting address of the segment in the
main memory.
 Segment table is stored as a separate segment in the main memory.
 Segment table base register (STBR) stores the base address of the segment table.

Each segment table entry therefore contains a limit and a base, where

 Limit indicates the length or size of the segment.


 Base indicates the base address or starting address of the segment in the main
memory.

• CPU generates a logical address consisting of two parts-


 Segment Number: specifies the segment of the process from which the CPU wants
to read data.
 Segment Offset: specifies the word in that segment that the CPU wants to read.
• For the generated segment number, the corresponding entry is located in the segment
table. Then, the segment offset is compared with the limit (size) of the segment. Two
cases are possible-

Case-01: Segment Offset >= Limit

If segment offset is found to be greater than or equal to the limit, a trap is generated.

Case-02: Segment Offset < Limit

 If the segment offset is found to be smaller than the limit, then the request is treated
as a valid request.
 The segment offset must always lie in the range [0, limit-1].
 The segment offset is then added to the base address of the segment.
 The result obtained after the addition is the address of the memory location storing the
required word, as the sketch below illustrates.
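
A minimal C sketch of this segment-table lookup follows; the table contents and the way the trap is represented (a -1 return value) are assumptions for illustration.

    #include <stdio.h>

    #define NUM_SEGMENTS 5

    /* Illustrative segment table: one (limit, base) pair per segment. */
    struct segment_entry { unsigned limit; unsigned base; };

    static struct segment_entry segment_table[NUM_SEGMENTS] = {
        {1000, 1400}, {400, 6300}, {400, 4300}, {1100, 3200}, {1000, 4700}
    };

    /* Translate (segment number, offset) to a physical address.
       Returns -1 to stand in for the trap a real MMU would raise. */
    int translate(unsigned segment, unsigned offset) {
        if (segment >= NUM_SEGMENTS) return -1;                 /* invalid segment number */
        if (offset >= segment_table[segment].limit) return -1;  /* Case-01: trap          */
        return (int)(segment_table[segment].base + offset);     /* Case-02: valid address */
    }

    int main(void) {
        printf("segment 2, offset 53   -> %d\n", translate(2, 53));    /* 4300 + 53 = 4353 */
        printf("segment 0, offset 1222 -> %d\n", translate(0, 1222));  /* trap (-1)        */
        return 0;
    }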



DIFFERENCE BETWEEN PAGING AND SEGMENTATION:
1. A page is of fixed block size, whereas a segment is of variable size.
2. Paging may lead to internal fragmentation, whereas segmentation may lead to external
   fragmentation.
3. In paging, the hardware decides the page size, whereas in segmentation the segment size is
   specified by the user.
4. In paging, a process address space is broken into fixed-sized blocks called pages; in
   segmentation, it is broken into variable-sized blocks called segments (sections).
5. The paging technique is faster for memory access; segmentation is slower than paging.
6. The page table stores the page information; the segment table stores the segment information.
7. Paging does not facilitate sharing of procedures, whereas segmentation allows procedures to
   be shared.
8. Paging fails to distinguish and protect procedures and data separately, whereas segmentation
   can keep them separate and protected.
9. The paging address space is one-dimensional, whereas segmentation provides many
   independent address spaces.
10. In paging, the user provides a single integer as the address, which the hardware divides into
    a page number and an offset; in segmentation, the user specifies the address as two
    quantities: a segment number and an offset.

SEGMENTED PAGING:
• Segmented paging is a scheme that implements the combination of segmentation and
paging.
• A process is first divided into segments, and then each segment is divided into pages.
• These pages are then stored in the frames of main memory.
• A page table exists for each segment that keeps track of the frames storing the pages of
that segment.
• Each page table occupies one frame in the main memory.
• Number of entries in the page table of a segment = number of pages into which that
segment is divided.
• A segment table exists that keeps track of the frames storing the page tables of the
segments.
• Number of entries in the segment table of a process = number of segments into which
that process is divided.
• The base address of the segment table is stored in the segment table base register. The
resulting two-level translation is sketched below.
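
The C sketch below strings the two levels together: a logical address is given as a segment number, a page number, and an offset; the segment table locates the segment's page table, and that page table supplies the frame. All table contents and sizes are assumptions for illustration.

    #include <stdio.h>

    #define PAGE_SIZE     256   /* assumed page/frame size                   */
    #define PAGES_PER_SEG 4     /* assumed maximum pages per segment         */
    #define NUM_SEGMENTS  2     /* assumed number of segments in the process */

    /* One page table per segment: page_tables[s][p] = frame holding page p of segment s. */
    static unsigned page_tables[NUM_SEGMENTS][PAGES_PER_SEG] = {
        {3, 7, 1, 9},
        {4, 0, 6, 2}
    };

    /* Segment table: for each segment, the index of its page table.
       (In a real system this entry would be the frame number holding the page table.) */
    static unsigned segment_table[NUM_SEGMENTS] = {0, 1};

    unsigned translate(unsigned segment, unsigned page, unsigned offset) {
        unsigned pt    = segment_table[segment];   /* level 1: find the segment's page table */
        unsigned frame = page_tables[pt][page];    /* level 2: find the frame for the page   */
        return frame * PAGE_SIZE + offset;         /* combine frame base with the offset     */
    }

    int main(void) {
        /* Segment 1, page 2, offset 20 -> frame 6 -> 6*256 + 20 = 1556 */
        printf("physical address = %u\n", translate(1, 2, 20));
        return 0;
    }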
