SET 6 Memory Management


Main Memory Management

Memory Manager
• Responsible for:
– Allocating main memory to processes
– Retrieving and storing contents to and from main
memory when requested
– Effective sharing of main memory
– Minimizing memory access time
Memory Management
• Uniprogramming systems divide
memory into OS area and user area
• In a multiprogramming system, user
memory is subdivided per process
• That memory needs to be allocated
efficiently to pack as many processes
into memory as possible
• Main memory and registers are the
only storage areas the CPU can
access directly
Memory Management
Requirements
• Relocation
– Programmers do not know where
programs will be placed in memory when
executed
– While a program is executing, it may be
swapped to disk and returned to main
memory at a different location
(relocated)
– Memory references in program code must
be translated to physical memory
addresses
Memory Management
Requirements
• Protection
– Processes should not be able to reference
memory locations in another process without
permission
– More importantly, processes should not access
the portion of memory where the OS is resident
– Since programs may be relocated, it is
impossible to check absolute addresses in
programs at compile time
– Must therefore be checked at run time (see the sketch below)
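
A minimal sketch (in Python) of the run-time check just described, assuming a simple base/limit (relocation-register) scheme; the register names and constant values are illustrative only:

# Illustrative base/limit check: a logical address is relocated by adding the
# base register, and rejected if it falls outside the process's limit.
BASE = 0x4000    # hypothetical start of this process's partition
LIMIT = 0x1000   # hypothetical size of the partition

def translate(logical_addr):
    if logical_addr < 0 or logical_addr >= LIMIT:
        raise MemoryError("protection fault: address outside process bounds")
    return BASE + logical_addr   # physical address seen by the memory unit

print(hex(translate(0x0042)))    # -> 0x4042
# translate(0x2000) would raise MemoryError: the offset exceeds the limit

Every legitimate reference is simply shifted by the base; anything past the limit is trapped before it reaches memory, which is how both relocation and protection are enforced at run time.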
Memory Management
Requirements
• Sharing
– Should be possible for several processes
to access the same portion of memory
– e.g., two instances of the same program
might share access to one copy of the
program code in memory
Memory Management
Requirements
• Logical Organization
– Programs are often composed of modules
that can be written and compiled
independently
– Different degrees of protection are given
to modules (read-only, execute-only)
– Share modules among processes
Physical Organization
• How Does a Program Start Running? The flow of
information between main and secondary memory:
• Step 1) The OS (loader) copies a program from
permanent storage into RAM
• Step 2) The CPU's Program Counter is then set to the
starting address of the program and the program
begins execution
• Question: what if the program is too big? Memory
available for the program + data may be insufficient
• One solution: the programmer breaks the code into pieces
that fit into main memory (RAM)
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a
separate physical address space is central to proper memory
management
– Logical address – generated by the CPU; also referred to as
virtual address
– Physical address – address seen by the memory unit
• Logical and physical addresses are the same in compile-time and
load-time address-binding schemes; logical (virtual) and physical
addresses differ in execution-time address-binding scheme
• Logical address space is the set of all logical addresses
generated by a program
• Physical address space is the set of all physical addresses
corresponding to these logical addresses
Swapping
• A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution
– Total physical memory space of processes can exceed
physical memory
• Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide direct
access to these memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out
so higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped (see the worked example below)
• System maintains a ready queue of ready-to-run processes
which have memory images on disk
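
As a rough, hypothetical illustration of the transfer-time point above: assuming a 100 MB memory image and a backing store with a sustained transfer rate of 50 MB/s, swapping the process out takes about 100 / 50 = 2 s, and bringing another 100 MB image in takes another 2 s, so a single swap costs roughly 4 s of transfer time. The cost scales directly with the amount of memory actually swapped.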
Schematic View of Swapping
Multiple-partition allocation

• Multiple-partition allocation
– Degree of multiprogramming limited by the number of
partitions
– Hole – block of available memory; holes of various size are
scattered throughout memory
– When a process arrives, it is allocated memory from a hole
large enough to accommodate it
– When a process exits, its partition is freed; adjacent free
partitions are merged (coalesced) into a single larger hole
– Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
Fixed Partitioning
Placement Algorithm with Fixed Partitions

• Equal-size partitions
– placement is arbitrary: any free partition can be used
• Unequal-size partitions
– Strategy 1: assign each process to the smallest partition
it will fit in (see the sketch below)
• one queue for each partition
• processes are assigned in such a way as to
minimize wasted memory within a partition
– Strategy 2: prefer any available partition, even if it is
larger than needed
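
A small Python sketch of Strategy 1, using assumed, illustrative partition sizes (in KB) that are not from the slides: each process is queued at the smallest fixed partition that can hold it.

# Illustrative unequal-size fixed partitions; one queue per partition (Strategy 1).
partitions = [2048, 4096, 6144, 8192]          # partition sizes in KB (assumed)
queues = {size: [] for size in partitions}

def enqueue(process_name, size_needed):
    # Pick the smallest partition the process will fit in.
    fitting = [p for p in sorted(partitions) if p >= size_needed]
    if not fitting:
        raise MemoryError(f"{process_name} does not fit in any partition")
    queues[fitting[0]].append(process_name)

enqueue("P1", 3000)   # waits on the 4096 KB partition's queue
enqueue("P2", 7000)   # waits on the 8192 KB partition's queue
print(queues)         # {2048: [], 4096: ['P1'], 6144: [], 8192: ['P2']}

Strategy 2 would instead place a process in whatever partition happens to be free, trading more internal fragmentation for less waiting.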
Dynamic Partitioning
• Partitions are of variable length and
number
• Process is allocated exactly as much
memory as required
• Eventually get holes in the memory.
This is called external fragmentation
• Must use compaction to shift
processes so they are contiguous and
all free memory is in one block
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of
free holes?
• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must search the
entire list, unless ordered by size
– Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search the entire list
– Produces the largest leftover hole
• First-fit and best-fit are better than worst-fit in terms of speed and
storage utilization
• Next-fit: begins to scan memory from the location of the last
placement and chooses the next available block that is large
enough
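
A sketch (in Python) of these placement policies over a hypothetical free-hole list; the hole sizes and the request size are assumptions chosen only for illustration.

# Free holes as (start_address, size) pairs -- the values are illustrative.
holes = [(0, 100), (200, 500), (800, 250), (1200, 300)]

def first_fit(holes, n):
    # First hole that is big enough, scanning from the start of the list.
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    # Smallest hole that is big enough; leaves the smallest leftover hole.
    return min((h for h in holes if h[1] >= n), key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    # Largest hole that is big enough; leaves the largest leftover hole.
    return max((h for h in holes if h[1] >= n), key=lambda h: h[1], default=None)

request = 260
print(first_fit(holes, request))   # (200, 500)
print(best_fit(holes, request))    # (1200, 300) -- smallest leftover
print(worst_fit(holes, request))   # (200, 500)  -- largest leftover

Next-fit differs from first-fit only in that the scan would resume from wherever the previous allocation was made rather than from the start of the list.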
Dynamic Partitioning Placement
Algorithms
• Operating system must decide how to
assign memory to processes
• Best-fit algorithm
– Chooses the block that is closest in size to
the request
– Worst performer overall
– Since the block closest in size to the request is
chosen, the leftover fragment is as small as possible;
these tiny fragments accumulate, so memory
compaction must be done more often
Dynamic Partitioning
Placement Algorithms
• First-fit algorithm
– Usually the fastest and best
– May leave many processes loaded in the
front end of memory that must be
searched over when trying to find a free
block
Dynamic Partitioning
Placement Algorithm
• Next-fit
– begin from location of last placement
– slightly worse than first-fit
– The largest block of memory gets broken
up into smaller blocks
– Compaction is required to obtain a large
block at the end of memory
Research: Buddy System
• The entire space available is treated as a
single block of size 2^U
• If a request of size s satisfies 2^(U-1) < s <= 2^U,
the entire block is allocated
– Otherwise the block is split into two equal
buddies
– The process continues until the smallest block
greater than or equal to s is generated
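
A sketch of the splitting rule in Python, assuming the whole space is 2^U bytes with U = 10 (i.e. 1024 bytes); the request sizes are illustrative.

# Buddy-system splitting: keep halving a free block until the smallest
# power-of-two block that still holds the request of size s is reached.
def buddy_block_size(s, U=10):
    block = 2 ** U                      # entire available space, 2^U
    if s > block:
        raise MemoryError("request larger than total space")
    while block // 2 >= s:              # split into two equal buddies
        block //= 2
    return block                        # smallest 2^k with 2^k >= s

print(buddy_block_size(100))   # -> 128
print(buddy_block_size(600))   # -> 1024, since 2^(U-1) < 600 <= 2^U

A full allocator would also keep the resulting buddies on per-size free lists and coalesce a freed block with its buddy when both become free.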
Fragmentation

• External Fragmentation – arises when free
memory is separated into small blocks
interspersed with allocated memory
– total memory space exists to satisfy a request,
but it is not contiguous
• Internal Fragmentation – allocated memory
may be slightly larger than the requested memory;
this size difference is memory internal to a
partition, but not being used
Fragmentation (Cont.)

• Reduce external fragmentation by compaction
– Shuffle memory contents to place all free
memory together in one large block
– Compaction is possible only if relocation
is dynamic, and is done at execution time
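
A minimal Python sketch of compaction over a hypothetical allocation table: allocated blocks are slid toward address 0 so that all free memory ends up as one block at the top. This only works if relocation is dynamic, because every moved process's addresses change.

# Allocated blocks as (process, start, size); all values are illustrative.
allocated = [("P1", 0, 100), ("P2", 300, 200), ("P3", 700, 150)]
TOTAL = 1000

def compact(blocks, total):
    next_free = 0
    moved = []
    for name, _, size in sorted(blocks, key=lambda b: b[1]):
        moved.append((name, next_free, size))   # new base address for the process
        next_free += size
    return moved, total - next_free             # size of the single remaining hole

new_layout, free_size = compact(allocated, TOTAL)
print(new_layout)   # [('P1', 0, 100), ('P2', 100, 200), ('P3', 300, 150)]
print(free_size)    # 550 -- all free memory is now one contiguous block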
