Operating System

UNIT 1

INTRODUCTION
 An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is software that performs all the basic tasks such as file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
 Some popular operating systems include Linux and Windows.
An operating system performs the following functions:
 Memory Management
 Process Management
 Device Management
 File Management
 Security
 Coordination between other software and users
 The first operating system was introduced in the early 1950s. It was called GMOS and was created by General Motors for IBM's machine.
 Operating systems in the 1950s were called single-stream batch processing systems because the data was submitted in groups. These new machines were called mainframes, and they were used by professional operators in large computer rooms.
 By the late 1960s, operating system designers were able to develop the system of multiprogramming, in which a computer is able to perform multiple jobs at the same time.
 The introduction of multiprogramming was a major step in the development of operating systems because it allows a CPU to be busy nearly 100 percent of the time that it is in operation.
Types of Operating Systems
1. Batch Processing Operating System
2. Multiprogramming Operating System
3. Multitasking (Time-Sharing) Operating System
4. Real-Time Operating System
5. Multiprocessor Operating System
 The users of a batch operating system do not
interact with the computer directly. Each user
prepares his job on an off-line device like
punch cards and submits it to the computer
operator.
 To speed up processing, jobs with similar
needs are batched together and run as a group.
The programmers leave their programs with
the operator and the operator then sorts the
programs with similar requirements into
batches.
 Advantages
1. Batch processing moves much of the operator's work to the computer.
2. Increased performance, as a new job gets started as soon as the previous job is finished, without any manual intervention.
 Disadvantages
1. Lack of interaction between the user and the job.
2. The CPU is often idle, because the mechanical I/O devices are slower than the CPU.
3. It is difficult to provide the desired priority.
4. Less throughput.
 When two or more programs reside in memory at the same time, this is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.
An OS does the following activities related to multiprogramming:
1. The operating system keeps several jobs in memory at a time.
2. This set of jobs is a subset of the jobs kept in the job pool.
3. The operating system picks and begins to execute one of the jobs in memory.
4. Multiprogramming operating systems monitor the state of all active programs and system resources using memory management programs, to ensure that the CPU is never idle unless there are no jobs to process.
Advantages
1. High and efficient CPU utilization.
2. The user feels that many programs are allotted the CPU almost simultaneously.
 Disadvantages
1. CPU scheduling is required.
2. To accommodate many jobs in memory, memory management is required.
In a multitasking OS, multiple jobs are executed by the CPU by switching between them. Switches occur so frequently that the users may interact with each program while it is running. Multitasking operating systems are also known as time-sharing systems.
An OS does the following activities related to multitasking:
1. The user gives instructions to the operating system or to a program directly, and receives an immediate response.
2. The OS handles multitasking by executing multiple operations/programs at a time.
 These Operating Systems were developed to
provide interactive use of a computer system at a
reasonable cost.
A time-shared operating system uses the concepts of CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared CPU.
 Each user has at least one separate program in memory.
1. Since interactive I/O typically runs at slower speeds, it may take a long time to complete. During this time, the CPU can be utilized by another process.
2. The operating system allows the users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user.
3. As the system switches the CPU rapidly from one user/program to the next, each user is given the impression that he/she has his/her own CPU, whereas actually one CPU is being shared among many users.
Disadvantages
1. Problem of reliability.
2. Question of security and integrity of user programs and data.
Real-Time Operating System
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data. Real-time systems can be used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.
1. Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time.
2. Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains the priority until it completes. Soft real-time systems have less utility than hard real-time systems. Examples include multimedia and virtual reality.
 Most computer systems are single-processor systems, i.e. they have only one processor. However, multiprocessor or parallel systems have multiple processors working in parallel that share the computer clock, memory, bus, peripheral devices, etc.
 There are mainly two types of multiprocessors:
1. Symmetric multiprocessors
2. Asymmetric multiprocessors
1. Symmetric Multiprocessors
In these systems, each processor runs an identical copy of the operating system, and the processors all communicate with each other. All the processors are in a peer-to-peer relationship, i.e. no master-slave relationship exists between them.
2. Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor that gives instructions to all the other processors. An asymmetric multiprocessor system contains a master-slave relationship.
Advantages
 Reliable
 Enhanced throughput
Disadvantages
 Increased expense
 Complicated operating system required
 Large main memory required
 Spooling is an acronym for simultaneous peripheral operations online.
 Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on the hard disk which is accessible to I/O devices.
 Spooling works like a typical request queue or spool where data, instructions and processes from multiple sources are accumulated for execution later on. Generally, the spool is maintained in the computer's physical memory, buffers or the I/O device-specific interrupts. The spool is processed in ascending order, working on the basis of a FIFO (first in, first out) algorithm.
 The most common implementation of spooling can be found in typical input/output devices such as the keyboard, mouse and printer.
 For example, in printer spooling, the documents/files that are sent to the printer are first stored in memory or the printer spooler. Once the printer is ready, it fetches the data from that spool and prints it.
 The spooling operation uses a disk as a very large buffer.
 Spooling is capable of overlapping I/O operations for one job with processor operations for another job.
 CONVENIENCE (EASE OF USE): From the user's point of view, the operating system should provide an environment where the user can easily learn and perform tasks. The OS should maximize the work that the user performs.
 EFFICIENT RESOURCE UTILIZATION: From the system's point of view, the OS is a resource allocator. It efficiently allocates resources (h/w and s/w) so that the performance of the system is increased.
Unit-2
Process Management
A process is basically a program in execution. The
execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents
the basic unit of work to be implemented in the
system.
 To put it in simple terms, we write our computer
programs in a text file and when we execute this
program, it becomes a process which performs all the
tasks mentioned in the program.
 When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. A simplified layout of a process inside main memory is described below.
1. Stack: The process Stack contains the temporary data
such as method/function parameters, return address and
local variables.
2. Heap: This is dynamically allocated memory to a
process during its run time.
3. Text: This section contains the compiled program code. The current activity is represented by the value of the program counter and the contents of the processor's registers.
4. Data: This section contains the global and static
variables.
Process State
 When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized.
 In general, a process can have one of the following five states at a time.
1. Start: This is the initial state when a process is first
started/created.

2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
3. Running: Once the process has been assigned to a processor by
the OS scheduler, the process state is set to running and the
processor executes its instructions.
4. Waiting: Process moves into the waiting state if it needs to wait
for a resource, such as waiting for user input, or waiting for a file
to become available.
5. Terminated or Exit: Once the process finishes its execution, or it
is terminated by the operating system, it is moved to the terminated
state where it waits to be removed from main memory.
A Process Control Block is a data structure maintained by the
Operating System for every process. The PCB is identified by
an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process.
 Process State: The current state of the process, i.e., whether it is ready, running, waiting, and so on.
 Process privileges: This is required to allow/disallow access to system resources.
 Process ID: Unique identification for each process in the operating system.
 Pointer: A pointer to the parent process.
 Program Counter: The program counter is a pointer to the address of the next instruction to be executed for this process.
 CPU registers: The various CPU registers whose contents need to be stored when the process leaves the running state.
 CPU Scheduling Information: Process priority and other scheduling information which is required to schedule the process.
 Memory management information: This includes the information of the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.
 Accounting information: This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
 IO status information: This includes the list of I/O devices allocated to the process.
The PCB is maintained for a process throughout its lifetime, and is deleted once the
process terminates.
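As a rough illustration (not part of the original text), the PCB fields listed above can be sketched as a small Python data structure; the field names here are hypothetical simplifications, not an actual kernel layout:

from dataclasses import dataclass, field

@dataclass
class PCB:
    # Minimal sketch of a Process Control Block; field names are illustrative.
    pid: int                                        # unique process ID
    state: str = "ready"                            # start/ready/running/waiting/terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    priority: int = 0                               # CPU scheduling information
    memory_info: tuple = (0, 0)                     # e.g. (base, limit) or page-table data
    io_devices: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                      # accounting information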
 Process scheduling handles the selection of a process for the processor on the basis of a scheduling algorithm, and also the removal of a process from the processor. It is an important part of a multiprogramming operating system.
 There are many scheduling queues that are used in process scheduling. When processes enter the system, they are put into the job queue. The processes that are ready to execute in main memory are kept in the ready queue. The processes that are waiting for an I/O device are kept in the I/O device queue.
The different schedulers that are used for process scheduling are:
1. Long Term Scheduler: The job scheduler or long-term scheduler selects processes from the storage pool in secondary memory and loads them into the ready queue in main memory for execution.
The long-term scheduler controls the degree of multiprogramming. It must select a careful mixture of I/O-bound and CPU-bound processes to yield optimum system throughput. If it selects too many CPU-bound processes then the I/O devices are idle, and if it selects too many I/O-bound processes then the processor has nothing to do.
The job of the long-term scheduler is very important and directly affects the system for a long time.
2. Short Term Scheduler: The short-term scheduler selects one of the processes from the ready queue and schedules it for execution. A scheduling algorithm is used to decide which process will be scheduled for execution next.
The short-term scheduler executes much more frequently than the long-term scheduler, as a process may execute only for a few milliseconds.
The choices of the short-term scheduler are very important. If it selects a process with a long burst time, then all the processes after that will have to wait for a long time in the ready queue. This is known as starvation, and it may happen if a wrong decision is made by the short-term scheduler.
3. Medium Term Scheduler: The medium-term scheduler swaps a process out of main memory. It can later swap the process back in, from the point at which it stopped executing. This can also be called suspending and resuming the process. It is helpful in reducing the degree of multiprogramming. Swapping is also useful for improving the mix of I/O-bound and CPU-bound processes in memory.
 Context Switching involves storing the context or
state of a process so that it can be reloaded when
required and execution can be resumed from the same
point as earlier.
 Save the context of the process that is currently running on the
CPU. Update the process control block and other important
fields.
 Move the process control block of the above process into the
relevant queue such as the ready queue, I/O queue etc.
 Select a new process for execution.
 Update the process control block of the selected process. This
includes updating the process state to running.
 Update the memory management data structures as required.
 Restore the context of the process that was previously running
when it is loaded again on the processor. This is done by
loading the previous values of the process control block and
registers.
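The steps above can be condensed into pseudocode. This is only a sketch in Python, with stand-in helper functions for the hardware-level operations (they are not real OS calls):

def save_cpu_registers():                 # stand-in for the hardware register save
    return {}

def load_memory_mappings(proc):           # stand-in for updating MMU/page tables
    pass

def restore_cpu_registers(registers):     # stand-in for the hardware register restore
    pass

def context_switch(current, next_proc, ready_queue):
    current.registers = save_cpu_registers()    # save context, update the PCB
    current.state = "ready"
    ready_queue.append(current)                 # move the PCB to the relevant queue
    next_proc.state = "running"                 # selected process is marked running
    load_memory_mappings(next_proc)             # update memory-management structures
    restore_cpu_registers(next_proc.registers)  # restore the context of the new process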
There are three major triggers for context switching. These are given as follows:
 Multitasking: In a multitasking environment, a process is switched out of the CPU so another process can be run. The state of the old process is saved and the state of the new process is loaded. On a pre-emptive system, processes may be switched out by the scheduler.
 Interrupt Handling: The hardware switches a part of the context when an interrupt occurs. This happens automatically. Only some of the context is changed, to minimize the time required to handle the interrupt.
 User and Kernel Mode Switching: A context switch may take place when a transition between user mode and kernel mode is required in the operating system.
A dispatcher is a special program which comes
into play after the scheduler. When the scheduler
completes its job of selecting a process, it is the
dispatcher which takes that process to the desired
state/queue. The dispatcher is the module that
gives a process control over the CPU after it has
been selected by the short-term scheduler. This
function involves the following:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user
program to restart that program
CPU scheduling is the process of determining which process will own the CPU for execution while another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU is idle, the OS at least selects one of the processes available in the ready queue for execution.
 Preemptive scheduling is a CPU scheduling technique that works by dividing CPU time into slots given to processes. The time slot given might or might not be enough to complete the whole process. When the burst time of the process is greater than the CPU cycle, the process is placed back into the ready queue and will execute in the next chance. This scheduling is used when a process switches to the ready state.
 Algorithms that are backed by preemptive scheduling are round-robin (RR), priority, and SRTF (shortest remaining time first).
 Non-preemptive scheduling is a CPU scheduling technique in which the process takes the resource (CPU time) and holds it till the process gets terminated or is pushed to the waiting state. No process is interrupted until it is completed, and after that the processor switches to another process.
 Algorithms that are based on non-preemptive scheduling are FCFS, non-preemptive priority, and shortest job first.
Scheduling Criteria
1. Processor utilization: Processor utilization is the average fraction of time during which the processor is busy. Being busy refers to the processor not being idle, and includes both time spent executing user programs and time spent executing the operating system. A scheduling algorithm that keeps the CPU busy most of the time is considered better. CPU utilization ranges from 0 to 100%.
2. Throughput: Throughput refers to the amount of work completed in a unit of time. One way to express throughput is by means of the number of user jobs executed in a unit of time.

3. Turnaround time: Turnaround time is defined as the time that elapses from the moment a program or a job is submitted until it is completed by the system. It is the time spent in the system, and it may be expressed as the sum of the job service time (execution time) and the waiting time.
4. Waiting time: Waiting time is the time that a process spends waiting for resource allocation due to contention with others in a multiprogramming system. It is the sum of the periods spent waiting in the ready queue. Waiting time may be expressed as turnaround time less the actual execution time:
waiting time = turnaround time - execution time (burst time)
5. Response time: A process can produce some output fairly early and can continue computing new results while earlier results are being output to the user. Thus another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the amount of time it takes to start responding.
It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time and response time.
First Come First Served (FCFS) Scheduling
 It is a non-preemptive type of scheduling algorithm.
 It is the simplest CPU scheduling algorithm.
 The job which comes first in the ready queue is executed first by the CPU.
 In this policy the ready queue is treated as a FIFO queue, i.e., a job coming into the ready queue is added at the tail of the queue, and the CPU takes the next process from the head of the queue for execution.
Ready queue in FCFS: a new process is added at the TAIL of the queue; the CPU takes the next process from the HEAD of the queue.
Advantages
 It is a simple scheduling algorithm.
Disadvantages
 The average waiting time is often long.
 It is not suitable for time-sharing systems.
 It may cause the convoy effect, where all other processes wait for one long process to get off the CPU.
 Example: let all the processes arrive at time 0, with burst times (in ms) P1 = 21, P2 = 3, P3 = 6, P4 = 2 (inferred from the completion times below).
 Turnaround time for process P1 = 21 - 0
 Turnaround time for process P2 = 24 - 0
 Turnaround time for process P3 = 30 - 0
 Turnaround time for process P4 = 32 - 0
 Average turnaround time = (21+24+30+32)/4 = 26.75 ms
 Waiting time for process P1 = 0
 Waiting time for process P2 = 21
 Waiting time for process P3 = 24
 Waiting time for process P4 = 30
 Average waiting time = (0+21+24+30)/4 = 18.75 ms
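The arithmetic above can be checked with a small simulation. This is a sketch, assuming all processes arrive at time 0 and the burst times inferred above:

def fcfs(bursts):
    # FCFS with all arrivals at t = 0: each job runs to completion in order.
    t, turnaround, waiting = 0, [], []
    for b in bursts:
        t += b                     # completion time of this job
        turnaround.append(t)       # turnaround = completion - arrival (0)
        waiting.append(t - b)      # waiting = turnaround - burst
    return turnaround, waiting

tat, wt = fcfs([21, 3, 6, 2])      # burst times inferred from the example
print(sum(tat) / 4, sum(wt) / 4)   # 26.75 ms and 18.75 ms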
Shortest Job First (SJF) Scheduling
 This is a very good CPU scheduling algorithm.
 The idea is that the job that has the smallest next CPU burst is allocated to the CPU for execution first.
 When more than one job has the same length of next CPU burst, FCFS scheduling is used among them.
 This scheduling would be better named shortest next CPU burst rather than shortest job first.
 It is the optimal CPU scheduling algorithm with respect to average waiting time.
Disadvantage
It cannot be implemented at the level of the short-term scheduler, since there is no way to know the length of the next CPU burst.
Example
Process | CPU Burst Time
P1 | 7
P2 | 5
P3 | 2
Gantt chart: P3 (0-2), P2 (2-7), P1 (7-14)
 Turnaround time for process P1 = 14 - 0
 Turnaround time for process P2 = 7 - 0
 Turnaround time for process P3 = 2 - 0
 Average turnaround time = (14+7+2)/3 ≈ 7.67 ms
 Waiting time for process P1 = 14 - 7 = 7
 Waiting time for process P2 = 7 - 5 = 2
 Waiting time for process P3 = 2 - 2 = 0
 Average waiting time = (7+2+0)/3 = 3 ms
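As a quick sketch (assuming all arrivals at time 0), non-preemptive SJF just sorts jobs by burst time before applying the same arithmetic as FCFS:

def sjf(bursts):
    # Non-preemptive SJF, all arrivals at t = 0: shortest next burst runs first.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t, tat, wt = 0, [0] * len(bursts), [0] * len(bursts)
    for i in order:
        t += bursts[i]
        tat[i] = t                 # turnaround = completion time (arrival = 0)
        wt[i] = t - bursts[i]      # waiting = turnaround - burst
    return tat, wt

tat, wt = sjf([7, 5, 2])           # P1, P2, P3 from the example
print(sum(tat) / 3, sum(wt) / 3)   # ~7.67 ms and 3.0 ms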
 In non-preemptive priority scheduling, processes are scheduled according to the priority number assigned to them. Once a process gets scheduled, it runs till completion. Generally, the lower the priority number, the higher the priority of the process. Priority can be decided based on memory requirements, time requirements or any other resource requirement.
 If processes have equal priority, then FCFS is used among them.
 There may be a number of criteria for deciding the priority. It is a very useful scheduling policy.
Disadvantage
 Starvation is the major problem with priority scheduling, i.e., some low-priority processes may wait indefinitely for the CPU.
Example
Process | Arrival time | Burst time | Priority
P1 | 0 | 5 | 2
P2 | 0 | 2 | 3
P3 | 0 | 6 | 1
Gantt chart: P3 (0-6), P1 (6-11), P2 (11-13)
 Turnaround time for P1 = 11 - 0
 Turnaround time for P2 = 13 - 0
 Turnaround time for P3 = 6 - 0
 Average turnaround time = (11+13+6)/3 = 10 ms
 Waiting time for P1 = 11 - 5 = 6
 Waiting time for P2 = 13 - 2 = 11
 Waiting time for P3 = 6 - 6 = 0
 Average waiting time = (6+11+0)/3 ≈ 5.67 ms
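Non-preemptive priority scheduling is the same sort of computation, ordering by priority number instead of burst length. A minimal sketch, assuming all arrivals at time 0 and lower number = higher priority:

def priority_np(bursts, priorities):
    # Run jobs in order of priority number (lower number = higher priority).
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    t, tat, wt = 0, [0] * len(bursts), [0] * len(bursts)
    for i in order:
        t += bursts[i]
        tat[i] = t
        wt[i] = t - bursts[i]
    return tat, wt

tat, wt = priority_np([5, 2, 6], [2, 3, 1])  # P1, P2, P3 from the example
print(sum(tat) / 3, sum(wt) / 3)             # 10.0 ms and ~5.67 ms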
 The idea of Round Robin (RR) scheduling is just like FCFS, but it is preemptive.
 Each process is provided a fixed time to execute, called a time quantum or time slice.
 Once a process has executed for the given time period, it is preempted and another process executes for a given time period.
 Here the ready queue is treated like a circular queue.
 A new process is added to the tail of the queue, and the process at the head of the queue is picked for execution.
 The timer is set to interrupt after one time slice.
 There may be cases as follows:
1. In the first case, the process's CPU burst time is less than one time slice; it is then completely executed, releases the CPU, and the next process is executed.
2. In the second case, an interrupt occurs after one time slice. A context switch is made and the process is put at the tail of the ready queue. The next process is executed.
 The performance of RR depends entirely on the size of the time slice.
 If the time slice is too large, then RR behaves just like FCFS.
 If the time slice is too small, then RR approaches processor sharing, and the overhead of frequent context switches means processes take much longer to finish.
Example: TQ = 3 ms
Process | Arrival time | Burst Time
P1 | 0 | 5
P2 | 1 | 6
P3 | 2 | 3
P4 | 3 | 1
P5 | 4 | 5
P6 | 6 | 4
Ready queue order: P2, P3, P4, P1, P5, P6, P2, P5, P6
Gantt chart: P1 (0-3), P2 (3-6), P3 (6-9), P4 (9-10), P1 (10-12), P5 (12-15), P6 (15-18), P2 (18-21), P5 (21-23), P6 (23-24)
 Turnaround time for P1 = 12 - 0 = 12
 Turnaround time for P2 = 21 - 1 = 20
 Turnaround time for P3 = 9 - 2 = 7
 Turnaround time for P4 = 10 - 3 = 7
 Turnaround time for P5 = 23 - 4 = 19
 Turnaround time for P6 = 24 - 6 = 18
 Average turnaround time = (12+20+7+7+19+18)/6 ≈ 13.83 ms
 Waiting time for P1 = 12 - 5 = 7
 Waiting time for P2 = 20 - 6 = 14
 Waiting time for P3 = 7 - 3 = 4
 Waiting time for P4 = 7 - 1 = 6
 Waiting time for P5 = 19 - 5 = 14
 Waiting time for P6 = 18 - 4 = 14
 Average waiting time = (7+14+4+6+14+14)/6 ≈ 9.83 ms
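A round-robin run like the one above can be reproduced with a queue-based simulation. This sketch assumes that a preempted process rejoins the queue behind processes that arrived during its slice, and that the ready queue never empties while jobs remain to arrive (both true for this input):

from collections import deque

def round_robin(arrival, burst, tq):
    n, t = len(burst), 0
    remaining, finish = burst[:], [0] * n
    q, queued = deque(), [False] * n
    def admit(now):                          # enqueue newly arrived processes
        for i in range(n):
            if not queued[i] and arrival[i] <= now:
                queued[i] = True
                q.append(i)
    admit(0)
    while q:
        i = q.popleft()
        run = min(tq, remaining[i])          # run for one slice or to completion
        t += run
        remaining[i] -= run
        admit(t)                             # arrivals during the slice queue first
        if remaining[i] > 0:
            q.append(i)                      # preempted: back to the tail
        else:
            finish[i] = t
    tat = [finish[i] - arrival[i] for i in range(n)]
    wt = [tat[i] - burst[i] for i in range(n)]
    return tat, wt

tat, wt = round_robin([0, 1, 2, 3, 4, 6], [5, 6, 3, 1, 5, 4], 3)
print(sum(tat) / 6, sum(wt) / 6)             # ~13.83 ms and ~9.83 ms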
 This algorithm is the preemptive version of SJF scheduling. In SRTF, the execution of a process can be stopped after a certain amount of time. At the arrival of every process, the short-term scheduler schedules, from among the available processes and the running process, the one with the least remaining burst time.
 Once all the processes are available in the ready queue, no further preemption is done and the algorithm works as SJF scheduling. The context of a process is saved in its Process Control Block when it is removed from execution and the next process is scheduled. This PCB is accessed on the next execution of the process.
Example
Process | Arrival time | Burst time
P1 | 0 | 8
P2 | 2 | 5
P3 | 3 | 9
P4 | 4 | 3
Gantt chart: P1 (0-2), P2 (2-7), P4 (7-10), P1 (10-16), P3 (16-25)
 Turnaround time for P1 = 16 - 0 = 16
 Turnaround time for P2 = 7 - 2 = 5
 Turnaround time for P3 = 25 - 3 = 22
 Turnaround time for P4 = 10 - 4 = 6
 Average turnaround time = (16+5+22+6)/4 = 12.25 ms
 Waiting time for P1 = 16 - 8 = 8
 Waiting time for P2 = 5 - 5 = 0
 Waiting time for P3 = 22 - 9 = 13
 Waiting time for P4 = 6 - 3 = 3
 Average waiting time = (8+0+13+3)/4 = 6 ms
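SRTF can be sketched by advancing time one unit at a time and always running the arrived process with the least remaining burst (ties broken by process order, which matches the example above):

def srtf(arrival, burst):
    # Preemptive SJF: each time unit, run the arrived job with least remaining time.
    n, t, done = len(burst), 0, 0
    remaining, finish = burst[:], [0] * n
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1                      # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            done += 1
    tat = [finish[i] - arrival[i] for i in range(n)]
    wt = [tat[i] - burst[i] for i in range(n)]
    return tat, wt

tat, wt = srtf([0, 2, 3, 4], [8, 5, 9, 3])   # the example above
print(sum(tat) / 4, sum(wt) / 4)             # 12.25 ms and 6.0 ms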
 Multilevel queue scheduling is not an independent scheduling algorithm. It makes use of other existing algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithm.
 Priorities are assigned to each queue.
 For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The process scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to that queue.
Advantages
 More efficient scheduling, since each process is handled differently according to its properties.

Disadvantages
 Processes cannot move between queues, which makes the scheme inflexible.
 A process that waits too long in a lower-priority queue may suffer starvation.
 One major disadvantage of multilevel queue scheduling is indefinite postponement of lower-priority processes. To eliminate this problem, the multilevel feedback queue allows processes to move between queues.
 The idea is that processes are prioritized according to their CPU usage, so that I/O-bound processes get higher priority than CPU-bound processes. Thus I/O-bound processes are put in the upper queues and CPU-bound processes are put in the lower queues.
 A process that waits too long in a lower queue may be moved to a higher-priority queue. This is called aging.
 Aging is a solution to the starvation problem.
Example (TQ = 10)
Process | Burst time | Arrival time
P1 | 10 | 0
P2 | 29 | 0
P3 | 3 | 0
P4 | 7 | 0
P5 | 12 | 0
Ready queue order: P1, P2, P3, P4, P5, P2, P5, P2
Gantt chart: P1 (0-10), P2 (10-20), P3 (20-23), P4 (23-30), P5 (30-40), P2 (40-50), P5 (50-52), P2 (52-61)
(The burst times for P1, P3 and P4 are inferred from the Gantt chart.)
UNIT-3
Memory Management
 Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and correspondingly updates the status.
 Each process has a separate memory space. A separate per-process memory space protects the processes from each other and is fundamental to having multiple processes loaded in memory for concurrent execution.
 To separate memory spaces we need the ability to determine the range of legal addresses that the process may access, and to ensure that the process can access only these legal addresses.
 We can provide this protection by using two registers
– base register and limit register.
 Base register: holds the smallest legal physical memory address.
 Limit register: specifies the size of the range.
 The base and limit registers can be loaded only by the operating system.
 A user program goes through several steps before being executed.
 Compile time: if it is known at compile time where the process will reside in memory, then absolute code can be generated.
 Load time: if it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code.
 Execution time: if the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
 An address generated by the CPU is referred to as
logical address.
 An address seen by the memory unit, i.e., the one loaded into the memory-address register of the memory, is referred to as a physical address.
 The compile-time and load-time address binding methods generate identical logical and physical addresses. However, execution-time address binding results in different logical and physical addresses. In this case the logical address is referred to as a virtual address.
 The set of all logical addresses generated by a
program is called “logical address space” .
 The set of all physical addresses corresponding to
these logical addresses is called “Physical address
space” .
 The run time mapping from virtual to physical
address is done by a hardware device called memory
management unit(MMU)
 A simple MMU scheme is a generalization of the base register scheme.
 The base register is now called a relocation register.
 The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
 The user program never sees the real physical address. For example, if the relocation register holds 14000 and the program creates a pointer to location 346, the reference is mapped to physical address 14346.
 The user program deals only with logical addresses.
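The relocation-register mapping is simple enough to express directly. A minimal sketch (the limit check follows the base/limit scheme described earlier; the numbers are the ones from the example, and the limit value is an assumption):

def mmu_translate(logical_addr, relocation_reg, limit_reg):
    # Dynamic relocation: physical address = relocation register + logical address.
    if logical_addr >= limit_reg:
        raise MemoryError("trap: address beyond the legal range")
    return relocation_reg + logical_addr

print(mmu_translate(346, 14000, 16384))   # 14346; 16384 is an assumed limit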
 Moving processes from main memory to disk and back is called swapping.
 When a process is brought into memory it is called swap-in, and when a process is moved back to disk it is called swap-out.
 Swapping is usually employed in memory management systems with contiguous allocation, such as fixed and dynamically partitioned memory and segmentation.
 It is the job of the swapper to decide which process will be moved back to disk when there is no space in memory and a new process wants to execute. Generally, priority-based scheduling is used by the swapper to swap processes in and out.
 The priority given to each process may depend on
process size or the time required for its execution.
 The process that has highest priority is put first by the
swapper in memory and the process that has lowest
priority is put out by the swapper from memory.
 The time required to swap out a process and then swap in another process is referred to as the swap time.
Swap time = time to swap out a process + time to swap in the other process
 Swap time depends highly on the transfer rate and the size of the process. If the process size is small and the transfer rate is high, then the swap time will be very small.
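For instance, with illustrative numbers not taken from the source: if a 10 MB process is swapped out at a transfer rate of 40 MB/s, the swap-out takes 10/40 = 0.25 s, and swapping in another 10 MB process takes a further 0.25 s, giving a swap time of about 0.5 s, ignoring disk latency.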
 Contiguous memory Allocation
 Non-contiguous memory allocation
 Memory is usually divided into two partitions: one for the resident OS and one for the user processes.
 In contiguous memory allocation, each process is contained in a single section of memory that is contiguous to the section containing the next process.
Protecting the operating system from user processes and protecting user processes from one another are the two main issues of memory protection.
 The simplest method for allocating memory is to
divide memory into several fixed sized partitions.
 Each partition may contain exactly one process.
 Thus the degree of multiprogramming is restricted by
the number of partitions.
 In this multi-partition method, when a partition is free, one process is selected from the input queue and loaded into the free partition.
 When the process terminates, the partition becomes available for another process.
 There are various cons of using this technique.
1. Internal Fragmentation: If the size of the process is less than the total size of the partition, then some part of the partition goes to waste and remains unused. This wasted memory inside a partition is called internal fragmentation.
 For example, a 4 MB partition used to load only a 3 MB process wastes the remaining 1 MB.
2. External Fragmentation: The total unused space across partitions cannot be used to load a process, even though enough space is available, because it is not contiguous.
 For example, the remaining 1 MB of each partition cannot be combined into one unit to store a 4 MB process. Despite the fact that sufficient total space is available, the process will not be loaded.
3. Limitation on the size of the process
 If the process size is larger than the size of the maximum-sized partition, then that process cannot be loaded into memory. Therefore, a limitation is imposed on the process size: it cannot be larger than the size of the largest partition.
4. Degree of multiprogramming is low
 By degree of multiprogramming, we simply mean the maximum number of processes that can be loaded into memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and very low, due to the fact that the size of a partition cannot be varied according to the size of the process.
 In this scheme the OS keeps a table indicating which parts of memory are available and which are occupied.
 Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
 Eventually, memory contains a set of holes of various sizes.
 Dynamic partitioning tries to overcome the problems caused by
fixed partitioning. In this technique, the partition size is not
declared initially. It is declared at the time of process loading.
 The first partition is reserved for the operating system. The
remaining space is divided into parts. The size of each partition
will be equal to the size of the process. The partition size varies
according to the need of the process so that the internal
fragmentation can be avoided.
1. No Internal Fragmentation: Given the fact that the
partitions in dynamic partitioning are created according to the
need of the process, It is clear that there will not be any internal
fragmentation because there will not be any unused remaining
space in the partition.
2. No Limitation on the size of the process: In Fixed
partitioning, the process with the size greater than the size of
the largest partition could not be executed due to the lack of
sufficient contiguous memory. Here, In Dynamic partitioning,
the process size can't be restricted since the partition size is
decided according to the process size.
3. Degree of multiprogramming is dynamic: Due to the
absence of internal fragmentation, there will not be any unused
space in the partition hence more processes can be loaded in
the memory at the same time.
 External Fragmentation: Absence of internal fragmentation
doesn't mean that there will not be external fragmentation.
 Let's consider three processes, P1 (1 MB), P2 (3 MB) and P3 (1 MB), loaded into the respective partitions of main memory.
 After some time, P1 and P3 complete and their assigned space is freed. Now there are two unused partitions (1 MB and 1 MB) available in main memory, but they cannot be used to load a 2 MB process, since they are not contiguously located.
 The rule says that the process must be contiguously present in
the main memory to get executed. We need to change this rule
to avoid external fragmentation.
 Complex Memory Allocation: In Fixed partitioning, the list of
partitions is made once and will never change but in dynamic
partitioning, the allocation and deallocation is very complex
since the partition size will be varied every time when it is
assigned to a new process. OS has to keep track of all the
partitions.
 Due to the fact that the allocation and deallocation are done
very frequently in dynamic memory allocation and the partition
size will be changed at each time, it is going to be very difficult
for OS to manage everything.
 As processes are loaded into and removed from memory, the free memory space is broken into little pieces.
 After some time, processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as fragmentation.
 There are two types of fragmentation:
 Internal fragmentation
 External fragmentation
1. First fit: Allocates the first hole that is big enough.
2. Best fit: Allocates the smallest hole that is big enough.
3. Worst fit: Allocates the largest hole.
Example: a request comes from process P4 (size 3 KB). A sketch of the three strategies is given below.
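A minimal sketch of the three placement strategies, using a hypothetical list of hole sizes (the original figure for P4's request did not survive, so the numbers below are made up):

def first_fit(holes, size):
    # Scan from the front; take the first hole that is big enough.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Take the smallest hole that is still big enough.
    fits = [i for i in range(len(holes)) if holes[i] >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, size):
    # Take the largest hole, if it is big enough.
    i = max(range(len(holes)), key=lambda i: holes[i])
    return i if holes[i] >= size else None

holes = [5, 2, 8, 3]                      # hypothetical hole sizes in KB
print(first_fit(holes, 3), best_fit(holes, 3), worst_fit(holes, 3))   # 0 3 2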
 One solution to external fragmentation is compaction.
 The goal is to shuffle the memory contents so as to place all the free memory together in one large block.
 Compaction is not always possible. If relocation is static and is done at load time, then compaction is not possible.
 It is possible only if relocation is dynamic and is done at execution time.
 If addresses are relocated dynamically, relocation requires only moving the program and data and then changing the base register to reflect the new base address.
 The simplest compaction algorithm is to move all the processes toward one end of memory, producing one large block of available memory.
 This scheme can be expensive.
 Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous. Paging avoids external fragmentation and the need for compaction.
 The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from their source (a file system or the backing store).
Hardware support for paging
 Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
 The page number is used as an index into a page table.
 The page table contains the base address (frame number) of each page in physical memory.
 This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
 The page size (like the frame size) is defined by the
hardware.
 The size of a page is typically a power of 2, varying
between 512 bytes and 16 MB per page, depending on
the computer architecture.
 The selection of a power of 2 as a page size makes the
translation of a logical address into a page number and
page offset particularly easy.
 If the size of the logical address space is 2^m, and a page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:
page number p (m - n bits) | page offset d (n bits)
where p is an index into the page table and d is the displacement within the page.
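Because the page size is a power of two, the split into page number and offset is just a division and a remainder (or, equivalently, a shift and a mask). A small sketch with a hypothetical page table:

def paged_translate(logical_addr, page_table, page_size):
    # page_size must be a power of two, as discussed above.
    p = logical_addr // page_size          # high-order bits: page number
    d = logical_addr % page_size           # low-order bits: page offset
    frame = page_table[p]                  # frame number stored in the page table
    return frame * page_size + d           # physical address sent to memory

page_table = [5, 6, 1, 2]                  # hypothetical: page p lives in frame page_table[p]
print(paged_translate(2 * 1024 + 20, page_table, 1024))   # page 2 -> frame 1 -> 1044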
You may have noticed that paging itself is a form of dynamic
relocation. Every logical address is bound by the paging
hardware to some physical address. Using paging is similar to
using a table of base (or relocation) registers, one for each frame
of memory.
 Paging reduces external fragmentation, but still suffers from internal fragmentation.
 Paging is simple to implement and is considered an efficient memory management technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so paging may not be good for a system with a small RAM.
 Segmentation is a memory-management scheme that supports
user view of memory.
 A logical address space is a collection of segments.
 Each segment has a name and a length.
 The addresses specify both the segment name and the offset
within the segment.
 The user therefore specifies each address by two quantities: a
segment name and an offset. (Contrast this scheme with the
paging scheme, in which the user specifies only a single
address, which is partitioned by the hardware into a page
number and an offset, all invisible to the programmer.) For
simplicity of implementation, segments are numbered and are
referred to by a segment number, rather than by a segment
name.
 Thus, a logical address consists of a two-tuple:
 <segment number, offset>
 Normally, the user program is compiled, and the compiler automatically constructs segments reflecting the input program. A C compiler might create separate segments for the following:
 The code
 Global variables
 The heap, from which memory is allocated
 The stacks used by each thread
 The standard C library
 Libraries that are linked in during compile time might be
assigned separate segments. The loader would take all these
segments and assign them segment numbers.
 Although the user can now refer to objects in the program by a
two-dimensional address, the actual physical memory is still,
of course, a one-dimensional sequence of bytes.
 Thus, we must define an implementation to map two
dimensional user-defined addresses into one-dimensional
physical addresses. This mapping is effected by a segment
table.
 Each entry in the segment table has a segment base and a
segment limit.
 The segment base contains the starting physical address where
the segment resides in memory, and the segment limit specifies
the length of the segment.
 The use of a segment table is illustrated below. A logical address consists of two parts: a segment number, s, and an offset into that segment, d.
 The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond end of segment).
 When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. The segment table is thus essentially an array of base-limit register pairs.
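A sketch of this translation, with the segment table as an array of (base, limit) pairs; the table contents below are hypothetical:

def segment_translate(s, d, segment_table):
    base, limit = segment_table[s]        # segment table entry: (base, limit)
    if d < 0 or d >= limit:               # offset must lie within the segment
        raise MemoryError("trap: logical addressing attempt beyond end of segment")
    return base + d                       # physical address of the desired byte

table = [(1400, 1000), (6300, 400)]       # hypothetical base-limit pairs
print(segment_translate(1, 53, table))    # 6300 + 53 = 6353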
