Unit 2 - Process Management
• Process Concept
• Process Scheduling
• Operations on Processes
• Cooperating Processes
• Interprocess Communication
Process Concept
• Objective of multiprogramming is:
• to have some process running at all
times, maximizing CPU utilization
• Objective of time sharing is:
• to switch the CPU among processes so
frequently that users can interact with
each program while it is running
Process Scheduling Queues
• Job queue – set of all processes in the system;
a process enters this queue when it enters the system.
• Ready queue – set of all processes residing in main memory,
ready and waiting to execute.
• Device queues – set of processes waiting for a particular I/O
device (e.g., disk, mouse, keyboard).
• Processes migrate between the various queues.
Queuing diagram Representation of
Process Scheduling
Process Creation
• Address space
– Child duplicate of parent.
– Child has a program loaded into it.
• UNIX examples
– fork system call creates a new process
– execve system call is used after a fork to replace the process's
memory space with a new program.
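The fork/execve pattern above can be sketched in Python (a POSIX-only sketch using `os.fork` and `os.execv`; `/bin/echo` is assumed to exist on the system):

```python
import os

pid = os.fork()                      # child gets a duplicate of the parent's address space
if pid == 0:
    # child: replace its memory image with a new program, as execve does in UNIX
    os.execv("/bin/echo", ["echo", "hello from the child process"])
    os._exit(127)                    # only reached if execv fails
else:
    _, status = os.waitpid(pid, 0)   # parent waits for the child to terminate
    exit_code = os.WEXITSTATUS(status)
```

After `execv` succeeds, the child never returns to this script: its address space now holds the `echo` program.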
Process Termination
CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-Processor Scheduling
• Real-Time Scheduling
• Algorithm Evaluation
• First Come First Serve (FCFS) is just like a FIFO (First In, First
Out) queue data structure, where the data element that is added
to the queue first is the one that leaves the queue first.
• It is used in batch systems.
• It is easy to understand and implement programmatically, using
a queue data structure: a new process enters through the tail of
the queue, and the scheduler selects the process at the head of
the queue.
• A perfect real-life example of FCFS scheduling is buying tickets
at a ticket counter.
• Burst time: the amount of time the CPU spends executing a process.
Gantt chart for arrival order P1, P2, P3 (burst times 24, 3, 3):
| P1 | P2 | P3 |
0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
If the processes arrive in the order P2, P3, P1 instead:
| P2 | P3 | P1 |
0 3 6 30
• Waiting time for P1 = 6; P2 = 0; P3 = 3; average = (6 + 0 + 3)/3 = 3
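The waiting times above can be checked with a short FCFS calculation (a sketch assuming the burst times 24, 3, 3 from the example):

```python
def fcfs_waiting_times(bursts):
    """Under FCFS, each process waits for the total burst time of those before it."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # process starts when all earlier bursts finish
        clock += burst
    return waits

w_a = fcfs_waiting_times([24, 3, 3])   # arrival order P1, P2, P3
w_b = fcfs_waiting_times([3, 3, 24])   # arrival order P2, P3, P1
avg_a = sum(w_a) / len(w_a)            # 17.0
avg_b = sum(w_b) / len(w_b)            # 3.0
```

The same three bursts give an average wait of 17 or 3 depending only on arrival order, which is exactly the weakness the convoy-effect discussion below describes.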
• Problems:
1. It is non-preemptive.
2. Improper process scheduling: short processes may wait a
long time behind long ones.
3. Resource utilization in parallel is not possible, which
leads to the convoy effect and hence poor resource
utilization.
– Convoy effect: the whole OS slows down because many
processes wait behind a few slow processes.
• Associate with each process the length of its next CPU burst.
Use these lengths to schedule the process with the shortest time.
• Two schemes:
– non-preemptive – once the CPU is given to a process, it cannot
be preempted until it completes its CPU burst.
– preemptive – if a new process arrives with a CPU burst length
less than the remaining time of the currently executing process,
preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives minimum average waiting time for a given
set of processes.
Non-preemptive SJF (P1: arrival 0, burst 7; P2: 2, 4; P3: 4, 1; P4: 5, 4):
| P1 | P3 | P2 | P4 |
0 7 8 12 16
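A minimal non-preemptive SJF simulation, assuming the example arrival/burst values (P1: 0, 7; P2: 2, 4; P3: 4, 1; P4: 5, 4):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = sorted(procs, key=lambda p: p[1])   # pending jobs, by arrival
    ready, waits, clock = [], {}, 0
    while remaining or ready:
        ready += [p for p in remaining if p[1] <= clock]
        remaining = [p for p in remaining if p[1] > clock]
        if not ready:                    # CPU idle: jump to the next arrival
            clock = remaining[0][1]
            continue
        ready.sort(key=lambda p: p[2])   # shortest next CPU burst first
        name, arrival, burst = ready.pop(0)
        waits[name] = clock - arrival    # time spent waiting in the ready queue
        clock += burst                   # run to completion (non-preemptive)
    return waits

waits = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
average = sum(waits.values()) / len(waits)   # (0 + 6 + 3 + 7) / 4 = 4.0
```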
Preemptive SJF (SRTF) for the same arrival/burst values (P1: 0, 7; P2: 2, 4; P3: 4, 1; P4: 5, 4):
| P1 | P2 | P3 | P2 | P4 | P1 |
0 2 4 5 7 11 16
Round Robin (a process that does not finish within its time quantum re-enters the tail of the ready queue, so it can appear several times):
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
• Three queues:
– Q0 – time quantum 8 milliseconds
– Q1 – time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0 which is served FCFS. When it
gains CPU, job receives 8 milliseconds. If it does not finish
in 8 milliseconds, job is moved to queue Q1.
– At Q1 job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted
and moved to queue Q2.
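The three-queue scheme above, simulated sequentially (the burst times A=5, B=20, C=40 are illustrative assumptions; the sketch models quantum expiry and demotion, not timer interrupts):

```python
from collections import deque

jobs = {"A": 5, "B": 20, "C": 40}            # name -> remaining burst (ms)
q0, q1, q2 = deque(jobs), deque(), deque()   # new jobs enter Q0
quantum = {0: 8, 1: 16}
finish_order = []

def run(name, slice_ms):
    jobs[name] -= slice_ms
    return jobs[name] <= 0                   # True if the job finished

while q0 or q1 or q2:
    if q0:                                   # Q0 has absolute priority
        name = q0.popleft()
        if run(name, min(quantum[0], jobs[name])):
            finish_order.append(name)
        else:
            q1.append(name)                  # demote after its 8 ms expire
    elif q1:
        name = q1.popleft()
        if run(name, min(quantum[1], jobs[name])):
            finish_order.append(name)
        else:
            q2.append(name)                  # demote after 16 additional ms
    else:
        name = q2.popleft()                  # Q2: FCFS, runs to completion
        run(name, jobs[name])
        finish_order.append(name)
```

Short job A finishes inside its first quantum, B finishes in Q1, and only the long job C sinks all the way to the FCFS queue.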
Process Synchronization
• Background
• The Critical-Section Problem
• Synchronization Hardware
• Semaphores
• Classical Problems of Synchronization
• Critical Regions
• Monitors
• Uniprocessor:
– could disable interrupts
– currently running code would execute without preemption
– generally too inefficient on multiprocessor systems
• Modern machines provide special atomic hardware instructions
– atomic – non-interruptible
– either test a memory word and set its value
– or swap the contents of two memory words
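Python exposes no raw test-and-set instruction, but `Lock.acquire(blocking=False)` behaves like one: it atomically tests the flag and sets it. A spinlock sketch built on that stand-in:

```python
import threading

flag = threading.Lock()          # stands in for the hardware lock word

def test_and_set():
    # acquire(blocking=False) atomically "tests and sets": it returns True
    # if the flag was clear (setting it), False if it was already set.
    # We return the OLD value, matching the TestAndSet convention.
    return not flag.acquire(blocking=False)

counter = 0

def worker():
    global counter
    for _ in range(10000):
        while test_and_set():    # spin until the old value was "clear"
            pass
        counter += 1             # critical section
        flag.release()           # clear the flag again

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With 2 threads each doing 10000 protected increments, the counter ends at exactly 20000; without the spinlock, updates could be lost.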
• Semaphore
A semaphore is an integer variable S, initialized to the number of resource instances
present in the system, that is used for process synchronization. Its value is changed
only through two operations, wait() and signal(). Both operations modify the value of
the semaphore, but only one process may change the value at a particular time, i.e.
no two processes can change the value of the semaphore simultaneously. There are
two categories of semaphores: counting semaphores and binary semaphores.
• Shared variables
– var mutex : semaphore
– initially mutex = 1
• Process Pi
repeat
wait(mutex);
critical section
signal(mutex);
remainder section
until false;
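The wait/signal loop above maps directly onto Python's `threading.Semaphore` (a sketch in which the critical section is just an increment of a shared counter):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initially 1
shared = 0

def process():
    global shared
    for _ in range(5000):
        mutex.acquire()          # wait(mutex)
        shared += 1              # critical section
        mutex.release()          # signal(mutex)
        # remainder section

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because at most one process holds the semaphore at a time, all 4 × 5000 increments survive.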
Bounded-Buffer Problem
– N buffers, each can hold one item.
– Constraints: full := 0; empty := n; mutex := 1;
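A bounded-buffer sketch using the full/empty/mutex initialization above with `threading.Semaphore` (one producer and one consumer are assumed):

```python
import threading
from collections import deque

N = 5
buffer = deque()                   # N buffers, each holding one item
empty = threading.Semaphore(N)     # empty := n  (free slots)
full = threading.Semaphore(0)      # full := 0   (filled slots)
mutex = threading.Semaphore(1)     # mutex := 1  (buffer access)
consumed = []

def producer():
    for item in range(20):
        empty.acquire()            # wait for a free slot
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()             # one more filled slot

def consumer():
    for _ in range(20):
        full.acquire()             # wait for a filled slot
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()            # one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The producer blocks on `empty` when all N slots are filled, and the consumer blocks on `full` when the buffer is empty, so neither ever over- or under-runs the buffer.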
• Shared data
var mutex, wrt: semaphore (=1);
readcount : integer (=0);
• Writer process
wait(wrt);
…
writing is performed
…
signal(wrt);
• Reader process
wait(mutex);
readcount := readcount +1;
if readcount = 1 then wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount := readcount – 1;
if readcount = 0 then signal(wrt);
signal(mutex);
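The reader/writer protocol above, transcribed into Python threads (a sketch; the shared "database" is just an integer):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # excludes writers (held by the reader group)
readcount = 0
data = 0                         # the shared "database"
observed = []

def writer():
    global data
    for _ in range(100):
        wrt.acquire()
        data += 1                # writing is performed
        wrt.release()

def reader():
    global readcount
    for _ in range(100):
        mutex.acquire()
        readcount += 1
        if readcount == 1:
            wrt.acquire()        # first reader locks out writers
        mutex.release()
        observed.append(data)    # reading is performed
        mutex.acquire()
        readcount -= 1
        if readcount == 0:
            wrt.release()        # last reader lets writers back in
        mutex.release()

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note the asymmetry: any number of readers may overlap, but a reader never observes a half-finished write because `wrt` is held for the whole reader group.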
• Shared data
var chopstick: array [0..4] of semaphore;
(=1 initially)
• Philosopher i:
repeat
wait(chopstick[i]);
wait(chopstick[(i+1) mod 5]);
…
eat
…
signal(chopstick[i]);
signal(chopstick[(i+1) mod 5]);
…
think
…
until false;
end;
begin
for i := 0 to 4
do state[i] := thinking;
end.
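The pseudocode above can deadlock if every philosopher grabs their left chopstick at the same instant. A Python sketch that breaks the circular wait by always acquiring the lower-numbered chopstick first (a resource-ordering variant, not the monitor solution):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # one per chopstick
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    first, second = sorted((left, right))   # impose a total order on chopsticks
    for _ in range(10):
        chopstick[first].acquire()          # lower-numbered chopstick first
        chopstick[second].acquire()
        meals[i] += 1                       # eat
        chopstick[second].release()
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Only philosopher 4 picks up the right chopstick first (chopstick 0 before 4), which is enough to break the cycle; all five threads always finish their ten meals.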
Operating System Concepts
Process Management: Deadlocks
• System Model
• Deadlock Characterization
• Methods for Handling Deadlocks
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Recovery from Deadlock
• Combined Approach to Deadlock Handling
Example with two semaphores A and B, each initialized to 1:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
Bridge Crossing Example
Directed graph
A set of vertices V and a set of edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all the processes in
the system.
– R = {R1, R2, …, Rm}, the set consisting of all resource types in
the system.
• Process node Pi; resource-type node Rj
• Request edge Pi → Rj: Pi requests an instance of Rj
• Assignment edge Rj → Pi: Pi is holding an instance of Rj
• No Preemption –
– If a process that is holding some resources requests
another resource that cannot be immediately allocated to it,
then all resources currently being held are released.
– Preempted resources are added to the list of resources for
which the process is waiting.
– Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
• Circular Wait – impose a total ordering of all resource types, and
require that each process requests resources in an increasing
order of enumeration.
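The resource-ordering rule can be illustrated with the earlier P0/P1 example: if both processes request A before B, the circular wait cannot form (a sketch using two Python locks):

```python
import threading

A = threading.Lock()
B = threading.Lock()
done = []

# Both processes acquire resources in the same enumeration order, A then B,
# so the hold-A-want-B / hold-B-want-A cycle can never arise.
def p0():
    with A:
        with B:
            done.append("P0")

def p1():
    with A:                      # P1 also takes A first, never B before A
        with B:
            done.append("P1")

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start()
t0.join(); t1.join()
```

Whichever process gets A first runs to completion and releases both locks; the other then proceeds, so both always finish.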
• Multiple instances.
• Each process must a priori claim maximum use.
• When a process requests a resource it may have to wait.
• When a process gets all its resources it must return them in a
finite amount of time.