Chapter Two: Process Management
Samuel G
Chapter Contents
1. Introduction To Process
2. Process Scheduling
3. CPU Scheduling
4. Deadlock
2.1 Introduction To Process
⚫ Process vs. Program
⚫ Program
o It is a sequence of instructions defined to perform
some task.
o It is a passive entity.
⚫ Process
o It is a program in execution.
o It is an instance of a program running on a
computer.
o It is an active entity.
⚫ A processor performs the actions defined by a process. A process includes:
⚫ program counter, stack, and data section
Continued…
There are two types of processes
1. Sequential Processes: Execution progresses in a sequential
fashion, i.e. one after the other; At any point in time, at
most one process is being executed.
2. Concurrent Processes: There are two types of concurrent
processes.
1. True Concurrency (Multiprocessing)
⚫ Two or more processes are executed simultaneously in a
multiprocessor environment. Supports real
parallelism.
2. Apparent Concurrency (Multiprogramming)
⚫ Two or more processes are executed in parallel in a uniprocessor environment by switching the CPU from one process to another. Supports pseudo-parallelism.
Continued…
⚫ Example: Consider a computer scientist who is baking a
birthday cake for her daughter and who is interrupted by her
daughter’s bleeding accident.
⚫ Analysis
√ Processes:   Baking Cake          |  First Aid
√ Processor:   Computer Scientist   |  Computer Scientist
√ Program:     Recipe               |  First Aid Book
√ Input:       Ingredients          |  First Aid Kit
√ Output:      Cake                 |  First Aid Service
√ Priority:    Lower                |  Higher
√ States:      Running, Idle        |  Running, Idle
Continued …
PROCESS STATES:
⚫ During its lifetime, a process passes through a number of
states. The most important states are New, Ready, Running,
Blocked (Waiting), and Terminated.
Continued …
⚫ New:
⚫ A process that has just been created but has not yet been
admitted to the pool of executable processes by the operating
system.
⚫ Ready:
⚫ A process that is not currently executing but that is ready to
be executed as soon as the operating system dispatches it.
⚫ The process is in main memory and is available for execution.
⚫ Running:
⚫ A process that is currently being executed.
⚫ Blocked (Waiting):
⚫ A process that is waiting for the completion of some event,
such as
I/ O operation.
⚫ Exit (Terminated):
⚫ A process that has finished execution, or has been aborted, and has been released from the pool of executable processes by the operating system.
Continued …
PROCESS CONTROL BLOCK:
⚫ Each process is represented in the operating system by a
process control block (PCB)-also called a task control block. It
contains many pieces of information associated with a specific
process
Continued …
⚫ Process state: may be new, ready, running, waiting, or halted.
⚫ Program counter: indicates the address of the next
instruction to be executed for this process.
⚫ CPU registers: it varies in number and type, depending on
the computer architecture. They include accumulators, index
registers, stack pointers, and general-purpose registers, plus
any condition-code information.
⚫ CPU-scheduling information: includes the process
priority, pointers to scheduling queues, and any other
scheduling parameters.
⚫ Memory-management information: includes such information
as the value of the base and limit registers, the page tables, or
the segment tables, depending on the memory system used by
the operating system.
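The PCB fields listed above can be sketched as a simple record. The following is an illustrative sketch only, not any real kernel's layout; the field names are assumptions chosen to mirror the bullets above.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block; field names are assumptions."""
    pid: int
    state: str = "new"                # new, ready, running, waiting, halted
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    priority: int = 0                 # CPU-scheduling information
    base: int = 0                     # memory-management: base register
    limit: int = 0                    # memory-management: limit register

pcb = PCB(pid=1)
pcb.state = "ready"                   # admitted to the ready queue
```

On a context switch, the OS saves the running process's registers and program counter into its PCB and restores them from the next process's PCB.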
THREADS
⚫ A thread is a dispatchable unit of work (lightweight
process) that has independent context, state and stack.
⚫ A process is a collection of one or more threads
and associated system resources.
⚫ Traditional operating systems are single-threaded systems.
Two types of threads
1. User threads are supported above the kernel and
are implemented by a thread library at the user
level.
2. Kernel threads are supported directly by the operating
system; the kernel performs thread creation,
scheduling, and management in kernel space.
THREADS(Cont’d)
Multithreading:
⚫ It is a technique in which a process, executing an application,
is divided into threads that can run concurrently.
⚫ Modern operating systems are multithreaded systems.
⚫ The benefits of multithreaded programming can be
broken down into four major categories:
Responsiveness: Multithreading an interactive application may
allow a program to continue running even if part of it is blocked
or is performing a lengthy operation, thereby increasing
responsiveness to the user.
Resource sharing: By default, threads share the memory and
the resources of the process to which they belong.
THREADS(Cont’d)
Economy: Allocating memory and resources for process
creation is costly. Alternatively, because threads share resources
of the process to which they belong, it is more economical to
create and context switch threads
Utilization of multiprocessor architectures:
⚫ The benefits of multithreading are greatly increased in a multiprocessor
architecture, where each thread may run in parallel on
a different processor.
⚫ A single-threaded process can only run on one CPU
⚫ Multithreading on a multi-CPU machine increases concurrency.
In a single-processor architecture, the CPU switches
between threads so quickly as to create an illusion of
parallelism, but in reality only one thread is running at a time.
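The resource-sharing benefit above can be sketched with Python's standard threading module. This is a minimal illustration (the names `results` and `worker` are made up); note that CPython's global interpreter lock means this demonstrates concurrency, not true parallelism.

```python
import threading

results = {}  # threads share the memory of the process they belong to

def worker(name, n):
    # each thread runs concurrently but writes into shared memory
    results[name] = sum(range(n))

threads = [threading.Thread(target=worker, args=(f"t{i}", 1000))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until every thread has finished
```

No explicit message passing is needed: all four threads see the same `results` dictionary because they belong to the same process.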
2.2 Inter-process Communication
⚫ Mechanism for processes to communicate and to synchronize
their actions.
⚫ Message system – processes communicate with each other
without resorting to shared variables.
⚫ IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
⚫ If P and Q wish to communicate, they need to:
Establish a communication link between them
Exchange messages via send/receive
⚫ Implementation of communication link
Physical (e.g., shared memory, system bus)
Logical (e.g., logical properties)
Cont’d …
DIRECT COMMUNICATION:
⚫ Processes must name each other explicitly:
⚫ send (P, message) – send a message to process P
⚫ receive(Q, message) – receive a message from process Q
⚫ Properties of communication link
⚫ Links are established automatically.
⚫ A link is associated with exactly one pair of communicating
processes.
⚫ Between each pair there exists exactly one link.
⚫ The link may be unidirectional, but is usually bi-directional.
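A hedged sketch of direct communication using Python queues, with one inbox per named process. It simplifies the scheme above: `receive` here names the inbox being read rather than the sending process, and all the names are illustrative.

```python
import queue

# one inbox per named process; send(P, message) targets P's inbox
inboxes = {"P": queue.Queue(), "Q": queue.Queue()}

def send(dest, message):
    inboxes[dest].put(message)      # send(P, message)

def receive(owner):
    return inboxes[owner].get()     # receive from the owner's inbox

send("Q", "hello from P")           # P sends a message to Q
msg = receive("Q")                  # Q receives it
```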
INDIRECT COMMUNICATION:
⚫ Messages are sent to and received from mailboxes (also referred to as
ports).
⚫ Each mailbox has a unique id.
⚫ Processes can communicate only if they share a mailbox.
Cont’d…
⚫ Properties of communication link
⚫ Link established only if processes share a common mailbox
⚫ A link may be associated with many processes.
⚫ Each pair of processes may share several communication links.
⚫ Link may be unidirectional or bi-directional.
⚫ Operations
⚫ create a new mailbox, send and receive messages through mailbox,
destroy a mailbox
⚫ Mailbox sharing
⚫ P1, P2, and P3 share mailbox A. P1 sends; P2 and P3 receive. Who gets
the message?
⚫ Solutions
⚫ Allow a link to be associated with at most two processes.
⚫ Allow only one process at a time to execute a receive operation.
⚫ Sender is notified who the receiver was.
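The mailbox-sharing scenario above can be sketched with a shared queue standing in for mailbox A. This is an illustration under assumptions (thread-based "processes", made-up message names); `Queue.get()` naturally implements the second solution, since each message is handed to exactly one receiver.

```python
import threading, queue

mailbox_A = queue.Queue()  # mailbox "A" shared by P1, P2 and P3
received = {}

def receiver(name):
    # Queue.get() is atomic: each message goes to exactly one receiver
    received[name] = mailbox_A.get()

mailbox_A.put("msg-1")     # P1 sends two messages
mailbox_A.put("msg-2")
threads = [threading.Thread(target=receiver, args=(n,)) for n in ("P2", "P3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each of P2 and P3 ends up with exactly one of P1's messages; neither message is delivered twice.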
Cont’d …
Race Conditions:
⚫ Race condition: The situation where several processes access –
and manipulate shared data concurrently.
⚫ The final value of the shared data depends upon which process
finishes last.
⚫ Situations where two or more processes are reading or writing some
shared data and the final result depends on precisely who runs when
are called race conditions.
⚫ To prevent race conditions, concurrent processes must be
synchronized.
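The synchronization requirement above can be illustrated with a lock protecting a shared counter. The example is a minimal sketch (the variable names are made up): `counter += 1` is a read-modify-write that can lose updates if threads interleave, so the critical section is guarded.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: the read-modify-write is atomic
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40_000; without the lock, updates could be lost
```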
Continued…
The average waiting time is 8.2 milliseconds if arrival time for all processes is
0.
Cont’d.
4. Round Robin:
⚫ Each process gets a small unit of CPU time (time quantum),
usually
10-100 milliseconds.
⚫ After this time has elapsed, the process is preempted and added to
the end of the ready queue.
⚫ Implemented with a FIFO ready queue, which can be treated as a
circular queue.
⚫ If a process's burst time is greater than the time quantum, it
executes for one quantum, is preempted, and rejoins the ready
queue to compete for the CPU with its remaining burst time.
⚫ It gives a larger average waiting time but ensures fairness.
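The rules above can be sketched as a small simulation. The burst times below (P1 = 24, P2 = 3, P3 = 3 ms, quantum = 4) are an assumed example, not the slide's own figures; all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.
    Returns each process's waiting time (completion - burst)."""
    ready = deque(bursts.items())          # FIFO ready queue
    clock, waiting = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))  # rejoin the ready queue
        else:
            waiting[name] = clock - bursts[name]
    return waiting

w = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
avg = sum(w.values()) / len(w)             # 17/3, about 5.67 ms
```

The Gantt chart here is P1(0-4), P2(4-7), P3(7-10), then P1 runs its remaining slices to 30, giving waiting times of 6, 4 and 7 ms.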
Cont’d
Example of RR with Time Quantum = 4
Average W(T) is computed from the resulting Gantt chart (shown on slide).
Cont’d
5. Multilevel Queue Scheduling:
Ready queue is partitioned into separate queues:
⚫ foreground (interactive)
⚫ background (batch)
⚫ Each queue has its own scheduling algorithm,
⚫ foreground – RR
⚫ background – FCFS
⚫ Scheduling must be done between the queues.
✦ Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
✦ Time slice – each queue gets a certain amount of CPU time which it
can schedule amongst its processes; i.e., 80% to foreground in RR
✦ 20% to background in FCFS
Cont’d
…
Cont’d
6. Multilevel Feedback Queue Scheduling:
⚫ A process can move between the various queues; aging can be
implemented this way.
⚫ Multilevel-feedback-queue scheduler defined by the following
parameters:
✦ number of queues
✦ scheduling algorithms for each queue
✦ method used to determine when to upgrade a process
✦ method used to determine when to demote a process
✦ method used to determine which queue a process will
enter when that process needs service
Cont’d
Examples
⚫ Three queues:
⚫ Q0 – time quantum 8 milliseconds
⚫ Q1 – time quantum 16 milliseconds
⚫ Q2 – FCFS
⚫ Scheduling
✦ A new job enters queue Q0 which is served FCFS. When it gains
CPU, job receives 8 milliseconds. If it does not finish in 8 milliseconds,
job is moved to queue Q1.
⚫ At Q1 job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
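The three-queue example above can be traced in code. This sketch follows a single job through the queues and ignores competition from other jobs, which a real MLFQ scheduler would interleave; the function name is made up.

```python
def mlfq_trace(burst, quanta=(8, 16)):
    """Trace how many milliseconds a single job receives in each queue.
    Q0 and Q1 are served with the given quanta; Q2 is FCFS."""
    remaining, trace = burst, []
    for q, quantum in enumerate(quanta):
        run = min(quantum, remaining)
        trace.append((f"Q{q}", run))
        remaining -= run
        if remaining == 0:
            return trace               # finished before being demoted
    trace.append(("Q2", remaining))    # FCFS: runs until completion
    return trace
```

A 30 ms job gets 8 ms in Q0, 16 ms in Q1, and finishes its last 6 ms in Q2; a 5 ms job completes inside its first Q0 quantum and is never demoted.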
Cont’d
…
Deadlock
⚫ It is a situation where a group of processes are permanently blocked
as a result of each process having acquired a subset of the resources
needed for its completion and waiting for release of the remaining
resource held by others in the same group.
⚫ Deadlock occurs when two or more processes are each waiting for
an event that will never occur, since it can only be generated by
another process in that set.
⚫ It happens when processes have been granted exclusive access
to resources such as devices, files, and so forth.
⚫ A resource is anything that can only be used by a single process at
any instance.
Cont’d
⚫ There are two types of resources
Preemptable: Can be taken away from a process.
Non-preemptable: (Cannot be taken away from its current owner
without causing the computation to fail). If a process has begun to
burn a CD-ROM, suddenly taking the CD recorder away from it and
giving it to another process will result in a garbled
(distorted/confused/corrupted) CD.
⚫ In general deadlock involves non-preemptable resources.
Example: Suppose that process A asks for CD-ROM drive and gets
it. After a moment, process B asks for the flatbed plotter and gets it,
too. Now process A asks for the plotter and blocks waiting for it.
Finally process B asks for CD-ROM drive and also blocks. This
situation is called a deadlock.
Cont’d
⚫ Example 1: A real-life deadlock example: Bridge Crossing
Cont’d
⚫ Example 2: Traffic Gridlock
Cont’d
⚫ Continued … traffic deadlock
System Model
Two general categories of resources can be distinguished.
⚫ Reusable Resources: Can be safely used by only one process at a
time and are not depleted or run down by that use.
√ They include processors, I/O channels, I/O devices, primary and
secondary memory, files, databases, semaphores, etc.
⚫ Consumable Resources: Can be created and destroyed. They
include interrupts, signals, messages, and information in I/O
buffers.
⚫ A process may utilize a resource only in the following
sequence:
1. Request: If the request cannot be granted immediately, the
requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
Cont’d
⚫ How does deadlock happen?
(a) (b) [resource-allocation graphs shown on slide]
⚫ This situation creates a circular wait, which can cause a deadlock.
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.
1. Mutual exclusion: only one process at a time can use a resource.
2. Hold and wait: a process holding at least one resource is waiting
to acquire additional resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. Circular wait: there exists a set {P0, P1, …, Pn} of
waiting processes such that P0 is waiting for a resource
that is held by P1, P1 is waiting for a resource that is held
by P2, …, Pn–1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
Deadlock Strategies
⚫ Various strategies have been followed by different
Operating Systems to deal with the problem of a
deadlock. These are listed below.
ꝏ Ignore it.
ꝏ Detect it.
ꝏ Recover from it.
ꝏ Prevent it.
ꝏ Avoid it.
Cont’d
1. Ignore a Deadlock:
⚫ The simplest strategy.
⚫ Pretend to be totally unaware of it.
⚫ The Ostrich algorithm is the best-known method, for two reasons.
Firstly, the deadlock detection, recovery and prevention algorithms
are complex to write, test and debug.
Secondly, they slow down the system considerably. If a deadlock occurs
only rarely, the occasional cost of restarting the affected jobs may
not be significant.
Cont’d
2. Detect a Deadlock:
Directed Resource Allocation Graphs (DRAGs) provide good help for
detection.
(a) (b) [example DRAGs shown on slide]
Cont’d
The Operating System, does the following to detect a deadlock.
1. Number all processes as P0, P1, …, PN.
2. Number each resource separately using a meaningful coding scheme.
For instance, the first character “R” denotes a resource, the
second character denotes the resource type (0 = tape, 1 = printer,
etc.), and the third character denotes the resource number or an
instance within the type. E.g. R00, R01, R02, … could be different
tape drives of the same type; R10, R11, R12, … could be different
printers of the same type, with the assumption
that
resources belonging to the same type are interchangeable. The
Operating System could pick up any of the available resources
within a given type and allocate it without any difference.
Cont’d
3. Maintain a resource-wise table and a process-wise table.
The resource-wise table contains, for each resource, its type,
allocation status, the process to which it is allocated, and the
processes that are waiting for it.
Cont’d
Another table is the process-wise table giving, for each process,
the resources held by it and the resources it is waiting for.
Cont’d
4. Whenever a process requests the OS for a resource, the request is
obviously for a resource belonging to a resource type.
⚫ The OS then goes through the resource wise table to see if there is
any free resource of that type, and if there is any, allocates it to the
process. After this, it updates both these tables appropriately.
⚫ If no free resource of that type is available, the OS keeps that
process waiting on one of the resources for that type.
5. At any time, the Operating System can use these tables to detect a
circular wait or a deadlock.
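The detection step above amounts to building a wait-for graph from the two tables and checking it for a cycle. A hedged sketch (the table layout and resource codes are illustrative, following the R00/R10 naming convention from earlier):

```python
def build_wait_for(held_by, waiting_for):
    """Edge P -> Q whenever P waits for a resource that Q holds."""
    return {p: {held_by[r] for r in resources if r in held_by}
            for p, resources in waiting_for.items()}

def has_cycle(graph):
    """Depth-first search; a back edge means a circular wait (deadlock)."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True                # reached a node still on the path
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, ())):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(p) for p in graph)

held_by = {"R00": "P1", "R10": "P2"}       # resource-wise: who holds what
waiting = {"P1": ["R10"], "P2": ["R00"]}   # process-wise: who waits for what
wait_for = build_wait_for(held_by, waiting)
```

Here P1 holds the tape drive R00 and waits for the printer R10, while P2 holds R10 and waits for R00, so the wait-for graph contains the cycle P1 → P2 → P1.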
Cont’d
3. Recover from Deadlock:
i. Suspend/Resume a Process:
⚫ A process is selected based on a variety of criteria (low priority,
for instance) and it is suspended for a long time.
⚫ The resources are reclaimed from the process and then allocated
to other processes that are waiting for them.
ii. Kill a Process:
⚫ The OS decides to kill one or more processes and reclaim all
their resources after ensuring that such action will solve the
deadlock.
⚫ The OS can use the DRAG and deadlock detection algorithms to
ensure that after killing a specific process, there will not be a
deadlock. This solution is simple, but involves loss of at least one
process.
Cont’d
⚫ There are two methods to implement this approach. The first and
crude method is to kill all the deadlocked processes.
⚫ The method looks very simple and effective, but the operation of
killing a number of processes and starting them all over again is
very costly.
⚫ Second method involves the following steps:
1. Kill one process
2. Check the state of the system.
3. If the deadlock is resolved, exit. Otherwise go
back to step 1.
Cont’d
4. Prevent a Deadlock:
If any one of the deadlock prerequisite or characteristics is not met,
there cannot be a deadlock.
i. Mutual Exclusion Condition
If every resource in the system were sharable by multiple processes,
deadlocks would never occur.
ii. Hold and Wait Condition:
The OS must make a process requesting some resources give up the
already-held resources first and then try for the requested
resources.
iii. No Preemption Condition:
The OS may forcibly take resources away from the processes holding them.
iv. Circular Wait Condition:
If the other conditions cannot be broken, only this one remains. One way
in which this can be achieved is to force a process to hold only
one resource at a time.
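Besides the slide's rule of holding only one resource at a time, a standard way to break circular wait is to impose a global ordering on resources and require every process to acquire them in that order. A sketch under assumptions (thread-based "processes", locks standing in for the plotter and CD drive from the earlier example):

```python
import threading

# give every resource a fixed rank; all processes acquire in rank order
plotter, cd_drive = threading.Lock(), threading.Lock()
rank = {id(cd_drive): 0, id(plotter): 1}

def acquire_in_order(*locks):
    # sorting by rank makes a circular wait structurally impossible
    for lock in sorted(locks, key=lambda l: rank[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

done = []

def process(name):
    acquire_in_order(plotter, cd_drive)  # argument order does not matter
    done.append(name)
    release_all(plotter, cd_drive)

threads = [threading.Thread(target=process, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because both processes always take the CD drive before the plotter, the hold-one-wait-for-the-other interleaving that deadlocked A and B earlier cannot arise.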
Cont’d
5. Avoid a Deadlock:
It is concerned with creating an environment where a deadlock is
theoretically possible (it is not prevented), but some algorithm in the
OS ensures, before allocating any resource, that a deadlock can be
avoided after the allocation.
Banker’s Algorithm:
It was proposed by Dijkstra in 1965 for deadlock avoidance.
It is known as the Banker's algorithm because of its similarity to the
problem of a banker wanting to distribute loans to various customers
within limited resources.
Before a resource is allocated to a process, the algorithm calculates in
advance whether the allocation can lead to a deadlock (an “unsafe
state”) or whether the system can certainly avoid it (a “safe state”).
Cont’d
Banker’s algorithm maintains two matrices on a dynamic basis. Matrix A
consists of the resources allocated to different processes at a given time.
Matrix B maintains the resources still needed by different processes at the
same time. These resources could be needed one after the other or
simultaneously.
Matrix A (resources allocated):
Process   Tape Drives   Printers   Plotters
P0        2             0          0
P1        0             1          0
P2        1             2          1
P3        1             0          1

Matrix B (resources still required):
Process   Tape Drives   Printers   Plotters
P0        1             0          0
P1        1             1          0
P2        2             1          1
P3        1             1          1

Total Resources (T) = (5, 4, 3)
Held Resources (H) = (4, 3, 2)
Free Resources (F) = (1, 1, 1)
Cont’d
Free resources F = T - H = (1, 1, 1)
This means that the resources available to the Operating System for
further allocation are; 1 tape drive, 1 printer and 1 plotter at that
juncture.
⚫ Example of Banker’s Algorithm
Cont’d
Example (Cont.): P1 requests (1,0,2)
Check that Request <= Available, that is,
(1, 0, 2) <= (3, 3, 2), which is true.
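The safety check at the heart of the Banker's algorithm can be sketched as follows: repeatedly find a process whose remaining need fits within the free resources, let it finish, and reclaim what it holds. This is a minimal illustration run on the Matrix A / Matrix B figures from the slides above, not a full avoidance implementation (it omits the request-granting step).

```python
def is_safe(available, need, allocation):
    """Banker's safety check: return a safe sequence, or None if unsafe."""
    work = list(available)
    finished = [False] * len(need)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion, then releases its holdings
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(f"P{i}")
                progress = True
    return sequence if all(finished) else None

# Matrix A (allocated) and Matrix B (still required) from the slides; F = (1,1,1)
allocation = [[2, 0, 0], [0, 1, 0], [1, 2, 1], [1, 0, 1]]
need       = [[1, 0, 0], [1, 1, 0], [2, 1, 1], [1, 1, 1]]
seq = is_safe([1, 1, 1], need, allocation)
```

With (1, 1, 1) free, P0 can finish first, releasing its two tape drives; each completion frees enough resources for the next process, so the state is safe with sequence P0, P1, P2, P3.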