
ADDIS ABABA UNIVERSITY
COLLEGE OF NATURAL AND COMPUTATIONAL SCIENCES
Department of Computer Science

Chapter-TWO:
PROCESS MANAGEMENT

Samuel G
Chapter Contents

1. Introduction To Process

2. Process Scheduling

3. CPU Scheduling

4. Inter-Process Communication (IPC)

5. Deadlock
2.1 Introduction To Process
⚫ Process vs. Program
⚫ Program
o It is a sequence of instructions defined to perform
some task.
o It is a passive entity.
⚫ Process
o It is a program in execution.
o It is an instance of a program running on a
computer.
o It is an active entity.
⚫ A processor performs the actions defined by a
process. A process includes:
⚫ program counter, stack, and data section
Continued…
There are two types of processes
1. Sequential Processes: Execution progresses in a sequential
fashion, i.e. one after the other; At any point in time, at
most one process is being executed.
2. Concurrent Processes: There are two types of concurrent
processes.
1. True Concurrency (Multiprocessing)
⚫ Two or more processes are executed simultaneously in a
multiprocessor environment. Supports real
parallelism.
2. Apparent Concurrency (Multiprogramming)
⚫ Two or more processes are executed in parallel in a
uniprocessor environment by switching the CPU from one process
to another (context switching), creating the illusion of parallelism.
Continued…
⚫ Example: Consider a computer scientist who is baking a
birthday cake for her daughter and who is interrupted by her
daughter’s bleeding accident.
⚫ Analysis
              Baking a Cake          Giving First Aid
√ Processes   Baking cake            First aid
√ Processor   Computer scientist     Computer scientist
√ Program     Recipe                 First-aid book
√ Input       Cake ingredients       First-aid kit
√ Output      Cake                   First aid service
√ Priority    Lower                  Higher
√ States      Running, Idle          Running, Idle
Continued …
PROCESS STATES:
⚫ During its lifetime, a process passes through a number of
states. The most important states are, New, Ready, Running,
blocked (waiting) and terminate.
Continued …
⚫ New:
⚫ A process that has just been created but has not yet been
admitted to the pool of executable processes by the operating
system.
⚫ Ready:
⚫ A process that is not currently executing but that is ready to
be executed as soon as the operating system dispatches it.
⚫ The process is in main memory and is available for execution.
⚫ Running:
⚫ A process that is currently being executed.
⚫ Blocked (Waiting):
⚫ A process that is waiting for the completion of some event,
such as
I/ O operation.
⚫ Exit (Terminated):
⚫ A process that has finished execution and has been released from the
pool of executable processes by the operating system.
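To make the state transitions above concrete, here is a minimal sketch in Python (illustrative only; the transition table is an assumption based on the five-state model described on this slide, not code from the course):

from enum import Enum, auto

class State(Enum):
    NEW = auto(); READY = auto(); RUNNING = auto(); BLOCKED = auto(); EXIT = auto()

# Legal transitions of the five-state model: admit, dispatch, timeout, wait, event completion, terminate.
TRANSITIONS = {
    State.NEW:     {State.READY},
    State.READY:   {State.RUNNING},
    State.RUNNING: {State.READY, State.BLOCKED, State.EXIT},
    State.BLOCKED: {State.READY},
    State.EXIT:    set(),
}

def move(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

s = State.NEW
for nxt in (State.READY, State.RUNNING, State.BLOCKED, State.READY, State.RUNNING, State.EXIT):
    s = move(s, nxt)          # admitted, dispatched, blocks on I/O, becomes ready, runs again, exits
print("final state:", s.name)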
Continued …
PROCESS CONTROL BLOCK:
⚫ Each process is represented in the operating system by a
process control block (PCB)-also called a task control block. It
contains many pieces of information associated with a specific
process
Continued …
⚫ Process state: new, ready, running and waiting, halted
⚫ Program counter: indicates the address of the next
instruction to be executed for this process.
⚫ CPU registers: it varies in number and type, depending on
the computer architecture. They include accumulators, index
registers, stack pointers, and general-purpose registers, plus
any condition-code information.
⚫ CPU-scheduling information: includes a process
priority, pointers to scheduling queues, and any other
scheduling parameters.
⚫ Memory-management information: include such information
as the value of the base and limit registers, the page tables, or
the segment tables, depending on the memory system used by
the operating system.
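As a rough illustration of the PCB fields listed above, a hedged Python sketch (real PCBs are kernel data structures with many more fields; the field names here are assumptions for illustration):

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                                # new, ready, running, waiting, terminated
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # accumulators, index registers, stack pointer, ...
    priority: int = 0                                 # CPU-scheduling information
    base_register: int = 0                            # memory-management information
    limit_register: int = 0
    open_files: list = field(default_factory=list)    # I/O status information

pcb = PCB(pid=42, state="ready", program_counter=0x4000, priority=5)
print(pcb)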
THREADS
⚫ A thread is a dispatchable unit of work (lightweight
process) that has independent context, state and stack.
⚫ A process is a collection of one or more threads
and associated system resources.
⚫ Traditional operating systems are single-threaded systems.
Two types of threads
1. User threads are supported above the kernel and
are implemented by a thread library at the user
level.
2. Kernel threads are supported directly by the operating
system; the kernel performs thread creation,
scheduling, and management in kernel space.
THREADS(Cont’d)
Multithreading:
⚫ It is a technique in which a process, executing an application,
is divided into threads that can run concurrently.
⚫ Modern operating systems are multithreaded systems.
⚫ The benefits of multithreaded programming can be
broken down into four major categories:
 Responsiveness: Multithreading an interactive application may
allow a program to continue running even if part of it is blocked
or is performing a lengthy operation, thereby increasing
responsiveness to the user.
 Resource sharing: By default, threads share the memory and
the resources of the process to which they belong.
THREADS(Cont’d)
 Economy: Allocating memory and resources for process
creation is costly. Alternatively, because threads share resources
of the process to which they belong, it is more economical to
create and context switch threads
 Utilization of multiprocessor architectures:
⚫ The benefits of multithreading can be greatly increased in a multiprocessor
architecture, where each thread may be running in parallel on
a different processor.
⚫ A single-threaded process can only run on one CPU
⚫ Multithreading on a multi-CPU machine increases concurrency.
In single-processor architecture, the CPU generally moves
between each thread so quickly as to create an illusion of
parallelism, but in reality only one thread is running at a time.
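A small sketch of the resource-sharing point using Python's threading module (illustrative; on a single CPU, or under CPython's interpreter lock, the threads are interleaved rather than truly parallel):

import threading

shared = []                          # threads of one process share its memory by default

def worker(name, n):
    for i in range(n):
        shared.append((name, i))     # both threads write to the same list

t1 = threading.Thread(target=worker, args=("t1", 3))
t2 = threading.Thread(target=worker, args=("t2", 3))
t1.start(); t2.start()               # run concurrently
t1.join(); t2.join()
print(len(shared), "items written by two threads of one process")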
2.2 Inter-process Communication
⚫ Mechanism for processes to communicate and to synchronize
their actions.
⚫ Message system – processes communicate with each other
without resorting to shared variables.
⚫ IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
⚫ If P and Q wish to communicate, they need to:
 Establish a communication link between them
 Exchange messages via send/receive
⚫ Implementation of communication link
 Physical (e.g., shared memory, system bus)
 Logical (e.g., logical properties)
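A hedged sketch of the send/receive primitives using a Python multiprocessing Pipe as the communication link (one possible implementation of the link, not the only one):

from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()                 # receive(message)
    conn.send("ack: " + msg)          # send(message) back to the parent

if __name__ == "__main__":
    parent_end, child_end = Pipe()    # establish a communication link between P and Q
    q = Process(target=child, args=(child_end,))
    q.start()
    parent_end.send("hello")          # send(message)
    print(parent_end.recv())          # receive(message) -> "ack: hello"
    q.join()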
Cont’d

DIRECT COMMUNICATION:
⚫ Processes must name each other explicitly:
⚫ send (P, message) – send a message to process P
⚫ receive(Q, message) – receive a message from process Q
⚫ Properties of communication link
⚫ Links are established automatically.
⚫ A link is associated with exactly one pair of communicating
processes.
⚫ Between each pair there exists exactly one link.
⚫ The link may be unidirectional, but is usually bi-directional.
INDIRECT COMMUNICATION:
⚫ Messages are sent to and received from mailboxes (also referred to as
ports).
⚫ Each mailbox has a unique id.
⚫ Processes can communicate only if they share a mailbox.
Cont’d…
⚫ Properties of communication link
⚫ Link established only if processes share a common mailbox
⚫ A link may be associated with many processes.
⚫ Each pair of processes may share several communication links.
⚫ Link may be unidirectional or bi-directional.
⚫ Operations
⚫ create a new mailbox, send and receive messages through mailbox,
destroy a mailbox
⚫ Mailbox sharing
⚫ P1, P2, and P3 share mailbox A. P1 sends; P2 and P3 receive. Who gets
the message?
⚫ Solutions
⚫ Allow a link to be associated with at most two processes.
⚫ Allow only one process at a time to execute a receive operation.
⚫ Sender is notified who the receiver was.
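To illustrate the mailbox-sharing question above, a small sketch using a Python multiprocessing Queue as mailbox A (illustrative): each message is delivered to exactly one of the receivers, which is why the solutions listed above are needed.

from multiprocessing import Process, Queue

def receiver(name, mailbox):
    print(name, "got:", mailbox.get())   # whichever receiver dequeues first gets the message

if __name__ == "__main__":
    mailbox_a = Queue()                  # shared mailbox A
    p2 = Process(target=receiver, args=("P2", mailbox_a))
    p3 = Process(target=receiver, args=("P3", mailbox_a))
    p2.start(); p3.start()
    mailbox_a.put("message 1 from P1")   # P1 sends two messages so that
    mailbox_a.put("message 2 from P1")   # both receivers eventually return
    p2.join(); p3.join()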
Cont’d
Race Conditions: …
⚫ Race condition: The situation where several processes access –
and manipulate shared data concurrently.
⚫ The final value of the shared data depends upon which process
finishes last.
⚫ Situations where two or more processes are reading or writing some
shared data and the final result depends on exactly who runs when are
called race conditions.
⚫ To prevent race conditions, concurrent processes must be
synchronized.
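A minimal sketch of a race condition with two threads updating a shared counter (Python; because the interleaving depends on timing, the lost updates may or may not show up on a particular run, which is exactly the point):

import threading

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        tmp = counter        # read the shared data
        tmp += 1             # modify the local copy
        counter = tmp        # write back -- the other thread may have written in between

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print("expected 200000, got", counter)   # often less: some updates were lost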
Continued…

Conditions required to avoid race condition


1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other
processes.
4. No process should have to wait forever to enter its critical region.
Cont’d
CRITICAL SECTION:…
⚫ n processes all competing to use some shared data.
⚫ Each process has a code segment, called critical section,
in which the shared data is accessed.
⚫ Problem – ensure that when one process is executing in
its critical section, no other process is allowed to execute
in its critical section.
Solution?
Cont’d

Solution to Critical Section:
1. Mutual Exclusion. If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections.
2. Progress. If no process is executing in its critical section
and there exist some processes that wish to enter their
critical section, then the selection of the processes that
will enter the critical section next cannot be postponed
indefinitely.
3. Bounded Waiting. A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its
critical section and before that request is granted.
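One possible way to enforce mutual exclusion around the critical section from the earlier race-condition sketch, using Python's threading.Lock (illustrative; semaphores or other primitives can serve the same purpose):

import threading

counter = 0
lock = threading.Lock()              # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:                   # entry section: only one thread may hold the lock
            counter += 1             # critical section (shared data access)
                                     # exit section: the lock is released automatically

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                       # always 200000: mutual exclusion holds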
Cont'd …
SEMAPHORES:
⚫ A semaphore is an integer variable that, apart from initialization, is
accessed only through two atomic operations: wait (P) and signal (V).
⚫ It is used to synchronize concurrent processes and to enforce mutual
exclusion over critical sections.
Cont'd …
⚫ Classical problems of synchronization:
⚫ Readers and Writers Problem
⚫ Dining-Philosophers Problem
Dining Philosopher algorithm:
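The algorithm itself is not reproduced on this slide; as one hedged sketch (Python semaphores, illustrative), a deadlock-free dining-philosophers variant that lets at most N-1 philosophers compete for forks at the same time:

import threading, time

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]   # one binary semaphore per fork
room = threading.Semaphore(N - 1)                    # at most N-1 philosophers at the table

def philosopher(i):
    for _ in range(3):
        room.acquire()                 # prevents the circular wait
        forks[i].acquire()             # pick up the left fork
        forks[(i + 1) % N].acquire()   # pick up the right fork
        time.sleep(0.01)               # eat
        forks[(i + 1) % N].release()
        forks[i].release()
        room.release()                 # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print("all philosophers finished without deadlock")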
2.3 Process Scheduling
 Making the CPU as busy as possible based on time sharing
scheduling techniques.
 A uniprocessor system can have only one running process. If more
processes exist, the rest must wait until the CPU is free and can be
rescheduled.
Cont’d
 Operations on Processes …
 Process Creation
⚫ A process may create several new processes, via a create-process system
call, during the course of execution.
⚫ The creating process is called a parent process, whereas the new
processes are called the children of that process. Each of these new
processes may in turn create other processes, forming a tree of
processes.
⚫ When a process creates a new process, two possibilities exist in terms
of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
⚫ There are also two possibilities in terms of the address space of the new
process:
1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.
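A small sketch of the create-process call on a Unix-like system using Python's os.fork (illustrative; it will not run on Windows). Here the parent takes the second execution possibility and waits for its child:

import os

pid = os.fork()                       # create-process system call
if pid == 0:                          # child: initially a duplicate of the parent
    print("child pid:", os.getpid(), "parent:", os.getppid())
    os._exit(0)                       # child terminates via exit
else:
    os.waitpid(pid, 0)                # parent waits until the child has terminated
    print("parent pid:", os.getpid(), "child", pid, "finished")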
Cont’d….
 Process Termination
⚫ A process terminates when it finishes executing its final statement and
asks the operating system to delete it by using the exit system call. At
that point, the process may return data (output) to its parent process
(via the wait system call).
⚫ All the resources of the process-including physical and virtual
memory, open files, and I/O buffers-are de-allocated by the
operating system.
⚫ A parent may terminate the execution of one of its children for a
variety of reasons, such as these.
 The child has exceeded its usage of some of the resources that it
has been allocated. This requires the parent to have a mechanism to
inspect the state of its children.
 The task assigned to the child is no longer required.
Cont’d
 Cooperating Processes …
 The concurrent processes executing in the operating system may be
either independent processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other
processes executing in the system.
 Clearly, any process that does not share any data (temporary or
persistent) with any other process is independent.
 On the other hand, a process is cooperating if it can affect or be
affected by the other processes executing in the system. Clearly, any
process that shares data with other processes is a cooperating process.
An environment that allows process cooperation is desirable for several reasons:
 Information sharing: Since several users may be interested in the
same piece of information (for instance, a shared file), we must
provide an environment that allows concurrent access to such information.
Cont’d
√ Computation speedup: If we want a particular task to run
faster, we must break it into subtasks, each of which will be
executing in parallel with the others.
√ Such a speedup can be achieved only if the computer has
multiple processing elements (such as CPUS or I/O
channels).
√ Modularity: We may want to construct the system in a modular
fashion, dividing the system functions into separate processes or
threads.
√ Convenience: Even an individual user may have many tasks
on which to work at one time. For instance, a user may be
editing, printing, and compiling in parallel.
√ Concurrent execution of cooperating processes requires
mechanisms that allow processes to communicate with
one another and to synchronize their actions.
2.4 CPU Scheduling
⚫ It is the basis of multi-programmed operating systems.
⚫ By switching the CPU among processes, the objective of
multiprogramming is to have some process running at all times,
in order to maximize CPU utilization.
⚫ It is a fundamental operating-system function.
CPU Scheduler:-
⚫ It selects from among the processes in memory that are ready
to
execute, and allocates the CPU to one of them.
⚫ CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
Cont’d
Dispatcher:- …
Dispatcher module gives control of the CPU to the process selected
by
the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart
that program
Dispatch latency – time it takes for the dispatcher to stop one
process and start another running.
Scheduling Criteria:
⚫ Different CPU-scheduling algorithms have different properties
and may favor one class of processes over another.
Cont’d
The criteria include: …
 CPU utilization – keep the CPU as busy as possible.
 Throughput – the number of processes that complete their
execution per time unit.
 Turnaround time – amount of time to execute a
particular process
 Waiting time – amount of time a process has been waiting
in the ready queue.
 Response time – amount of time it takes from when a
request was submitted until the first response is produced, not
output (for time-sharing environment).
Cont’d

SCHEDULING ALGORITHMS:
1. First-come First-serve(FCFS)
⚫ It is the simplest CPU-scheduling algorithm.
⚫ The process that requests the CPU first is allocated
the CPU first
⚫ Implements the FIFO queue principle.
⚫ The code for FCFS scheduling is simple to write
and understand.
⚫ When the CPU is free, it is allocated to the process at
the head of the queue
Cont’d
1. Consider the following set of processes that arrive at time 0, with
the length of the CPU burst given in milliseconds:
 If the processes arrive in the order P1 with BT=24, P2 with
BT=3, P3 with BT=3, and are served in FCFS order, we get
the result shown in the following.
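The Gantt chart from the slide is not reproduced here; a short sketch (Python) that recomputes the FCFS result for the burst times above. P1 waits 0 ms, P2 waits 24 ms, P3 waits 27 ms, so the average waiting time is 17 ms:

def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:              # serve in arrival (FIFO) order
        waits.append(elapsed)         # waiting time = time already spent in the ready queue
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])                # P1, P2, P3
print(waits, "average:", sum(waits) / len(waits))     # [0, 24, 27] average: 17.0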
Cont’d
2. …
Shortest-job-first Scheduling:
⚫ It associates with each process the length of the process's next CPU
burst.
⚫ When the CPU is available, it is assigned to the process that has the
smallest next CPU burst.
⚫ If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
Two schemes:
✦ nonpreemptive – once the CPU is given to a process, it cannot be
preempted until it completes its CPU burst.
✦ preemptive – if a new process arrives with a CPU burst length less
than the remaining time of the currently executing process, preempt. This scheme
is known as Shortest-Remaining-Time-First (SRTF).
⚫ SJF is optimal – gives minimum average waiting time for a given set of
processes.
Cont'd …
Example of Non-Preemptive SJF
Cont'd …
Example of Preemptive SJF (SRTF)
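The example tables from these slides are not reproduced here; as an illustrative sketch (Python, with hypothetical burst times), non-preemptive SJF for processes that all arrive at time 0:

def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])   # shortest next CPU burst first
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [6, 8, 7, 3]                                 # hypothetical burst times for P1..P4
waits = sjf_waiting_times(bursts)
print(waits, "average:", sum(waits) / len(waits))     # [3, 16, 9, 0] average: 7.0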
Cont'd …
3. Priority Scheduling Algorithm:
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority
(smallest
integer ≡ highest priority).
 Preemptive
 nonpreemptive
 SJF is a priority scheduling where priority is the predicted next CPU
burst time.
͢ Problem = Starvation – low-priority processes may never execute.
͢ Solution = Aging – as time progresses, the priority of the waiting
process is increased.
CPU Scheduling(Cont’d)

The average waiting time is 8.2 milliseconds if arrival time for all processes is
0.
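The example table behind the 8.2 ms figure is not included in the text. The data below is an assumption: it is the classic textbook set of (burst, priority) pairs that yields exactly that average under non-preemptive priority scheduling, shown here as a hedged Python sketch:

def priority_waiting_times(procs):
    # procs: list of (name, burst, priority); a smaller number means a higher priority
    waits, elapsed = {}, 0
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = elapsed
        elapsed += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_waiting_times(procs)
print(waits, "average:", sum(waits.values()) / len(waits))   # average: 8.2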
Cont'd …
4. Round Robin:
⚫ Each process gets a small unit of CPU time (time quantum), usually
10-100 milliseconds.
⚫ After this time has elapsed, the process is preempted and added to
the end of the ready queue.
⚫ Implements the FIFO concept; the ready queue is treated as a
circular queue.
⚫ If a process has a CPU burst longer than the time quantum, it executes
for one quantum, is preempted, and rejoins the ready queue to compete
for the CPU again with its remaining burst time.
⚫ It typically gives a larger average waiting time than SJF, but it ensures fairness.
Cont’d
Example of RR with Time Quantum = 4

Average W(T) = 17/3 = 5.66 milliseconds.
Cont'd …
Example of RR with Time Quantum = 20

Average W(T) =
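A sketch (Python) of the round-robin bookkeeping; with the bursts P1=24, P2=3, P3=3 from the FCFS example and a quantum of 4, it reproduces the 17/3 ≈ 5.66 ms average quoted above:

from collections import deque

def rr_waiting_times(bursts, quantum):
    remaining = dict(enumerate(bursts))
    last_ready = {i: 0 for i in remaining}        # when each process last joined the ready queue
    waits = {i: 0 for i in remaining}
    queue, clock = deque(remaining), 0
    while queue:
        i = queue.popleft()
        waits[i] += clock - last_ready[i]         # time spent waiting since it became ready
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:                      # quantum expired: preempt and requeue
            last_ready[i] = clock
            queue.append(i)
    return [waits[i] for i in sorted(waits)]

waits = rr_waiting_times([24, 3, 3], 4)
print(waits, "average:", sum(waits) / len(waits))   # [6, 4, 7] average: 5.666...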
Cont'd …
5. Multilevel Queue Scheduling:
Ready queue is partitioned into separate queues:
⚫ foreground (interactive)
⚫ background (batch)
⚫ Each queue has its own scheduling algorithm,
⚫ foreground – RR
⚫ background – FCFS
⚫ Scheduling must be done between the queues.
✦ Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
✦ Time slice – each queue gets a certain amount of CPU time which it
can schedule amongst its processes; i.e., 80% to foreground in RR
✦ 20% to background in FCFS
Cont’d

Cont'd
6. Multilevel Feedback-queue Scheduling:
⚫ A process can move between the various queues; aging can be
implemented this way.
⚫ Multilevel-feedback-queue scheduler defined by the following
parameters:
✦ number of queues
✦ scheduling algorithms for each queue
✦ method used to determine when to upgrade a process
✦ method used to determine when to demote a process
✦ method used to determine which queue a process will
enter when that process needs service
Cont'd
Examples
⚫ Three queues:
⚫ Q0 – time quantum 8 milliseconds
⚫ Q1 – time quantum 16 milliseconds
⚫ Q2 – FCFS
⚫ Scheduling
✦ A new job enters queue Q0 which is served FCFS. When it gains
CPU, job receives 8 milliseconds. If it does not finish in 8 milliseconds,
job is moved to queue Q1.
⚫ At Q1 job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
Cont’d

Deadlock
⚫ It is a situation where a group of processes are permanently blocked
as a result of each process having acquired a subset of the resources
needed for its completion and waiting for release of the remaining
resource held by others in the same group.
⚫ Deadlock occurs when two or more processes are each waiting for
an event that will never occur, since it can only be generated by
another process in that set.
⚫ It happens when processes have been granted exclusive access
to resources such as devices, files, and so forth.
⚫ A resource is anything that can only be used by a single process at
any instance.
Cont’d
⚫ There are two types of resources
 Preemptable: Can be taken away from a process.
 Non-preemptable: (Cannot be taken away from its current owner
without causing the computation to fail). If a process has begun to
burn a CD-ROM, suddenly taking the CD recorder away from it and
giving it to another process will result in a garbled
(distorted/confused/corrupted) CD.
⚫ In general deadlock involves non-preemptable resources.
Example: Suppose that process A asks for CD-ROM drive and gets
it. After a moment, process B asks for the flatbed plotter and gets it,
too. Now process A asks for the plotter and blocks waiting for it.
Finally process B asks for CD-ROM drive and also blocks. This
situation is called a deadlock.
Cont’d
⚫ Example 1: Real-world deadlock example: Bridge Crossing (figure)
Cont'd
⚫ Example 2: Traffic Gridlock (figure)
System Model
Two general categories of resources can be distinguished.
⚫ Reusable Resources: A reusable resource can be safely used by only one
process at a time and is not depleted or consumed by that use.
√ It includes: processors, I/O channels, I/O devices, primary and
secondary memory, files, databases, semaphores, etc.
⚫ Consumable Resources: A consumable resource can be created and
destroyed. It includes interrupts, signals, messages, and information in
I/O buffers.
⚫ A process may utilize a resource only in the following
sequence.
1. Request: If the request cannot be granted immediately, then the
requesting process must wait until it can acquire or get the
resources.
2. Use: The process can operate on the resource.
3. Release: The process releases the resources.
Cont’d
⚫ How does deadlock happen?
⚫ PA must release the Tape Drive and PB must release the Printer
after using them; only then can the other process's request for the
Tape Drive or the Printer be granted. Otherwise, each process
remains blocked indefinitely.
Resource Allocation Graph
Cont’d
Resource and process representation.
• Square boxes represent resources, named R1 and R2. Similarly, processes,
shown as hexagons, are named P1 and P2. The arrows show the relationship.
For instance, in part (a) of the figure, resource R1 is allocated to process
P1, or in other words, P1 holds R1. In part (b) of the figure, process P2
wants resource R2, but it has not yet got it. It is waiting for it.
• This is called a Directed Resource Allocation Graph (DRAG).
Directed Resource Allocation Graphs (DRAG)
⚫ P1 holds R1 but demands R2.
⚫ P2 holds R2 but demands R1.
⚫ This situation creates a circular wait, which can cause a
deadlock.
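A small sketch (Python) of detecting such a circular wait once the DRAG has been reduced to a wait-for graph between processes (the two-process edge list below mirrors the P1/R1/P2/R2 situation above):

def has_cycle(graph):
    # graph: process -> set of processes it is waiting for
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True                           # back edge found: circular wait
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, ())):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph)

# P1 holds R1 and waits for R2 (held by P2); P2 holds R2 and waits for R1 (held by P1).
wait_for = {"P1": {"P2"}, "P2": {"P1"}}
print("deadlock detected:", has_cycle(wait_for))  # True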
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.
1. Mutual exclusion: only one process at a time can use a resource.
2. Hold and wait: a process holding at least one resource is waiting
to
acquire additional resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. Circular wait: there exists a set {P0, P1, …, Pn} of
waiting processes such that P0 is waiting for a resource
that is held by P1, P1 is waiting for a resource that is held
by P2, …, Pn–1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
Deadlock Strategies
⚫ Various strategies have been followed by different
Operating Systems to deal with the problem of a
deadlock. These are listed below.
⚫ Ignore it.
⚫ Detect it.
⚫ Recover from it.
⚫ Prevent it.
⚫ Avoid it.
Cont'd
1. Ignore a Deadlock:
⚫ The simplest strategy.
⚫ Pretend as if you are totally unaware of it.
⚫ The Ostrich algorithm is the best-known method.
 Firstly, the deadlock detection, recovery and prevention algorithms
are complex to write, test and debug.
 Secondly, they slow down the system considerably. If a deadlock occurs
very rarely, the system may have to restart the affected jobs, but such
time is lost only infrequently and may not be significant.
Cont’d
2. Detect a Deadlock:
The graphs (DRAG) provide good help for detection.
Cont’d
The Operating System does the following to detect a deadlock.
1. Number all processes as P0, P1, …, PN.
2. Number each resource separately using a meaningful
coding scheme.
 For instance, "R" denoting a resource.
 The second character could denote the resource type (0 = tape,
1 = Printer, etc.) and the third character could denote the resource
number or an instance within the type, e.g. R00, R01, R02, …
could be different tape drives of the same type; R10, R11, R12, …
could be different printers of the same type, with the assumption
that resources belonging to the same type are interchangeable.
 The Operating System could pick up any of the available
resources within a given type and allocate it without any
difference.
Cont’d
3. Resource wise table and process wise table
The resource wise table contains for each resource, its type,
allocation status, the process to which it is allocated and the
processes that are waiting for it.
Cont’d
 Another table is a process wise table giving, for each process,
the resources held by it and the resources it is waiting for.
Cont’d
4. Whenever a process requests the OS for a resource, the request is
obviously for a resource belonging to a resource type.
⚫ The OS then goes through the resource wise table to see if there is
any free resource of that type, and if there is any, allocates it to the
process. After this, it updates both these tables appropriately.
⚫ If no free resource of that type is available, the OS keeps that
process waiting on one of the resources for that type.
5. At any time, the Operating System can use these tables to detect a
circular wait or a deadlock.
Cont’d
3. Recover from Deadlock:
i. Suspend/Resume a Process:
⚫ A process is selected based on a variety of criteria (low priority, for
instance) and it is suspended for a long time.
⚫ The resources are reclaimed from the process and then allocated to
other processes that are waiting for them.
ii. Kill a Process:
⚫ The OS decides to kill one or more processes and reclaim all
their resources after ensuring that such action will solve the
deadlock.
⚫ The OS can use the DRAG and deadlock detection algorithms to
ensure that after killing a specific process, there will not be a
deadlock. This solution is simple, but involves loss of at least one
process.
Cont’d
⚫ There are two methods to implement this approach. First and
crude method is to kill all the deadlocked processes.
⚫ The method looks very simple and effective, but the operation of
killing a number of processes and starting them all over again is
very costly.
⚫ Second method involves the following steps:
1. Kill one process
2. Check the state of the system.
3. If the deadlock is resolved, exit. Otherwise go
back to step 1.
Cont’d
4. Prevent a Deadlock:
 If any one of the deadlock prerequisite or characteristics is not met,
there cannot be a deadlock.
i. Mutual Exclusion Condition
If every resource in the system were sharable by multiple processes,
deadlocks would never occur.
ii. Hold and Wait (Wait For) Condition:
The OS must make a process requesting some resources give up its
already held resources first and then try again for all the requested
resources together.
iii. No Preemption Condition:
The OS may take resources away (preempt them) from a process that is
blocked waiting for additional resources.
iv. Circular Wait Condition: One way in which this condition can be
broken is to force a process to hold only one resource at a time.
Cont’d
5. Avoid a Deadlock:
 It is concerned with creating an environment where a deadlock is
theoretically possible (it is not prevented), but by some algorithm in the
OS, it is ensured before allocating any resource that after allocating it, a
deadlock can be avoided.
Banker's Algorithm:
 It was proposed by Dijkstra in 1965 for deadlock avoidance.
 It is known as the Banker's algorithm because of its similarity to the
problem of a banker wanting to distribute loans to various customers
within limited resources.
 This algorithm in the OS calculates in advance, before a resource is
allocated to a process, whether the allocation can lead to a deadlock
("unsafe state") or whether it can certainly be avoided ("safe state").
Cont’d
 Banker’s algorithm maintains two matrices on a dynamic basis. Matrix A
consists of the resources allocated to different processes at a given time.
Matrix B maintains the resources still needed by different processes at the
same time. These resources could be needed one after the other or
simultaneously.
Matrix A (Resources assigned):              Matrix B (Resources still required):
Process  Tape Drives  Printers  Plotters    Process  Tape Drives  Printers  Plotters
P0       2            0         0           P0       1            0         0
P1       0            1         0           P1       1            1         0
P2       1            2         1           P2       2            1         1
P3       1            0         1           P3       1            1         1

Total Resources (T) = 5 4 3
Held Resources (H)  = 4 3 2
Free Resources (F)  = 1 1 1
Cont’d
 Free (Available) = T – H = 1 1 1
 This means that the resources available to the Operating System for
further allocation are; 1 tape drive, 1 printer and 1 plotter at that
juncture.
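To make the safe/unsafe check concrete, a hedged sketch (Python) of the safety test applied to Matrix A, Matrix B and the free vector F computed above (the function name and layout are illustrative, not from the slides):

def is_safe(allocated, still_needed, available):
    work = list(available)
    finished = [False] * len(allocated)
    sequence, progressed = [], True
    while progressed:
        progressed = False
        for i, (alloc, need) in enumerate(zip(allocated, still_needed)):
            if not finished[i] and all(n <= w for n, w in zip(need, work)):
                work = [w + a for w, a in zip(work, alloc)]   # process i can finish and
                finished[i] = True                            # release what it holds
                sequence.append(f"P{i}")
                progressed = True
    return all(finished), sequence

A = [[2, 0, 0], [0, 1, 0], [1, 2, 1], [1, 0, 1]]   # Matrix A: resources assigned
B = [[1, 0, 0], [1, 1, 0], [2, 1, 1], [1, 1, 1]]   # Matrix B: resources still required
F = [1, 1, 1]                                      # free resources
print(is_safe(A, B, F))                            # (True, ['P0', 'P1', 'P2', 'P3'])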
⚫ Example of Banker’s Algorithm
Cont’d
Example (Cont.): P1 requests (1, 0, 2)
 Check that Request <= Available, that is, (1, 0, 2) <= (3, 3, 2), which is true.
 Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2>
satisfies the safety requirement.
 Can request for (3,3,0) by P4 be granted?
 Can request for (0,2,0) by P0 be granted?
The End
1. List at least four deadlock strategies.
2. Four processes P1, P2, P3 and P4 want to utilize 4 resource types:
A (5 instances), B (2 instances), C (4 instances), D (3 instances).

Process   Allocated   Need      Max       Total     Available
          A B C D     A B C D   A B C D   A B C D   A B C D
P1        2 0 1 1     1 1 0 0             5 2 4 3
P2        0 1 0 0     0 1 1 2
P3        1 0 1 1     3 1 0 0
P4        1 1 0 1     0 0 1 0
Total Allocated = 4 2 2 3

   1. Fill in the Max matrix.
   2. What is the Available amount?
3. Is a request by P3 of (3, 1, 0, 0) granted?
4. What is the possible safe sequence?
