OS Module 3 CEC
SYNCHRONIZATION
What is synchronization?
Synchronization is the mechanism that ensures the orderly execution of cooperating processes that share a logical address space (i.e., code and data) or share data through files or messages, so that data consistency is maintained.
Concurrent access to shared data may result in data inconsistency. To maintain data consistency, the orderly execution of cooperating processes is necessary.
Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers. We can do so by having an integer variable counter that keeps track of the number of full buffers.
Initially, counter = 0.
counter is incremented by the producer after it produces a new item into the buffer.
counter is decremented by the consumer after it consumes an item from the buffer.
Shared-data:
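The shared data itself is not listed here; a minimal sketch, following the usual textbook formulation (BUFFER_SIZE is an assumed constant):
#define BUFFER_SIZE 10
int buffer[BUFFER_SIZE];      /* circular buffer shared by producer and consumer */
int in = 0, out = 0;          /* next free slot / next full slot */
int counter = 0;              /* number of full buffers */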
Consider an execution interleaving with counter = 5 initially: the value of counter may end up as either 4 or 6, whereas the correct result should be 5. This is an example of a race condition. To prevent race conditions, concurrent processes must be synchronized.
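The race arises because counter++ and counter-- are not atomic; each is implemented as a load, an arithmetic operation, and a store. A sketch of one interleaving (with hypothetical register names) that starts at counter = 5 and ends with the wrong value:
producer: register1 = counter        (register1 = 5)
producer: register1 = register1 + 1  (register1 = 6)
consumer: register2 = counter        (register2 = 5)
consumer: register2 = register2 - 1  (register2 = 4)
producer: counter = register1        (counter = 6)
consumer: counter = register2        (counter = 4)
Reversing the last two stores leaves counter = 6 instead; either way the result is not 5.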
Illustrate with examples the Peterson’s solution for critical section problem and prove that
mutual exclusion property is preserved.
OR
Discuss an efficient algorithm which can meet all the requirements to solve the critical
section problem.
Peterson's solution is restricted to two processes, Pi and Pj, that alternate execution between their critical sections and remainder sections. The two processes share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section; i.e., if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate whether a process is ready (interested) to enter its critical section; i.e., if flag[i] == true, then Pi is ready to enter its critical section.
The structure of process Pi in Peterson's solution:
while (true)
{
    flag[i] = TRUE;                  // Pi is ready to enter its critical section
    turn = j;                        // give Pj the turn
    while (flag[j] && turn == j);    // busy-wait while Pj is interested and it is Pj's turn
    // CRITICAL SECTION
    flag[i] = FALSE;                 // Pi leaves its critical section
    // REMAINDER SECTION
}
To prove the mutual-exclusion property, consider the two processes Pi = P0 and Pj = P1. We note that process Pi (P0) enters its critical section only if either flag[j] = flag[1] = false or turn = i = 0 (P1 is not interested, or it is P0's turn to enter the CS).
If both processes were executing in their critical sections at the same time, then flag[0] = flag[1] = true.
These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both.
Hence one of the processes, say Pi (P0), must have successfully executed its while statement and entered its critical section, whereas Pj (P1) had to execute at least one additional test of "turn == i (0)". However, since at that time flag[i] = flag[0] = true and turn = i (0), this condition (P1 trapped in its while loop) will persist as long as P0 is in its critical section. Thus mutual exclusion is preserved.
To prove the progress and bounded-waiting properties, again consider Pi = P0 and Pj = P1. P0 can be prevented from entering its critical section only if it is stuck in the while loop with flag[1] == true and turn == 1. If P1 is not ready, then flag[1] == false and P0 can enter (progress). If P1 has also set flag[1] = true, then turn is either 0 or 1, so one of the two processes enters. Once P1 exits its critical section it resets flag[1] = false, allowing P0 to enter; if P1 immediately tries to re-enter, it must first set turn = 0, so P0 will enter after at most one entry by P1 (bounded waiting).
TestAndSet( ): instruction is used to test & modify the content of a word atomically. An atomic-
operation is an operation that completes in its entirety without interruption.
The definition of TestAndSet( ) is:
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;   // save the old value
    *target = TRUE;         // set the lock
    return rv;              // return the old value
}
The mutual-exclusion implementation with TestAndSet() uses a shared Boolean variable lock, initialized to FALSE:
do {
    while (TestAndSet(&lock));   // busy-wait until lock was FALSE
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Suppose P0 is a process that wants to enter its CS. Initially lock = false, so the while (TestAndSet(&lock)) test evaluates to false and P0 enters the CS. While P0 is inside the CS, if P1 attempts to enter, it is blocked in the entry section, since the value of lock is now true.
SWAP( )
Definition of Swap( ) is as follows:
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
This instruction is executed atomically. If the machine supports the Swap() instruction, then mutual exclusion can be provided as follows: a shared Boolean variable lock is declared and initialized to false, and, in addition, each process has a local Boolean variable key.
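A sketch of the resulting entry/exit structure for each process, assuming the shared lock and the per-process local variable key described above:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);    // spin until lock was FALSE; Swap() atomically sets lock = TRUE
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);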
Advantages of semaphore
What are the advantages of semaphore?
1. Semaphores allow only one process at a time into the critical section.
2. Semaphores follow the mutual-exclusion principle strictly.
3. Semaphores are more efficient than some other methods of synchronization.
SEMAPHORE IMPLEMENTATION
The main disadvantage of semaphore is Busy waiting.
What is busy waiting in critical section concept?
OR
What is spinlock?
While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code; this situation in the critical-section problem is called busy waiting.
Busy waiting wastes CPU cycles that some other process might be able to use productively.
This type of semaphore is also called a spinlock (because the process "spins" while waiting for the lock).
IMPLEMENTATION OF SEMAPHORE
Explain implementation of semaphore
To overcome busy waiting, we can modify the definitions of wait() and signal() as follows:
When a process executes wait() and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself.
A process that is blocked (waiting on a semaphore S) should be restarted when some other process executes a signal(). The process is restarted by a wakeup() operation.
We assume 2 simple operations:
1) block() suspends the process that invokes it.
2) wakeup(P) resumes the execution of a blocked process P.
We define a semaphore as follows:
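The structure itself is not listed here; a sketch of the standard textbook definition, pairing an integer value with a list of waiting processes:
typedef struct {
    int value;                /* semaphore value */
    struct process *list;     /* queue of processes blocked on this semaphore */
} semaphore;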
Implementation of wait( )
wait(S)
{
    value--;
    if (value < 0)
    {
        // add this process to the waiting queue
        block();
    }
}
Implementation of signal( )
signal(S)
{
    value++;
    if (value <= 0)
    {
        // remove a process P from the waiting queue
        wakeup(P);
    }
}
NOTE:
The (critical-section) problem can be solved in two ways:
1) In a uni-processor environment
   - Inhibit interrupts when the wait and signal operations execute.
   - Only the current process executes, until interrupts are re-enabled and the scheduler regains control.
2) In a multi-processor environment
   - Inhibiting interrupts doesn't work.
   - Use the hardware / software solutions described above.
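Semaphores with waiting queues can themselves lead to deadlock. A sketch of the scenario discussed next, assuming two semaphores S and Q, both initialized to 1:
P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
 ...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);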
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations can never be executed, P0 and P1 are deadlocked.
Starvation (indefinite blocking) is another problem related to deadlocks.
Starvation is a situation in which processes wait indefinitely within the semaphore. Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.
BOUNDED-BUFFER PROBLEM
Give a solution to the bounded buffer problem using semaphores. Write the structure of
producer and consumer processes
The bounded-buffer problem is related to the producer consumer problem. There is a pool of n
buffers, each capable of holding one item.
Shared data:
The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1.
The empty and full semaphores count the number of empty and full buffers, respectively.
Initially empty = n; full=0
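A sketch of the shared data as described above (n is the number of buffers):
int n;                 /* number of buffers */
semaphore mutex = 1;   /* mutual exclusion for buffer accesses */
semaphore empty = n;   /* counts empty buffers */
semaphore full  = 0;   /* counts full buffers */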
The producer and the consumer are symmetric: the producer produces full buffers for the consumer, and the consumer produces empty buffers for the producer.
The structure of the producer process
while (true)
{
    // produce an item
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
}
The structure of the consumer process
while (true)
{
    wait (full);
    wait (mutex);
    // remove an item from the buffer
    signal (mutex);
    signal (empty);
    // consume the removed item
}
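For completeness, here is a self-contained sketch of the same bounded-buffer scheme using POSIX semaphores and pthreads (names such as NUM_ITEMS are assumptions made for illustration, not part of the notes above); compile with cc -pthread:
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define NUM_ITEMS   10

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots, mutex;   /* empty = n, full = 0, mutex = 1 */

static void *producer(void *arg)
{
    for (int item = 0; item < NUM_ITEMS; item++) {
        sem_wait(&empty_slots);                /* wait(empty)   */
        sem_wait(&mutex);                      /* wait(mutex)   */
        buffer[in] = item;                     /* add the item to the buffer */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);                      /* signal(mutex) */
        sem_post(&full_slots);                 /* signal(full)  */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&full_slots);                 /* wait(full)    */
        sem_wait(&mutex);                      /* wait(mutex)   */
        int item = buffer[out];                /* remove an item from the buffer */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);                      /* signal(mutex) */
        sem_post(&empty_slots);                /* signal(empty) */
        printf("consumed %d\n", item);         /* consume the removed item */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}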
READERS-WRITERS PROBLEM
In the readers-writers problem, a database (DB) is shared among several concurrent processes. Two or more readers can access the shared DB simultaneously without any problem. However, if a writer and some other process (either a reader or a writer) access the shared DB simultaneously, problems may arise.
Solution: The writers must have exclusive access to the shared DB while writing to it.
Shared-data
Where,
mutex is used to ensure mutual-exclusion when the variable readcount is updated.
wrt is common to both reader and writer processes.
wrt is used as a mutual-exclusion semaphore for the writers. Also wrt is used by the
first/last reader that enters/exits the critical-section.
readcount counts the number of processes currently reading the object.
Initialization
mutex = 1, wrt = 1, readcount = 0
The structure of a writer process
while (true)
{
    wait (wrt);
    // writing is performed
    signal (wrt);
}
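The structure of a reader process is not shown here; a sketch of the standard solution, using the semaphores just described:
while (true)
{
    wait (mutex);
    readcount++;
    if (readcount == 1)
        wait (wrt);          // first reader locks out the writers
    signal (mutex);
    // reading is performed
    wait (mutex);
    readcount--;
    if (readcount == 0)
        signal (wrt);        // last reader lets the writers in
    signal (mutex);
}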
DINING-PHILOSOPHERS PROBLEM
Consider five philosophers who spend their lives thinking and eating, seated around a circular table.
The table has a bowl of rice in the center and 5 single chopsticks.
From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the ones between her and her left and right neighbors).
A philosopher may pick up only one chopstick at a time.
Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor.
When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks.
When she is finished eating, she puts down both of her chopsticks and starts thinking
again.
Problem objective: To allocate several resources among several processes in a deadlock-free &
starvation-free manner.
Solution using semaphore:
Represent each chopstick with a semaphore; chopstick[5]
A philosopher tries to grab a chopstick by executing a wait( ) on the semaphore.
The philosopher releases her chopsticks by executing the signal( ) on the semaphores.
This solution guarantees that no two neighbors are eating simultaneously.
Shared-data:
semaphore chopstick[5];
Initialization
chopstick[5]={1,1,1,1,1}
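The structure of philosopher i, in the same style as the other process sketches above (the standard textbook form):
while (true)
{
    wait (chopstick[i]);                // pick up the left chopstick
    wait (chopstick[(i + 1) % 5]);      // pick up the right chopstick
    // eat
    signal (chopstick[i]);              // put down the left chopstick
    signal (chopstick[(i + 1) % 5]);    // put down the right chopstick
    // think
}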
This solution, however, could create a deadlock: if all five philosophers become hungry at the same time and each grabs her left chopstick, no right chopstick will ever become available. Possible remedies to the deadlock problem are:
1) Allow at most four philosophers to be sitting simultaneously at the table.
2) Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
3) Use an asymmetric solution; i.e. an odd philosopher picks up first her left
chopstick and then her right chopstick, whereas an even philosopher picks up her
right chopstick and then her left chopstick.
2.15 MONITORS
Need for Monitors
When programmers use semaphores incorrectly, following types of errors may occur:
1. Suppose that a process interchanges the order in which the wait() and signal() operations on the semaphore mutex are executed, resulting in the following execution:
signal(mutex);
...
critical section
...
wait(mutex);
In this situation, several processes may be executing in their critical sections simultaneously, violating the mutual-exclusion requirement.
A monitor is a high-level synchronization construct: an abstract data type that encapsulates shared variables together with the procedures that operate on them. A procedure defined within a monitor can access only those variables declared locally within the monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed by only the local procedures.
Only one process at a time can be active within the monitor. To allow a process to wait within the monitor, a condition variable must be declared, as
condition x, y;
The only operations that can be invoked on a condition variable are wait() and signal().
Example:
Suppose when the x.signal() operation is invoked by a process P, there exists a suspended
process Q associated with condition x. Both processes can conceptually continue with
their execution. Two possibilities exist:
1) Signal and wait
P either waits until Q leaves the monitor or waits for another condition.
2) Signal and continue
Q either waits until P leaves the monitor or waits for another condition.
DEADLOCKS
3.1 DEADLOCKS
Define deadlock.
Deadlock is a situation where a set of processes are blocked because each process is holding a resource
and waiting for another resource held by some other process.
A similar situation occurs in operating systems when two or more processes each hold some resources and wait for resources held by the other(s).
3.2 SYSTEM MODEL
A system consists of a finite number of resources (for example: memory, printers, CPUs). These resources are distributed among a number of processes. A process must request a resource before using it and release the resource after using it. The process may request any number of resources to carry out a given task.
The total number of resources requested must not exceed the total number of resources available.
A process may utilize a resource only in the following sequence:
1. Request
If the request cannot be granted immediately (for example, if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
2. Use
The process uses the resource. For example: prints to the printer or reads from the file.
3. Release
The process releases the resource, so that the resource becomes available for other processes.
A set of processes is deadlocked when every process in the set is waiting for a resource that is
currently allocated to another process in the set. Deadlock may involve different types of
resources.
As shown in the figure below, both processes P1 and P2 need resources to continue execution.
Multithreaded programs are good candidates for deadlock because multiple threads compete for shared resources.
NECESSARY CONDITIONS FOR DEADLOCK
A deadlock can arise only if the following four conditions hold simultaneously in the system:
1. Mutual Exclusion
At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource.
2. Hold and Wait
A process must be simultaneously holding at least one resource and waiting to acquire additional resources held by other processes.
3. No Preemption
Once a process is holding a resource ( i.e. once its request has been granted ), then that resource
cannot be taken away from that process until the process voluntarily releases it.
4. Circular Wait
A set of waiting processes {P0, P1, P2, ..., Pn} must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
RESOURCE-ALLOCATION-GRAPH (RAG)
The resource-allocation-graph (RAG) is a directed graph that can be used to describe the deadlock
situation.
RAG consists of
i. a set of vertices (V), partitioned into the set of processes P = {P1, P2, ..., Pn} and the set of resource types R = {R1, R2, ..., Rm}, and
ii. a set of edges (E): a request edge Pi → Rj indicates that process Pi has requested an instance of resource type Rj, and an assignment edge Rj → Pi indicates that an instance of Rj has been allocated to Pi.
NOTE:
Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not form a cycle in the resource-allocation graph.
Figures: 1) RAG with a deadlock, 2) RAG with a cycle and deadlock, 3) RAG with a cycle but no deadlock.
CONCLUSION:
1) If a graph contains no cycles, then the system is not deadlocked.
2) If the graph contains a cycle and there is only one instance per resource type, then a deadlock has occurred.
3) If the graph contains a cycle and there are several instances per resource type, then a deadlock may exist, but is not necessarily present.
METHODS FOR HANDLING DEADLOCKS
There are three ways of handling deadlocks:
1. Deadlock prevention or avoidance: Ensure that the system never enters a deadlocked state.
2. Deadlock detection and recovery: Abort a process or preempt some resources when deadlocks
are detected. Deadlock detection is fairly straightforward, but deadlock recovery requires
either aborting processes or preempting resources.
3. Ignore the problem altogether and pretend that deadlocks never occur in the system: This solution is used by most operating systems.
3.4 DEADLOCK-PREVENTION
Deadlocks can be eliminated by preventing (making False) at least one of the four required conditions:
1) Mutual exclusion
2) Hold-and-wait
3) No preemption
4) Circular-wait.
Mutual Exclusion
This condition must hold for non-sharable resources. For example: A printer cannot be
simultaneously shared by several processes.
On the other hand, sharable resources do not lead to deadlocks. For example: simultaneous access can be granted for a read-only file.
A process never needs to wait for a sharable resource.
In general, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.
Hold and Wait:
To prevent this condition, it must be ensured that, whenever a process requests a resource, it does not hold any other resources.
A process must request a resource only when the process has none allocated to it. A process
may request some resources and use them. Before it can request any additional resources,
however, it must release all the resources that it is currently allocated
No Preemption
To prevent this condition: if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held by the process are preempted (implicitly released).
DEADLOCK-AVOIDANCE
The general idea behind deadlock avoidance is to prevent deadlocks from ever happening. Deadlock avoidance requires additional information about how resources are to be requested.
Deadlock-avoidance algorithm
Requires more information about each process, and
Tends to lead to low device utilization.
For example:
1) In simple algorithms, the scheduler only needs to know the maximum number of
each resource that a process might potentially use.
2) In complex algorithms, the scheduler can also take advantage of the schedule of exactly which resources may be needed and in what order.
SAFE STATE
A state is safe if the system can allocate all resources requested by all processes without entering a
deadlock state.
A state is safe if there exists a safe sequence of processes <P0, P1, P2, ..., Pn> such that the requests of each process Pi can be satisfied by the currently available resources plus the resources held by all Pj, with j < i.
If a safe sequence does not exist, then the system is in an unsafe state, which may lead to deadlock. All
safe states are deadlock free, but not all unsafe states lead to deadlocks.
RESOURCE-ALLOCATION-GRAPH ALGORITHM
This algorithm is applicable when there is only a single instance of each resource type. In addition to the request and assignment edges, a claim edge Pi → Rj (represented by a dashed line) indicates that process Pi may request resource Rj at some time in the future.
1. When a process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge.
2. Similarly, when a resource Rj is released by the process Pi, the assignment edge Rj → Pi is
reconverted as claim edge Pi → Rj.
3. The request for Rj from Pi can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not form a cycle in the resource-allocation graph.
To apply this algorithm, each process Pi must know all its claims before it starts executing.
Conclusion:
If no cycle exists, then the allocation of the resource will leave the system in a safe state.
If a cycle is found, then the allocation would put the system into an unsafe state and may cause a deadlock.
For example: Consider a resource allocation graph shown in Figure below
Suppose P2 requests R2. Though R2 is currently free, we cannot allocate it to P2, because this action would create a cycle in the graph, as shown in the figure below. The cycle indicates that the system would be in an unsafe state: if P1 requests R2 and P2 requests R1 later, a deadlock will occur.
Problem:
The resource-allocation graph algorithm is not applicable when there are multiple instances for each resource.
Solution: Use banker's algorithm.
BANKERS ALGORITHM
This algorithm is applicable to a system with multiple instances of each resource type. However, it is less efficient than the resource-allocation-graph algorithm. When a process starts up, it must declare the maximum number of resources that it may need; this number may not exceed the total number of resources in the system. When a request is made, the system determines whether granting the request would leave the system in a safe state. If it would, the resources are allocated; else the process must wait until some other process releases enough resources.
Assumptions: Let n = number of processes in the system and m = number of resource types.
The data structures Available[m], Max[n][m], Allocation[n][m] and Need[n][m] (listed in the answer below) are used. In particular, Need[n][m] indicates the remaining resource need of each process:
If Need[i,j] = k, then Pi may need k more instances of resource type Rj to complete its task.
So, Need[i,j] = Max[i,j] - Allocation[i,j]
Write and explain Bankers algorithm. (explain both safety and Resource request algorithm)
This algorithm is applicable to the system with multiple instances of each resource types.
When a process starts up, it must declare the maximum number of resources that it may need. This
number may not exceed the total number of resources in the system. When a request is made, the system determines whether granting the request would leave the system in a safe state. If it would, the resources are allocated; else the process must wait until some other process releases enough resources.
Assumptions: Let n = number of processes in the system and m = number of resource types.
Following data structures are used to implement the banker’s algorithm
Available [m], Max [n][m], Allocation [n][m], Need [n][m]
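Safety Algorithm
The safety algorithm itself is the standard one that the worked examples below follow: initialize Work = Available and Finish[i] = false for all i; repeatedly find a process whose Need fits within Work, add its Allocation to Work and mark it finished; the state is safe if and only if every process can be finished. A minimal C sketch, with N and M as assumed constants:
#include <stdbool.h>

#define N 5   /* number of processes (assumed for illustration) */
#define M 3   /* number of resource types (assumed for illustration) */

bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int  work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)            /* Step 1: Work = Available, Finish[i] = false */
        work[j] = available[j];

    bool progressed = true;
    while (progressed) {                   /* Step 2: find Pi with Finish[i] == false and Need_i <= Work */
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];   /* Step 3: Work = Work + Allocation_i */
                finish[i] = true;
                progressed = true;
            }
        }
    }
    for (int i = 0; i < N; i++)            /* Step 4: safe iff Finish[i] == true for all i */
        if (!finish[i]) return false;
    return true;
}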
Resource-Request Algorithm
This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request
is made ( that does not exceed currently available resources ), pretend it has been granted, and then
see if the resulting state is a safe one. If so, grant the request, and if not, deny the request.
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
Step 1:
If Requesti <= Needi then go to step 2
else
Raise an error condition, since the process has exceeded its maximum claim.
Step 2:
If Requesti <= Available then go to step 3
else
Pi must wait, since the resources are not available.
Step 3:
If the system wants to allocate the requested resources to process Pi then modify the
state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
Step 4:
If the resulting resource-allocation state is safe, then
i) transaction is complete and
ii) Pi is allocated its resources.
Step 5:
If the new state is unsafe,then
i) Pi must wait for Requesti and
ii) Old resource-allocation state is restored.
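A C sketch of the same steps (reusing the is_safe() sketch and the N, M constants above; the array names are assumptions for illustration):
/* Returns true and applies the allocation if process i's request can be granted safely;
   returns false (state unchanged) if Pi must wait or the request exceeds its claim. */
bool request_resources(int i, int request[M],
                       int available[M], int allocation[N][M], int need[N][M])
{
    for (int j = 0; j < M; j++)
        if (request[j] > need[i][j])       /* Step 1: request exceeds maximum claim: error */
            return false;
    for (int j = 0; j < M; j++)
        if (request[j] > available[j])     /* Step 2: resources not available: Pi must wait */
            return false;
    for (int j = 0; j < M; j++) {          /* Step 3: pretend the allocation has been made */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe(available, allocation, need))
        return true;                       /* Step 4: new state is safe, grant the request */
    for (int j = 0; j < M; j++) {          /* Step 5: unsafe, restore the old state; Pi must wait */
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;
}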
An Illustrative Example
Consider a system with five processes P0 through P4 and three resource types A, B and C, with 10, 5 and 7 instances respectively. Suppose that at time T0 the following snapshot of the system has been taken (Need = Max - Allocation):
        Allocation   Max      Available   Need
        A B C        A B C    A B C       A B C
P0      0 1 0        7 5 3    3 3 2       7 4 3
P1      2 0 0        3 2 2                1 2 2
P2      3 0 2        9 0 2                6 0 0
P3      2 1 1        2 2 2                0 1 1
P4      0 0 2        4 3 3                4 3 1
To check whether the system is in a safe state, we apply the Safety algorithm:
Step 1: Initialization
Work = Available ie: Work = 3 3 2
Process P0 P1 P2 P3 P4
Finish False False False False False
Step 2: For i= 0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (3 3 2) ie: false So P0 must wait.
Step 2: For i=1
Finish [P1] = false and Need [P1] <= Work i.e. (1 2 2) <= (3 3 2) ie: True, So P1 must be kept
in safe sequence.
Step 3: Work = Work + Allocation [P1] = (3 3 2) + (2 0 0) = (5 3 2)
Process P0 P1 P2 P3 P4
Finish False True False False False
Step 2: For i=2
Finish[P2] = false and Need[P2] <= Work i.e. (6 0 0) <= (5 3 2) ie: false, So P2 must wait.
Step 2: For i=3
Finish[P3] = false and Need[P3] <= Work i.e. (0 1 1) <= (5 3 2) ie: True, So P3 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P3] = (5 3 2) + (2 1 1) = (7 4 3)
Process P0 P1 P2 P3 P4
Finish False True False True False
Step 2: For i=4
Finish[P4] = false and Need[P4] <= Work i.e. (4 3 1) <= (7 4 3) ie: True, So P4 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P4] = (7 4 3) + (0 0 2) = (7 4 5)
Process P0 P1 P2 P3 P4
Finish False True False True True
Step 2: For i=0
Finish[P0] = false and Need[P0] <= Work i.e. (7 4 3) <= (7 4 5) ie: True, So P0 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P0] = (7 4 5) + (0 1 0) = (7 5 5)
Process P0 P1 P2 P3 P4
Finish True True False True True
Step 2: For i=2
Finish[P2] = false and Need[P2] <= Work i.e. (6 0 0) <= (7 5 5) ie: True, So P2 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P2] = (7 5 5) + (3 0 2) = (10, 5 7)
Process P0 P1 P2 P3 P4
Finish True True True True True
Step 4: Finish[Pi] = True for 0 <= i<= 4
Hence, the system is currently in a safe state. The safe sequence is <P1, P3, P4, P0, P2>.
ii) Conclusion: Yes, the system is currently in a safe state.
Now suppose that P1 requests one additional instance of resource type A and two instances of resource type C, i.e. Request[P1] = (1, 0, 2). To decide whether the request can be granted, we use the Resource-Request algorithm:
Step 1: Request[P1] <= Need[P1] i.e. (1 0 2) <= (1 2 2) ie: true.
Step 2: Request[P1] <= Available i.e. (1 0 2) <= (3 3 2) ie: true.
Step 3: The state is modified as follows:
Available = Available - Request[P1] = (3 3 2) - (1 0 2) = (2 3 0)
Allocation[P1] = Allocation[P1] + Request[P1] = (2 0 0) + (1 0 2) = (3 0 2)
Need[P1] = Need[P1] - Request[P1] = (1 2 2) - (1 0 2) = (0 2 0)
The resulting Need matrix is:
Need
A B C
P0 7 4 3
P1 0 2 0
P2 6 0 0
P3 0 1 1
P4 4 3 1
To determine whether this new system state is safe, we again execute Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = 2 3 0
Process P0 P1 P2 P3 P4
Finish False False False False False
Step 2: For i= 0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (2 3 0) ie: false So P0 must wait.
Step 2: For i=1
Finish [P1] = false and Need [P1] <= Work i.e. (0 2 0) <= (2 3 0) ie: True, So P1 must be kept
in safe sequence.
Step 3: Work = Work + Allocation [P1] = (2 3 0) + (3 0 2) = (5 3 2)
Process P0 P1 P2 P3 P4
Finish False True False False False
Step 2: For i= 2
Finish [P2] = false and Need [P2] <= Work i.e. (6 0 0) <= (5 3 2) ie: false So P2 must wait.
Step 2: For i=3
Finish [P3] = false and Need [P3] <= Work i.e. (0 1 1) <= (5 3 2) ie: True, So P3 must be kept
in safe sequence.
Step 3: Work = Work + Allocation [P3] = (5 3 2) + (2 1 1) = (7 4 3)
Process P0 P1 P2 P3 P4
Finish False True False True False
Step 2: For i=4
Finish [P4] = false and Need [P4] <= Work i.e. (4 3 1) <= (7 4 3) ie: True, So P4 must be kept in safe sequence.
Step 3: Work = Work + Allocation [P4] = (7 4 3) + (0 0 2) = (7 4 5)
Process P0 P1 P2 P3 P4
Finish False True False True True
Step 2: For i =0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (7 4 5) ie: True, So P0 must be kept
in safe sequence.
Step 3: Work = Work + Allocation [P0] = (7 4 5) + (0 1 0) = (7 5 5)
Process P0 P1 P2 P3 P4
Finish True True False True True
Step 2: For i =2
Finish [P2] = false and Need [P2] <= Work i.e. (6 0 0) <= (7 5 5) ie: True, So P2 must be kept
in safe sequence.
Step 3: Work = Work + Allocation [P2] = (7 5 5) + (3 0 2) = (10, 5, 7)
Process P0 P1 P2 P3 P4
Finish True True True True True
Step 4: Finish[Pi] = True for 0 <= i<= 4
Hence, the system is currently in a safe state. The safe sequence is <P1, P3, P4, P0, P2>.
Conclusion: Since the system is in a safe state, the request can be granted.
Step 1: Initialization
Work = Available ie: Work = (1 0 2)
Process P0 P1 P2 P3 P4
Finish False False False False False
Step 2: For i=0
Need[P0] <= work ie: (0 0 2) <= (1 0 2) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 0 2) + (0 0 2) = (1 0 4)
Set Finish[P0] = True
Step 2: For i=1
Need[P1] <= work ie: (1 0 1) <= (1 0 4) ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (1 0 4) + (1 0 0) = (2 0 4)
Set Finish[P1] = True
A B C
P0 0 0 2
P1 1 0 1
P2 0 0 0
P3 2 1 0
P4 0 1 4
To determine whether this new system state is safe, we again execute Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = (1 0 0)
Process P0 P1 P2 P3 P4
Finish False False False False False
Step 2: For i=0
Need[P0] <= work ie: (0 0 2) <= (1 0 0) ie: False, So P0 must wait.
Step 2: For i=1
Need[P1] <= work ie: (1 0 1) <= (1 0 0) ie: False So P1 must wait.
Step 2: For i=2
Need[P2] <= work ie: (0 0 0) <= (1 0 0) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 0 0) + (1 3 7) = (2 3 7)
Set Finish[P2] = True
Step 2: For i=3
Need[P3] <= work ie: (2 1 0) <= (2 3 7) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2 3 7) + (6 3 2) = (8, 6, 9)
Set Finish[P3] = True
Step 2: For i=4
Need[P4] <= work ie: (0 1 4) <= (8, 6, 9) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (8, 6, 9) + (1 4 3) = (9, 10, 12)
Set Finish[P4] = True
Step 2: For i=0
Need[P0] <= work ie: (0 0 2) <= (9, 10, 12) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (9, 10, 12) + (0 0 2) = (9, 10, 14)
Set Finish[P0] = True
Step 2: For i=1
Need[P1] <= work ie: (1 0 1) <= (9, 10, 14) ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (9, 10, 14) + (1 0 0) = (10, 10, 14)
Set Finish[P1] = True
Step 4: Finish[Pi] = True for 0 <= i<= 4
Hence, the system is currently in a safe state. The safe sequence is <P2, P3, P4, P0, P1>.
Conclusion: Since the system is in a safe state, the request can be granted.
Step 1: Initialization
Work = Available ie: Work = (1 5 2 0)
Process P0 P1 P2 P3 P4
Finish False False False False False
Step 2: For i=0
Need[P0] <= work ie: (0 0 0 0) <= (1 5 2 0) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 5 2 0) + (0 0 1 2) = (1 5 3 2)
Set Finish[P0] = True
Step 2: For i=1
Need[P1] <= work ie: (0 7 5 0) <= (1 5 3 2) ie: False, So P1 must wait
Step 2: For i=2
Need[P2] <= work ie: (1 0 0 2) <= (1 5 3 2) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 5 3 2) + (1 3 5 4) = (2 8 8 6)
Set Finish[P2] = True
Step 2: For i=3
Need[P3] <= work ie: (0 0 2 0) <= (2, 8, 8, 6) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2, 8, 8, 6) + (0 6 3 2) = (2, 14, 11, 8)
Set Finish[P3] = True
Step 2: For i=4
Need[P4] <= work ie: (0 6 4 2) <= (2, 14, 11, 8) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (2, 14, 11, 8) + (0 0 1 4) = (2, 14, 12, 12)
Set Finish[P4] = True
Step 2: For i=1
Need[P1] <= work ie: (0 7 5 0) <= (2, 14, 12, 12)ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (2, 14, 12, 12) + (1 0 0 0) = (3, 14, 12, 12)
Set Finish[P1] = True
Step 4: Finish[Pi] = True for 0 <= i<= 4
Hence, the system is currently in a safe state. The safe sequence is <P0, P2, P3, P4, P1>.
Solution (iii): P1 requests (0, 4, 2, 0) i.e. Request [P1] = (0, 4, 2, 0)
To decide whether the request is granted, we use Resource Request algorithm.
Step 1: Request[P1] <= Need[P1] i.e. (0, 4, 2, 0) <= (0, 7, 5, 0) ie: true.
Step 2: Request[P1] <=Available (at time t0, when system snapshot is taken)
i.e. (0, 4, 2, 0) <= (1, 5, 2, 0) ie: true.
Step 3: Available = Available – Request [P1] = (1, 5, 2, 0) - (0, 4, 2, 0) = (1, 1, 0, 0)
Allocation[P1] = Allocation[P1] + Request[P1] = (1 0 0 0) + (0, 4, 2, 0) = (1, 4, 2, 0)
Need[P1] = Need[P1] – Request[P1] = (0, 7, 5, 0) - (0, 4, 2, 0) = (0, 3, 3, 0)
We arrive at the following new system state:
     Allocation     Max          Available
     A B C D        A B C D      A B C D
P0   0 0 1 2        0 0 1 2      1 1 0 0
P1   1 4 2 0        1 7 5 0
P2   1 3 5 4        2 3 5 6
P3   0 6 3 2        0 6 5 2
P4   0 0 1 4        0 6 5 6
Need
A B C D
P0 0 0 0 0
P1 0 3 3 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
To determine whether this new system state is safe, we again execute Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = (1, 1, 0, 0)
Process P0 P1 P2 P3 P4
Finish False False False False False
Step 2: For i=0
Need[P0] <= work ie: (0 0 0 0) <= (1 1 0 0) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 1 0 0) + (0 0 1 2) = (1 1 1 2)
Set Finish[P0] = True
Step 2: For i=1
Need[P1] <= work ie: (0 3 3 0) <= (1 1 1 2) ie: False, So P1 must wait
Step 2: For i=2
Need[P2] <= work ie: (1 0 0 2) <= (1 1 1 2) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 1 1 2) + (1 3 5 4) = (2 4 6 6)
Set Finish[P2] = True
Step 2: For i = 3
Need[P3] <= work ie: (0 0 2 0) <= (2 4 6 6) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2 4 6 6) + (0 6 3 2) = (2, 10, 9, 8)
Set Finish[P3] = True
Step 2: For i=4
Need[P4] <= work ie: (0 6 4 2) <= (2, 10, 9, 8) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (2, 10, 9, 8) + (0 0 1 4) = (2, 10, 10, 12)
Set Finish[P4] = True
Step 2: For i=1
Need[P1] <= work ie: (0 3 3 0) <= (2, 10, 10, 12) ie: True, So P1 must be kept in safe
sequence.
Step3: Therefore work = work + Allocation [P1] = (2, 10, 10, 12) + (1 4 2 0) = (3, 14, 12, 12)
Set Finish[P1] = True
Step 4: Finish[Pi] = True for 0 <= i<= 4
Hence, the system is currently in a safe state. The safe sequence is <P0, P2, P3, P4, P1>.
Conclusion: Since the system is in a safe state, the request can be granted.
DEADLOCK DETECTION
SINGLE INSTANCE OF EACH RESOURCE TYPE
If every resource has only a single instance, deadlocks can be detected using a wait-for-graph, obtained from the resource-allocation graph by removing the resource nodes and collapsing the corresponding edges.
A deadlock exists in the system if and only if the wait-for-graph contains a cycle. To detect deadlocks, the system needs to 1) maintain the wait-for-graph and 2) periodically invoke an algorithm that searches for a cycle in the graph.
SEVERAL INSTANCES OF A RESOURCE TYPE
The wait-for-graph scheme is applicable only when every resource type has a single instance; it is not applicable when resource types have multiple instances. The following detection algorithm can be used when there are multiple instances of a resource type.
Assumptions:
Let n be the number of processes in the system and m be the number of resource types.
Following data structures are used to implement this algorithm.
Available [m]
This vector indicates the no. of available resources of each type.
If Available[j] = k, then k instances of resource type Rj are available.
Allocation [n][m]
This matrix indicates the no. of resources currently allocated to each process.
If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Request [n][m]
This matrix indicates the current request of each process.
If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
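The detection algorithm itself (the standard one, essentially the safety algorithm with Need replaced by Request) is not listed here; a minimal C sketch, reusing the N, M constants and <stdbool.h> from the Banker's sketch above:
/* Marks in deadlocked[i] every process that is deadlocked; returns true if any is. */
bool detect_deadlock(int available[M], int allocation[N][M], int request[N][M],
                     bool deadlocked[N])
{
    int  work[M];
    bool finish[N];

    for (int j = 0; j < M; j++)                 /* Step 1: Work = Available */
        work[j] = available[j];
    for (int i = 0; i < N; i++) {               /* Finish[i] = false only if Pi holds resources */
        finish[i] = true;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0) { finish[i] = false; break; }
    }

    bool progressed = true;
    while (progressed) {                        /* Step 2: find Pi with Finish[i]==false and Request_i <= Work */
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)     /* Step 3: reclaim Pi's resources */
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
    }

    bool any = false;
    for (int i = 0; i < N; i++) {               /* Step 4: any Pi with Finish[i]==false is deadlocked */
        deadlocked[i] = !finish[i];
        if (deadlocked[i]) any = true;
    }
    return any;
}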
DETECTION-ALGORITHM USAGE