Deadlock Detection and Its Algorithm
What is Deadlock?
Deadlock can be defined as a situation in which two programs sharing the same resources are effectively
preventing each other from accessing those resources, with the result that neither program can make progress.
The earliest computer operating systems ran only one program at a time. All of the resources of the system
were available to this one program. Later, operating systems ran multiple programs at once, interleaving
them. Programs were required to specify in advance what resources they needed so that they could avoid
conflicts with other programs running at the same time. Eventually some operating systems offered
dynamic allocation of resources. Programs could request further allocations of resources after they had
begun running. This led to the problem of deadlock. The simplest example: program A holds resource 1 and
requests resource 2, while program B holds resource 2 and requests resource 1, so neither can ever proceed.
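That situation can be reproduced with two threads and two locks. A minimal sketch in Python (the thread
and lock names are illustrative, not part of any particular system): each thread grabs one lock, then
blocks forever waiting for the lock held by the other.

    import threading
    import time

    lock_a = threading.Lock()   # stands in for resource 1 (e.g., a printer)
    lock_b = threading.Lock()   # stands in for resource 2 (e.g., a scanner)

    def program_a():
        with lock_a:            # holds resource 1 ...
            time.sleep(0.1)
            with lock_b:        # ... then waits forever for resource 2
                print("program A finished")

    def program_b():
        with lock_b:            # holds resource 2 ...
            time.sleep(0.1)
            with lock_a:        # ... then waits forever for resource 1
                print("program B finished")

    t1 = threading.Thread(target=program_a, daemon=True)
    t2 = threading.Thread(target=program_b, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=2); t2.join(timeout=2)
    if t1.is_alive() and t2.is_alive():
        print("deadlocked: each program holds one resource and waits for the other")

With daemon threads and a join timeout, the sketch reports the deadlock instead of hanging forever. All
four necessary conditions listed below hold in this example.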
Learning to deal with deadlocks had a major impact on the development of operating systems and the
structure of databases. Data was structured and the order of requests was constrained in order to avoid
creating deadlocks.
All of the following four necessary conditions must hold simultaneously for deadlock to occur:
Mutual Exclusion
o Each resource is either available or currently assigned to exactly one process
Hold and wait
o A process holds at least one resource while waiting to acquire additional resources held by other processes
No preemption
o Resources previously granted cannot be forcibly taken away from a process
o They must be explicitly released by the process holding them
Circular wait
o There must be a circular chain of two or more processes, each of which is waiting for a
resource held by the next member of the chain
Handling Deadlocks
To ensure that deadlocks never occur, the system can use either a deadlock-
prevention or deadlock-avoidance scheme.
Deadlock prevention provides a set of methods for ensuring that at least one of the necessary
conditions cannot hold. These methods prevent deadlocks by constraining how requests for the
resources can be made.
Deadlock avoidance requires that the operating system be given in advance additional information
concerning which resources a process will request and use during its lifetime. With this additional
knowledge, it can decide for each request whether or not the process should wait. To decide
whether the current request can be satisfied or must be delayed, the system must consider the
resources currently available, the resources currently allocated to each process, and the future
requests and releases of each process.
If a system does not employ either a deadlock prevention or a deadlock avoidance algorithm, then
a deadlock situation may arise. In this environment, the system can provide an algorithm that
examines the state of the system to determine whether a deadlock has occurred and an algorithm
to recover from the deadlock (if a deadlock has indeed occurred).
If a system neither ensures that a deadlock will never occur nor provides a mechanism for deadlock
detection and recovery, then we may arrive at a situation where the system is in a deadlocked state, yet has
no way of recognizing what has happened. In this case, the undetected deadlock will cause the system’s
performance to deteriorate, because resources are being held by processes that cannot run and because
more and more processes, as they make requests for resources, will enter a deadlocked state. Eventually,
the system will stop functioning and will need to be restarted manually. Although this may not seem a
viable approach to the deadlock problem, it is nevertheless used in most operating systems.
Deadlock Avoidance
Most prevention algorithms have poor resource utilization and hence result in reduced throughput. We
can instead try to avoid deadlocks by using prior knowledge about how processes will use resources,
including the resources available, the resources already allocated, and the future requests and releases of
each process. Most deadlock avoidance algorithms require every process to state in advance the maximum
number of resources of each type that it may need. Based on all this information, we can decide whether a
process should wait for a resource or not, and thus avoid any chance of a circular wait.
If a system is already in a safe state, we can try to keep it away from unsafe states and thereby avoid
deadlock; deadlocks cannot be reliably avoided once the system is in an unsafe state. A system is in a safe
state if it is not deadlocked and there is some order in which it can allocate to every process the resources
it may still request, up to its declared maximum. Such a safe sequence of processes and allocations is what
guarantees a safe state. Deadlock avoidance algorithms refuse to allocate resources to a process if doing so
would put the system into an unsafe state. Since resource allocation is sometimes delayed as a result,
deadlock avoidance algorithms also suffer from low resource utilization.
A resource allocation graph can be used to avoid deadlocks. If there are no cycles in the resource
allocation graph, then there is no deadlock. If there are cycles, there may be a deadlock; if there is only
one instance of every resource, then a cycle implies a deadlock. The vertices of the resource allocation
graph are resources and processes, and the graph has request edges and assignment edges. An edge from a
process to a resource is a request edge, and an edge from a resource to a process is an assignment edge. A
claim edge denotes that a request may be made in the future and is represented as a dashed line. Based on
claim edges we can see whether granting a request could create a cycle, and we grant the request only if
the system would still be in a safe state.
The resource allocation graph is of little use if there are multiple instances of a resource. In such a
case, we can use the Banker’s algorithm. In this algorithm, every process must state up front the maximum
number of resources of each type it may need, subject to the maximum available instances of each type.
Resources are allocated only if the allocation leaves the system in a safe state; otherwise the process must
wait. The Banker’s algorithm can be divided into two parts: the safety algorithm, which determines
whether the system is in a safe state, and the resource-request algorithm, which tentatively makes the
allocation and checks whether the resulting state would be safe. If the new state is unsafe, the resources are
not allocated and the data structures are restored to their previous state; in this case the process must wait
for the resources.
A resource allocation graph tracks which resource is held by which process and which process is
waiting for a resource of a particular type. It is a simple but powerful tool for illustrating how interacting
processes can deadlock. If a process is using a resource, an arrow is drawn from the resource node to the
process node. If a process is requesting a resource, an arrow is drawn from the process node to the resource
node.
If there is a cycle in the resource allocation graph and each resource in the cycle has only one
instance, then the processes involved will deadlock. For example, if process 1 holds resource A, process 2
holds resource B, process 1 is waiting for B, and process 2 is waiting for A, then processes 1 and 2 will be
deadlocked.
Banker’s Algorithm
The Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm developed by Edsger
Dijkstra. It tests for safety by simulating the allocation of the predetermined maximum possible amounts of
all resources, and then performs a safe-state check on all other pending activities, before deciding whether
the allocation should be allowed to continue.
The algorithm was developed in the design process for the THE operating system and originally described
(in Dutch) in EWD108. When a new process enters a system, it must declare the maximum number of
instances of each resource type that it may ever claim; clearly, that number may not exceed the total
number of resources in the system. Also, when a process gets all its requested resources it must return
them in a finite amount of time.
Safe State
A state is safe if the system can allocate all resources requested by all processes (up to their stated
maximums) without entering a deadlock state.
More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ..., PN } such that
the resources each Pi may still request can be granted using the currently available resources plus the
resources held by all processes Pj with j < i. (That is, if all the processes before Pi finish and free up their
resources, then Pi will also be able to finish, using the resources they have freed.)
If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock. (All
safe states are deadlock free, but not all unsafe states lead to deadlocks.)
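Putting the Banker’s algorithm and the safe-state check together, here is a minimal sketch in Python. The
function names, argument layout (Available vector, Allocation and Max matrices), and the example
numbers are assumptions made for illustration, not taken from the text.

    def is_safe(available, allocation, max_demand):
        """Safety algorithm: return (True, safe_sequence) if a safe sequence exists."""
        n, m = len(allocation), len(available)          # number of processes, resource types
        need = [[max_demand[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
        work, finished, sequence = list(available), [False] * n, []
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                    # Pi could run to completion and then release everything it holds.
                    for j in range(m):
                        work[j] += allocation[i][j]
                    finished[i] = True
                    sequence.append(i)
                    progress = True
        return (all(finished), sequence if all(finished) else [])

    def request_resources(pid, request, available, allocation, max_demand):
        """Resource-request algorithm: grant the request only if the resulting state is safe."""
        m = len(available)
        need = [max_demand[pid][j] - allocation[pid][j] for j in range(m)]
        if any(request[j] > need[j] for j in range(m)):
            raise ValueError("process exceeded its declared maximum claim")
        if any(request[j] > available[j] for j in range(m)):
            return False                                 # not available: the process must wait
        for j in range(m):                               # pretend to allocate ...
            available[j] -= request[j]
            allocation[pid][j] += request[j]
        safe, _ = is_safe(available, allocation, max_demand)
        if not safe:                                     # ... and roll back if the state is unsafe
            for j in range(m):
                available[j] += request[j]
                allocation[pid][j] -= request[j]
        return safe

    # Two processes, two resource types (made-up numbers).
    available  = [3, 3]
    allocation = [[1, 0], [2, 1]]
    max_demand = [[4, 2], [3, 2]]
    print(is_safe(available, allocation, max_demand))                       # (True, [0, 1])
    print(request_resources(1, [1, 0], available, allocation, max_demand))  # True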
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least
one of these conditions cannot hold, we can prevent the occurrence of a deadlock. Let’s elaborate
on this approach by examining each of the four necessary conditions separately.
Mutual Exclusion
The mutual exclusion condition must hold for non-sharable resources. For example, a printer
cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not
require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good
example of a sharable resource. However, in general, we cannot prevent deadlocks by denying the
mutual-exclusion condition, because some resources are intrinsically non-sharable.
Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that,
whenever a process requests a resource, it does not hold any other resources.
One protocol that can be used requires each process to request and be allocated all its resources
before it begins execution. We can implement this provision by requiring that system calls
requesting resources for a process precede all other system calls.
An alternative protocol allows a process to request resources only when it has none. A process
may request some resources and use them. Before it can request any additional resources,
however, it must release all the resources that it is currently allocated.
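A minimal sketch of the first protocol in Python (the ResourceManager class and the resource names are
hypothetical): a process asks one central manager for everything it needs in a single step, and if anything
is busy it waits while holding nothing, so hold-and-wait can never arise.

    import threading

    class ResourceManager:
        """Grants a process all of its resources in one step, or makes it wait
        while holding nothing, so the hold-and-wait condition cannot arise."""

        def __init__(self, resource_names):
            self._free = set(resource_names)
            self._cond = threading.Condition()

        def acquire_all(self, names):
            with self._cond:
                # Block until every requested resource is free; nothing is held meanwhile.
                self._cond.wait_for(lambda: set(names) <= self._free)
                self._free -= set(names)

        def release_all(self, names):
            with self._cond:
                self._free |= set(names)
                self._cond.notify_all()

    manager = ResourceManager({"printer", "scanner", "tape_drive"})
    manager.acquire_all({"printer", "scanner"})   # everything needed, requested up front
    # ... use both resources ...
    manager.release_all({"printer", "scanner"})

The second protocol is the same idea applied repeatedly: release_all must be called before the next
acquire_all, so the process never requests anything while still holding a resource.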
Both these protocols have two main disadvantages:
• Resource utilization may be low, since resources may be allocated but unused for a long
period.
• Starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to some
other process.
No preemption
To ensure that the no-preemption condition does not hold, we can use one of the following two
approaches:
• If a process is holding some resources and requests another resource that cannot be
immediately allocated to it, then all the resources currently being held are preempted. In
other words, these resources are implicitly released. The preempted resources are added
to the list of resources for which the process is waiting. The process will be restarted only
when it can regain its old resources, as well as the new ones that it is requesting.
• If a process requests some resources, we first check whether they are available. If they are,
we allocate them. If they are not, we check whether they are allocated to some other
process that is waiting for additional resources. If so, we preempt the desired resources from
the waiting process and allocate them to the requesting process. If the resources are
neither available nor held by a waiting process, the requesting process must wait. While it is
waiting, some of its resources may be preempted, but only if another process requests
them. A process can be restarted only when it is allocated the new resources it is
requesting and recovers any resources that were preempted while it was waiting.
This protocol is often applied to resources whose state can easily be saved and restored later,
such as CPU registers and memory. It cannot generally be applied to resources such as printers
and tape drives.
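A minimal sketch of the first approach in Python, treating locks as the resources (the function name and
retry delay are invented): if the new resource is not immediately free, everything the process holds is
implicitly released, and the process restarts only when it can regain the old resources together with the
new one.

    import threading
    import time

    def request_with_preemption(held, new_lock, retry_delay=0.01):
        """held: locks the process already owns. Returns the full list of locks it
        holds once the request has been satisfied."""
        if new_lock.acquire(blocking=False):
            return held + [new_lock]              # granted immediately, nothing preempted
        for lock in held:                         # the held resources are preempted
            lock.release()
        wanted = held + [new_lock]
        while True:                               # restart only when all can be regained
            got = []
            for lock in wanted:
                if lock.acquire(blocking=False):
                    got.append(lock)
                else:
                    for g in got:                 # back out completely and try again later
                        g.release()
                    break
            else:
                return wanted
            time.sleep(retry_delay)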
Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a total ordering of all resource
types and to require that each process requests resources in increasing order of enumeration.
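A minimal sketch in Python (the resource names and their ranks are invented): every resource type gets a
fixed position in a global enumeration, and a helper acquires whatever a process asks for in that order, so
two processes can never end up waiting on each other in a cycle.

    import threading

    # Global enumeration of resource types; all processes must honour this order.
    ORDER = {"scanner": 1, "printer": 2, "tape_drive": 3}
    LOCKS = {name: threading.Lock() for name in ORDER}

    def acquire_in_order(names):
        """Acquire the named resources in increasing order of enumeration."""
        for name in sorted(names, key=ORDER.__getitem__):
            LOCKS[name].acquire()

    def release_all(names):
        for name in names:
            LOCKS[name].release()

    # Both of these calls lock the scanner before the printer, however the caller
    # lists them, so a circular chain of waits is impossible.
    acquire_in_order(["printer", "scanner"])
    release_all(["printer", "scanner"])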
Deadlock Detection
If deadlock prevention and avoidance are not in place (or fail), a deadlock may occur, and the only
things left to do are to detect the deadlock and recover from it.
If every resource type has only a single instance, then we can use a graph called the wait-for graph,
which is a variant of the resource allocation graph. Here, vertices represent processes, and a directed
edge from P1 to P2 indicates that P1 is waiting for a resource held by P2. As in the resource allocation
graph, a cycle in a wait-for graph indicates a deadlock, so the system can maintain a wait-for graph and
check it for cycles periodically to detect any deadlocks.
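A minimal sketch in Python (the process names and edges are invented): the wait-for graph is kept as a
mapping from each blocked process to the process it is waiting on, and a periodic check walks the chains
looking for a cycle. This assumes each blocked process waits for exactly one resource, which is the
single-instance case described above.

    def find_deadlock(wait_for):
        """wait_for maps each process to the process it waits on (or None if it is
        not blocked). Returns the list of processes in a cycle, or [] if none."""
        for start in wait_for:
            seen = []
            current = start
            while current is not None and current not in seen:
                seen.append(current)
                current = wait_for.get(current)
            if current is not None:                 # the walk re-entered a visited process
                return seen[seen.index(current):]   # that suffix is the cycle
        return []

    # P1 waits for P2, P2 waits for P3, P3 waits for P1; P4 is running normally.
    wait_for = {"P1": "P2", "P2": "P3", "P3": "P1", "P4": None}
    print(find_deadlock(wait_for))                  # ['P1', 'P2', 'P3']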
The wait-for graph is of little use if there are multiple instances of a resource, since a cycle then may
not imply a deadlock. In such a case, we can use an algorithm similar to the Banker’s algorithm to detect
deadlock: based on the current allocations, we check whether the outstanding requests of every process
can eventually be satisfied. You can refer to any operating systems textbook for the full details of these
algorithms.
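A minimal sketch of that detection idea in Python: the shape mirrors the safety check shown earlier, but it
uses each process's current outstanding requests instead of its declared maximum, and it reports which
processes are deadlocked. The numbers below are invented for illustration.

    def deadlocked_processes(available, allocation, request):
        """Return the indices of processes that cannot possibly finish."""
        n, m = len(allocation), len(available)
        work = list(available)
        # A process holding nothing cannot be part of a deadlock.
        finished = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finished[i] and all(request[i][j] <= work[j] for j in range(m)):
                    # Optimistically assume Pi finishes and releases what it holds.
                    for j in range(m):
                        work[j] += allocation[i][j]
                    finished[i] = True
                    progress = True
        return [i for i in range(n) if not finished[i]]

    # P0 holds one unit of R0 and wants R1; P1 holds one unit of R1 and wants R0.
    available  = [0, 0]
    allocation = [[1, 0], [0, 1]]
    request    = [[0, 1], [1, 0]]
    print(deadlocked_processes(available, allocation, request))   # [0, 1]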
Deadlock Recovery
Deadlock recovery is performed once a deadlock has been detected. While the deadlock persists, the
processes involved cannot make progress, so after detection the system must apply some method to break
the deadlock and get the system running again; this step is called deadlock recovery. The main ways of
recovering from a deadlock are discussed briefly below.
Recovery through preemption relies on the ability to take a resource away from a process, let another
process use it, and then give it back without the process noticing. Whether this is possible depends highly
on the nature of the resource.
In recovery through rollback, processes are checkpointed periodically, so whenever a deadlock is detected
it is easy to see which resources are needed. To recover, a process that owns a needed resource is rolled
back to a point in time before it acquired that resource, simply by restarting it from one of its earlier
checkpoints.
Recovery through killing processes is the simplest way to recover from a deadlock: one or more processes
in the cycle are killed until the deadlock is broken. Where possible, it is best to kill a process that can be
rerun from the beginning with no ill effects.