Deadlocks

The document discusses key concepts in operating systems, focusing on resource management, deadlocks, and their prevention and detection. It outlines the conditions necessary for deadlocks to occur, methods for handling them, and algorithms like the Banker's Algorithm for ensuring safe resource allocation. Additionally, it explains the importance of maintaining a safe state to avoid deadlocks and the implications of resource allocation graphs in detecting and resolving deadlocks.


OPERATING SYSTEM CONCEPTS 21BCA4C12L | HR

// release lock
signal(m);
}

Explanation:

• As seen above in the code for the writer, the writer simply waits on the w semaphore until it
gets a chance to write to the resource.
• After performing the write operation, it signals w (incrementing it) so that the next writer can
access the resource.
• In the code for the reader, on the other hand, the lock m is acquired whenever
read_count is updated by a process.
• When a reader wants to access the resource, it first increments the read_count value, then
accesses the resource, and finally decrements read_count when it is done.
• The semaphore w is used only by the first reader that enters the critical section and the last
reader that exits it.
• The reason for this is that when the first reader enters the critical section, the writer is blocked
from the resource; only new readers can access the resource now.
• Similarly, when the last reader exits the critical section, it signals the writer using
the w semaphore, because there are now zero readers and a writer can get the chance to
access the resource. (A sketch of both routines is given below.)
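For reference, here is a minimal, self-contained sketch in C of the readers-writers solution described
above, using POSIX semaphores (the text's wait()/signal() correspond to sem_wait()/sem_post());
the function names writer() and reader() and the initial semaphore values are illustrative
assumptions rather than part of the original code.

#include <semaphore.h>

/* Shared state; initialize once at startup with:
 *   sem_init(&w, 0, 1);  sem_init(&m, 0, 1);                          */
sem_t w;                 /* guards the resource (writers, first/last reader) */
sem_t m;                 /* lock protecting read_count                       */
int   read_count = 0;    /* number of readers currently reading              */

void writer(void) {
    sem_wait(&w);        /* wait until no reader or writer holds the resource */
    /* ... write to the shared resource ... */
    sem_post(&w);        /* let the next writer (or the first reader) in */
}

void reader(void) {
    sem_wait(&m);        /* acquire lock before updating read_count */
    read_count++;
    if (read_count == 1)
        sem_wait(&w);    /* first reader blocks writers */
    sem_post(&m);        /* release lock */

    /* ... read the shared resource ... */

    sem_wait(&m);        /* acquire lock before updating read_count */
    read_count--;
    if (read_count == 0)
        sem_post(&w);    /* last reader lets writers in again */
    sem_post(&m);        /* release lock */
}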

3.2 Deadlocks – System Model


A deadlock is a situation in which a set of blocked processes each hold a resource while
waiting to acquire a resource held by another process in the set.

In the figure above, process T0 holds resource1 and requires resource2 in order to finish
its execution. Similarly, process T1 holds resource2 and needs to acquire resource1 to finish
its execution. T0 and T1 are therefore in a deadlock: each needs the resource held by the
other to complete its execution, but neither is willing to give up the resource it already
holds.
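As a concrete illustration (not taken from the original notes), the sketch below reproduces the
T0/T1 situation in C with two POSIX mutexes standing in for resource1 and resource2; because each
thread grabs its first lock and then waits for the one the other thread holds, the program blocks
forever. The sleep() calls merely make the unlucky interleaving all but certain.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

void *t0(void *arg) {                 /* plays the role of process T0   */
    pthread_mutex_lock(&resource1);   /* T0 now holds resource1         */
    sleep(1);                         /* give T1 time to take resource2 */
    pthread_mutex_lock(&resource2);   /* ... and waits here forever     */
    printf("T0 finished\n");
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

void *t1(void *arg) {                 /* plays the role of process T1   */
    pthread_mutex_lock(&resource2);   /* T1 now holds resource2         */
    sleep(1);
    pthread_mutex_lock(&resource1);   /* ... and waits here forever     */
    printf("T1 finished\n");
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, t0, NULL);
    pthread_create(&b, NULL, t1, NULL);
    pthread_join(a, NULL);            /* never returns: both threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}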


In general, a process must request a resource before using it and must release the
resource after using it. A process may request as many resources as it requires to complete
its designated task, subject to the condition that the number of resources requested may
not exceed the total number of resources available in the system. In the normal mode of
operation, a process utilizes a resource in the following sequence:
1. Request: First, the process requests the resource. If the request cannot be granted
immediately (e.g., the resource is being used by another process), the requesting process
must wait until it can acquire the resource.
2. Use: The process operates on the resource (e.g., if the resource is a printer, the process
prints on it).
3. Release: The process releases the resource.
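This request/use/release protocol maps naturally onto semaphore operations. The sketch below is
only an illustration of the sequence; the names printer_sem, NUM_PRINTERS and print_job are
assumptions, not part of the original text.

#include <semaphore.h>

#define NUM_PRINTERS 2            /* assumed number of identical printers */

sem_t printer_sem;                /* counting semaphore: one permit per printer                */
                                  /* initialize once: sem_init(&printer_sem, 0, NUM_PRINTERS); */

void print_job(const char *doc) {
    sem_wait(&printer_sem);       /* 1. Request: wait until a printer is available */
    /* ... 2. Use: send doc to the printer ... */
    sem_post(&printer_sem);       /* 3. Release: hand the printer back             */
}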

3.2.2 Deadlock Characterization

Necessary Conditions
The deadlock situation can only arise if all the following four conditions hold
simultaneously:

1. Mutual Exclusion: At least one resource must be non-shareable (a non-shareable resource
is one that can be used by only one process at a time).
2. Hold and Wait: A process is holding at least one resource and is waiting to acquire
additional resources held by other processes.
3. No Preemption: Resources cannot be forcibly taken from a process; a resource can be
released only voluntarily by the process holding it.
4. Circular Wait: A set of processes are waiting for each other in circular fashion.
The four conditions are not completely independent; for example, the circular wait
condition implies the hold and wait condition. We emphasize that all four conditions must
hold simultaneously for a deadlock to occur.

3.2.3 Methods for Handling Deadlocks

• Generally speaking, there are three ways of handling deadlocks:


1. Deadlock prevention or avoidance - Do not allow the system to get into a
deadlocked state.
2. Deadlock detection and recovery - Abort a process or preempt some resources
when deadlocks are detected.
3. Ignore the problem altogether - If deadlocks only occur once a year or so, it may be
better to simply let them happen and reboot as necessary than to incur the constant
overhead and system performance penalties associated with deadlock prevention or
detection. This is the approach that both Windows and UNIX take.
• In order to avoid deadlocks, the system must have additional information about all
processes. In particular, the system must know what resources a process will or may
request in the future. ( Ranging from a simple worst-case maximum to a complete resource
request and release plan for each process, depending on the particular algorithm. )


• Deadlock detection is fairly straightforward, but deadlock recovery requires either aborting
processes or preempting resources, neither of which is an attractive alternative.
• If deadlocks are neither prevented nor detected, then when a deadlock occurs the system
will gradually slow down, as more and more processes become stuck waiting for resources
currently held by the deadlocked processes and by other waiting processes. Unfortunately,
this slowdown can be indistinguishable from a general system slowdown when a real-time
process has heavy computing needs.

3.2.4 Deadlock Prevention

• Deadlocks can be prevented by preventing at least one of the four required conditions:

Mutual Exclusion

• Shared resources such as read-only files do not lead to deadlocks.


• Unfortunately some resources, such as printers and tape drives, require exclusive
access by a single process.

Hold and Wait

• To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others. There are several
possibilities for this:
• Require that all processes request all resources at one time. This can be wasteful of
system resources if a process needs one resource early in its execution and doesn't
need some other resource until much later.
• Require that processes holding resources must release them before requesting new
resources, and then re-acquire the released resources along with the new ones in a
single new request. This can be a problem if a process has partially completed an
operation using a resource and then fails to get it re-allocated after releasing it.
• Either of the methods described above can lead to starvation if a process requires
one or more popular resources.

No Preemption

• Preemption of process resource allocations can prevent this condition of deadlock, when
it is possible.
• One approach is that if a process is forced to wait when requesting a new resource,
then all other resources previously held by this process are implicitly released,
( preempted ), forcing this process to re-acquire the old resources along with the new
resources in a single request, similar to the previous discussion.
• Another approach is that when a resource is requested and not available, then the
system looks to see what other processes currently have those resources and are
themselves blocked waiting for some other resource. If such a process is found, then
some of their resources may get preempted and added to the list of resources for
which the process is waiting.


• Either of these approaches may be applicable for resources whose states are easily
saved and restored, such as registers and memory, but are generally not applicable
to other devices such as printers and tape drives.

Circular Wait

• One way to avoid circular wait is to number all resources, and to require that
processes request resources only in strictly increasing ( or decreasing ) order.
• In other words, in order to request resource Rj, a process must first release all Ri
such that i >= j.
• One big challenge in this scheme is determining the relative ordering of the different
resources. (A lock-ordering sketch is shown below.)
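The following is a minimal sketch of this idea in C; the resource numbering, the lock names and
copy_tape_to_printer() are illustrative assumptions. Because every process acquires locks only in
increasing resource number, no circular wait can form.

#include <pthread.h>

/* Each resource is given a fixed number; processes must request
 * resources only in increasing numeric order.                    */
enum { R_PRINTER = 1, R_TAPE = 2 };                         /* R1 < R2 */

pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;   /* resource R1 */
pthread_mutex_t tape_lock    = PTHREAD_MUTEX_INITIALIZER;   /* resource R2 */

void copy_tape_to_printer(void) {
    pthread_mutex_lock(&printer_lock);   /* always take the lower-numbered resource first */
    pthread_mutex_lock(&tape_lock);      /* then the higher-numbered one                  */
    /* ... use both resources ... */
    pthread_mutex_unlock(&tape_lock);
    pthread_mutex_unlock(&printer_lock);
}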

3.2.5 Deadlock Avoidance

• The general idea behind deadlock avoidance is to keep the system from ever entering a
deadlocked state, by examining each resource request in advance and granting it only when
it cannot lead to a deadlock.
• This requires more information about each process, AND tends to lead to low device
utilization. ( I.e. it is a conservative approach. )
• In some algorithms the scheduler only needs to know the maximum number of each
resource that a process might potentially use. In more complex algorithms the scheduler
can also take advantage of the schedule of exactly what resources may be needed in what
order.
• When a scheduler sees that starting a process or granting resource requests may lead to
future deadlocks, then that process is just not started or the request is not granted.
• A resource allocation state is defined by the number of available and allocated resources,
and the maximum requirements of all processes in the system.
Safe State
• A state is safe if the system can allocate all resources requested by all processes ( up to
their stated maximums ) without entering a deadlock state.
• More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ...,
PN } such that all of the resource requests for Pi can be granted using the resources
currently allocated to Pi and all processes Pj where j < i. ( I.e. if all the processes prior to Pi
finish and free up their resources, then Pi will be able to finish also, using the resources
that they have freed up. )
• If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to
deadlock. ( All safe states are deadlock free, but not all unsafe states lead to deadlocks. )
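Stated with the vector notation used below for the Banker's Algorithm (an added formal restatement,
not part of the original notes): a state is safe if the processes can be ordered P0, P1, ..., PN such
that for every Pi,

    Need[ i ] <= Available + Allocation[ 0 ] + Allocation[ 1 ] + ... + Allocation[ i - 1 ]

i.e. Pi's remaining maximum demand can be met by the currently free resources plus everything
released by the processes ordered before it.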

Figure - Safe, unsafe, and deadlocked state spaces.


• For example, consider a system with 12 tape drives, allocated as follows. Is this a safe
state? What is the safe sequence? (A worked answer is given after this list.)

    Process   Maximum Needs   Current Allocation
    P0        10              5
    P1         4              2
    P2         9              2

• What happens to the above table if process P2 requests and is granted one more tape
drive?
• Key to the safe state approach is that when a request is made for resources, the request is
granted only if the resulting allocation state is a safe one.
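Worked answer for the tape-drive example above: 5 + 2 + 2 = 9 drives are allocated, so 3 of the 12
are free. P1 needs at most 4 - 2 = 2 more drives, so it can finish and release its 4 drives, leaving
5 free; P0 then needs at most 10 - 5 = 5 more, finishes, and releases 10; finally P2's remaining need
of 7 can be met. The state is therefore safe, with safe sequence < P1, P0, P2 >. If P2 is granted one
more drive, only 2 drives remain free: P1 can still finish (leaving 4 free), but neither P0 (needing 5
more) nor P2 (now needing 6 more) is guaranteed to finish, so the resulting state is unsafe and, under
deadlock avoidance, the request should not be granted.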

Resource-Allocation Graph Algorithm

• If resource categories have only single instances of their resources, then deadlock states
can be detected by cycles in the resource-allocation graphs.
• In this case, unsafe states can be recognized and avoided by augmenting the resource-
allocation graph with claim edges, noted by dashed lines, which point from a process to a
resource that it may request in the future.
• In order for this technique to work, all claim edges must be added to the graph for any
particular process before that process is allowed to request any resources. ( Alternatively,
processes may only make requests for resources for which they have already established
claim edges, and claim edges cannot be added to any process that is currently holding
resources. )
• When a process makes a request, the claim edge Pi->Rj is converted to a request edge.
Similarly, when a resource is released, the assignment edge reverts back to a claim edge.
• This approach works by denying requests that would produce cycles in the resource-
allocation graph, taking claim edges into account.
• Consider for example what happens when process P2 requests resource R2:

Figure 7.7 - Resource allocation graph for deadlock avoidance


• The resulting resource-allocation graph would have a cycle in it, and so the request cannot
be granted.


Figure 7.8 - An unsafe state in a resource allocation graph


Banker's Algorithm
• For resource categories that contain more than one instance the resource-allocation graph
method does not work, and more complex ( and less efficient ) methods must be chosen.
• The Banker's Algorithm gets its name because it is a method that bankers could use to
assure that when they lend out resources they will still be able to satisfy all their clients. ( A
banker won't loan out a little money to start building a house unless they are assured that
they will later be able to loan out the rest of the money to finish the house. )
• When a process starts up, it must state in advance the maximum allocation of resources it
may request, up to the amount available on the system.
• When a request is made, the scheduler determines whether granting the request would
leave the system in a safe state. If not, then the process must wait until the request can be
granted safely.
• The banker's algorithm relies on several key data structures: ( where n is the number of
processes and m is the number of resource categories. )
o Available[ m ] indicates how many resources are currently available of each type.
o Max[ n ][ m ] indicates the maximum demand of each process of each resource.
o Allocation[ n ][ m ] indicates the number of each resource category allocated to each
process.
o Need[ n ][ m ] indicates the remaining resources needed of each type for each
process. ( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i, j. )
• For simplification of discussions, we make the following notations / observations:
o One row of the Need matrix, Need[ i ], can be treated as a vector corresponding to
the needs of process i, and similarly for Allocation and Max.
o A vector X is considered to be <= a vector Y if X[ i ] <= Y[ i ] for all i.
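Using these data structures, the safety test at the heart of the Banker's Algorithm can be sketched
as below; this is a minimal illustration assuming fixed array sizes N and M and a helper leq(),
none of which come from the original notes.

#include <stdbool.h>
#include <string.h>

#define N 5   /* number of processes (assumption for illustration) */
#define M 3   /* number of resource types                          */

int Available[M];
int Max[N][M], Allocation[N][M], Need[N][M];   /* Need = Max - Allocation */

/* true if every element of row x is <= the corresponding element of y */
static bool leq(const int x[M], const int y[M]) {
    for (int j = 0; j < M; j++)
        if (x[j] > y[j]) return false;
    return true;
}

/* Returns true if the current state is safe; if safe_seq is non-NULL,
 * it is filled with one safe ordering of the process indices.          */
bool is_safe(int safe_seq[N]) {
    int  Work[M];
    bool Finish[N] = { false };
    int  count = 0;

    memcpy(Work, Available, sizeof(Work));        /* Work = Available */
    while (count < N) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (!Finish[i] && leq(Need[i], Work)) {
                for (int j = 0; j < M; j++)       /* Pi can finish and return */
                    Work[j] += Allocation[i][j];  /* everything it holds      */
                Finish[i] = true;
                if (safe_seq) safe_seq[count] = i;
                count++;
                progress = true;
            }
        }
        if (!progress) return false;   /* no process can proceed: unsafe */
    }
    return true;                       /* all processes can finish       */
}

A request by process i would then be granted only if the request is <= Need[ i ] and <= Available,
and is_safe() still returns true after tentatively performing the allocation.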

3.2.6 Deadlock Detection and Recovery from Deadlock

• If deadlocks are not avoided, then another approach is to detect when they have occurred
and recover somehow.
• In addition to the performance hit of constantly checking for deadlocks, a policy /
algorithm must be in place for recovering from deadlocks, and there is potential for lost
work when processes must be aborted or have their resources preempted.
Single Instance of Each Resource Type
• If each resource category has a single instance, then we can use a variation of the resource-
allocation graph known as a wait-for graph.


• A wait-for graph can be constructed from a resource-allocation graph by eliminating the
resources and collapsing the associated edges, as shown in the figure below.
• An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource
that process Pj is currently holding.

Figure - (a) Resource allocation graph. (b) Corresponding wait-for graph


• As before, cycles in the wait-for graph indicate deadlocks.
• This algorithm must maintain the wait-for graph, and periodically search it for cycles.
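A minimal sketch of such a cycle search is shown below, representing the wait-for graph as an
adjacency matrix and using a depth-first search; the size N and the names waits_for and
deadlock_detected are assumptions for illustration.

#include <stdbool.h>

#define N 5                        /* number of processes (assumption) */

/* waits_for[i][j] is true if Pi is waiting for a resource held by Pj
 * (an edge Pi -> Pj in the wait-for graph).                           */
bool waits_for[N][N];

static bool dfs(int p, bool on_path[N], bool visited[N]) {
    on_path[p] = visited[p] = true;
    for (int q = 0; q < N; q++) {
        if (!waits_for[p][q]) continue;
        if (on_path[q]) return true;                       /* back edge => cycle */
        if (!visited[q] && dfs(q, on_path, visited)) return true;
    }
    on_path[p] = false;
    return false;
}

/* Returns true if the wait-for graph contains a cycle, i.e. a deadlock. */
bool deadlock_detected(void) {
    bool visited[N] = { false };
    for (int p = 0; p < N; p++) {
        bool on_path[N] = { false };
        if (!visited[p] && dfs(p, on_path, visited))
            return true;
    }
    return false;
}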
Several Instances of a Resource Type
• The detection algorithm outlined here is essentially the same as the Banker's algorithm,
with two subtle differences:
1. In step 1, the Banker's Algorithm sets Finish[ i ] to false for all i. The algorithm
presented here sets Finish[ i ] to false only if Allocation[ i ] is not zero. If the
currently allocated resources for this process are zero, the algorithm sets Finish[ i ]
to true. This is essentially assuming that IF all of the other processes can finish, then
this process can finish also. Furthermore, this algorithm is specifically looking for
which processes are involved in a deadlock situation, and a process that does not
have any resources allocated cannot be involved in a deadlock, and so can be
removed from any further consideration.
2. Steps 2 and 3 are unchanged.
3. In step 4, the basic Banker's Algorithm says that if Finish[ i ] == true for all i, then
there is no deadlock. This algorithm is more specific: if Finish[ i ] == false for any
process Pi, then that process is specifically involved in the deadlock which has been
detected.
• Note: An alternative method was presented above, in which Finish held integers instead of
Booleans. This vector would be initialized to all zeros, and then filled with increasing
integers as processes are detected which can finish. If any processes are left at zero when
the algorithm completes, then there is a deadlock, and if not, then the integers in Finish
describe a safe sequence. To modify this algorithm to match this section of the text,
processes with allocation = zero could be filled in with N, N - 1, N - 2, etc. in step 1, and any
processes left with Finish = 0 in step 4 are the deadlocked processes. (A code sketch of this
detection algorithm is given after the example below.)
• Consider, for example, the following state, and determine if it is currently deadlocked:


• Now suppose that process P2 makes a request for an additional instance of type C, yielding
the state shown below. Is the system now deadlocked?
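For completeness, here is a sketch of the multiple-instance detection algorithm just described; it
reuses the N, M, Available, Allocation and leq() declarations from the Banker's sketch above and adds
a Request matrix, all of which are illustrative assumptions.

/* Outstanding (not yet granted) requests of each process. */
int Request[N][M];

/* Fills deadlocked[i] = true for every process involved in a deadlock;
 * returns true if any deadlock exists.                                  */
bool detect_deadlock(bool deadlocked[N]) {
    int  Work[M];
    bool Finish[N];
    memcpy(Work, Available, sizeof(Work));

    for (int i = 0; i < N; i++) {              /* step 1: a process that holds   */
        Finish[i] = true;                      /* nothing cannot be deadlocked,  */
        for (int j = 0; j < M; j++)            /* so mark it finished up front   */
            if (Allocation[i][j] != 0) Finish[i] = false;
    }

    bool progress = true;                      /* steps 2-3: as in the Banker's  */
    while (progress) {                         /* safety test, but compare Work  */
        progress = false;                      /* against Request, not Need      */
        for (int i = 0; i < N; i++) {
            if (!Finish[i] && leq(Request[i], Work)) {
                for (int j = 0; j < M; j++)
                    Work[j] += Allocation[i][j];   /* assume Pi eventually finishes */
                Finish[i] = true;
                progress = true;
            }
        }
    }

    bool any = false;                          /* step 4: whoever is still not    */
    for (int i = 0; i < N; i++) {              /* finished is part of a deadlock  */
        deadlocked[i] = !Finish[i];
        if (deadlocked[i]) any = true;
    }
    return any;
}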

Detection-Algorithm Usage
• When should the deadlock detection be done? Frequently, or infrequently?
• The answer may depend on how frequently deadlocks are expected to occur, as well as the
possible consequences of not catching them immediately. ( If deadlocks are not removed
immediately when they occur, then more and more processes can "back up" behind the
deadlock, making the eventual task of unblocking the system more difficult and possibly
damaging to more processes. )
• There are two obvious approaches, each with trade-offs:
1. Do deadlock detection after every resource allocation which cannot be immediately
granted. This has the advantage of detecting the deadlock right away, while the
minimum number of processes are involved in the deadlock. ( One might consider
that the process whose request triggered the deadlock condition is the "cause" of
the deadlock, but realistically all of the processes in the cycle are equally
responsible for the resulting deadlock. ) The down side of this approach is the
extensive overhead and performance hit caused by checking for deadlocks so
frequently.
2. Do deadlock detection only when there is some clue that a deadlock may have
occurred, such as when CPU utilization reduces to 40% or some other magic
number. The advantage is that deadlock detection is done much less frequently, but
the down side is that it becomes impossible to detect the processes involved in the
original deadlock, and so deadlock recovery can be more complicated and damaging
to more processes.
3. ( As I write this, a third alternative comes to mind: Keep a historical log of resource
allocations since the last known time of no deadlocks. Do deadlock checks
periodically ( once an hour, or when CPU usage is low? ), and then use the historical
log to trace through and determine when the deadlock occurred and what
processes caused the initial deadlock. Unfortunately I'm not certain that breaking
the original deadlock would then free up the resulting log jam. )


Deadlock Recovery

• There are three basic approaches to recovery from deadlock:


1. Inform the system operator, and allow him/her to intervene manually.
2. Terminate one or more processes involved in the deadlock.
3. Preempt resources.
Process Termination
• Two basic approaches, both of which recover resources allocated to terminated processes:
o Terminate all processes involved in the deadlock. This definitely solves the deadlock, but at
the expense of terminating more processes than would be absolutely necessary.
o Terminate processes one by one until the deadlock is broken. This is more conservative, but
requires doing deadlock detection after each step.
• In the latter case there are many factors that can go into deciding which processes to terminate
next:
o Process priorities.
o How long the process has been running, and how close it is to finishing.
o How many and what type of resources the process is holding. ( Are they easy to preempt
and restore? )
o How many more resources the process needs in order to complete.
o How many processes will need to be terminated.
o Whether the process is interactive or batch.
o ( Whether or not the process has made non-restorable changes to any resource. )
Resource Preemption
• When preempting resources to relieve deadlock, there are three important issues to be addressed:
1. Selecting a victim - Deciding which resources to preempt from which processes involves
many of the same decision criteria outlined above.
2. Rollback - Ideally one would like to roll back a preempted process to a safe state prior to
the point at which that resource was originally allocated to the process. Unfortunately it
can be difficult or impossible to determine what such a safe state is, and so the only safe
rollback is to roll back all the way back to the beginning. ( I.e. abort the process and make it
start over. )
3. Starvation - How do you guarantee that a process won't starve because its resources are
constantly being preempted? One option would be to use a priority system, and increase
the priority of a process every time its resources get preempted. Eventually it should get a
high enough priority that it won't get preempted any more.

UNIT 04
4.1 Memory Management
Main memory refers to the physical memory that is internal to the computer.
The word main is used to distinguish it from external mass storage devices such as disk drives.
Main memory is also known as RAM. The computer can directly change only data that is in main
memory. Therefore, every program we execute and every file we access must be copied from a
storage device into main memory.

All programs are loaded into main memory for execution. Sometimes the complete
program is loaded into memory, but sometimes only a certain part or routine of the program is