os-3
CONCURRENCY
Process Synchronization:
It is the task of coordinating the execution of processes so that no two processes can access the
same shared data and resources at the same time.
Race Condition
A race condition arises when more than one process is executing the same code, or accessing the
same memory or shared variable, at the same time.
In that situation there is a possibility that the output, or the value of the shared variable, is
wrong; the processes are effectively racing, and the result depends on which one finishes last.
Because several processes access and manipulate the same data concurrently, the outcome depends
on the particular order in which the accesses of the data take place.
This condition typically occurs inside the critical section.
Critical Section:
A critical section is a segment of code which can be accessed by only a single process at a specific
point of time. The section contains shared data and resources that need to be accessed by other processes.
A Critical Section is a code segment that accesses shared variables and has to be executed as
an atomic action.
It means that in a group of cooperating processes, at a given point of time, only one process
must be executing its critical section.
If any other process also wants to execute its critical section, it must wait until the first one
finishes.
Entry Section
In this section the process requests entry into its critical section.
Exit Section
In this section the process leaves its critical section, signalling that other processes may now enter.
A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a given point of
time.
2. Progress
If no process is in its critical section, and if one or more threads want to execute their critical section
then any one of these threads must be allowed to get into its critical section.
3. Bounded Waiting
After a process makes a request for getting into its critical section, there is a limit for how many other
processes can get into their critical section, before this process's request is granted. So after the limit is
reached, the system must grant the process permission to get into its critical section.
Solutions for the Critical Section
The critical section plays an important role in process synchronization, so the critical section
problem must be solved.
Some widely used methods to solve the critical section problem are as follows:
1. Peterson's Solution
This is a widely used, software-based solution to the critical section problem. It is named after the
computer scientist Gary L. Peterson, who developed it.
With this solution, whenever one process is executing in its critical section, the other process
executes only the rest of its code, and vice versa. This ensures that only a single process can run
in the critical section at a specific time.
Mutual Exclusion is assured, as at any time only one process can access the critical section.
Progress is also assured, as a process that is outside the critical section is unable to block
other processes from entering the critical section.
Bounded Waiting is assured, as every process gets a fair chance to enter the critical section.
The structure of process Pi in Peterson's solution works as follows.
Suppose there are N processes (P1, P2, ... PN), and at some point of time every process
requires entry into the critical section.
A FLAG[] array of size N is maintained, which is false by default. Whenever a process
requires entry into the critical section, it has to set its flag to true. Example: if Pi wants to
enter, it will set FLAG[i] = TRUE.
Another variable, called TURN, indicates whose turn it is to enter the critical section.
The process that enters the critical section changes TURN, while exiting, to another
number from the list of processes that are ready.
Example: if TURN is 3 then P3 enters the critical section, and while exiting it sets TURN = 4, so
P4 breaks out of its wait loop.
Synchronization Hardware
Synchronization hardware is a hardware-based solution to resolve the critical section problem. In our
earlier content of the critical section, we have discussed how the multiple processes sharing common
resources must be synchronized to avoid inconsistent results.
The hardware-based solution to the critical section problem is based on a simple tool: a lock. The
solution implies that before entering its critical section a process must acquire the lock, and it
must release the lock when it exits its critical section. Using a lock also prevents race conditions.
1. Mutual Exclusion: The hardware instruction must verify that at a point in time only one process
can be in its critical section.
2. Bounded Waiting: The processes interested to execute their critical section must not wait for
long to enter their critical section.
3. Progress: The process not interested in entering its critical section must not block other
processes from entering into their critical section.
The TestAndSet() hardware instruction is an atomic instruction. Atomic means that the test operation
and the set operation are executed together, in one uninterruptible machine cycle. If two different
processes execute TestAndSet() simultaneously, each on a different CPU, they will still be executed
sequentially in some arbitrary order.
Now, as a solution to the critical section problem, how can this TestAndSet() instruction be used
to achieve mutual exclusion, bounded waiting, and progress? Let us take these one by one. First, we
will try to achieve mutual exclusion using the TestAndSet() instruction; for that, a Boolean
variable lock is declared globally and initialized to false.
Consider that two processes, P0 and P1, are interested in entering their critical sections. The
structure for achieving mutual exclusion is as follows.
do {
    while (TestAndSet(&lock))
        ;                   // spin until the lock was FALSE
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Let’s say process P0 wants to enter the critical section. It executes the code above, whose while
loop invokes the TestAndSet() instruction.
Using the TestAndSet() instruction, P0 sets the lock value to true, thereby acquiring the lock, and
enters the critical section.
Now, while P0 is in its critical section, process P1 also wants to enter its critical section. It
executes the do-while loop and invokes the TestAndSet() instruction, only to find that the lock is
already set to true, which means some process is in the critical section. This makes P1 repeat the
while loop until P0 turns the lock back to false.
Once process P0 finishes executing its critical section, it sets the lock variable to false. Then
P1 can set the lock variable to true using the TestAndSet() instruction and enter its critical
section.
This is how mutual exclusion is achieved with the do-while structure above: it lets only one
process execute its critical section at a time.
Semaphores
In very simple words, a semaphore is a variable that can hold only a non-negative integer
value, shared between all the threads, with two operations, wait and signal, which work as follows:
Wait: This operation decrements the value of its argument S, but only when S is positive
(greater than or equal to 1). If the value of S is zero, the caller waits and no decrement
is executed. This operation mainly helps to control the entry of a task into the critical
section. The wait() operation was originally termed P, so it is also known as the
P(S) operation. The definition of the wait operation is as follows:
wait(S)
{
while (S<=0);//no operation
S--;
}
Note:
When one process modifies the value of a semaphore, no other process can
simultaneously modify that same semaphore's value. In the definition above, the test
of S (S <= 0) as well as the modification S-- must be executed without any
interruption.
Signal: This operation increments the value of its argument S, allowing a process blocked on the
semaphore to proceed. It is mainly used to control the exit of a task from the critical
section. The signal() operation was originally termed V, so it is also known as the V(S) operation.
The definition of the signal operation is as follows:
signal(S)
{
S++;
}
Types of Semaphores
1. Binary Semaphore:
It is a special form of semaphore used for implementing mutual exclusion, hence it is often
called a mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1 during
the execution of a program. In a binary semaphore, the wait operation proceeds only if the value
of the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary
semaphores are easier to implement than counting semaphores.
2. Counting Semaphores:
These are used to implement bounded concurrency. Counting semaphores can range over an
unrestricted domain. They can be used to control access to a resource that consists of
a finite number of instances. Here the semaphore count indicates the number of
available instances: when an instance is released the count is incremented, and when an
instance is acquired the count is decremented. A counting semaphore by itself provides
no mutual exclusion.
Mutex
Mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a
unique name at the start of a program. The Mutex is a locking mechanism that makes sure only
one thread can acquire the Mutex at a time and enter the critical section. This thread only
releases the Mutex when it exits the critical section.
wait (mutex);
…..
Critical Section
…..
signal (mutex);
Below are some of the classical problems depicting flaws of process synchronization in systems
where cooperating processes are present.
Producer-Consumer Problem
There is a buffer of n slots, and each slot is capable of storing one unit of data. There are two
processes running, namely the producer and the consumer, which operate on the buffer.
A producer tries to insert data into an empty slot of the buffer. A consumer tries to remove data from a
filled slot in the buffer. As you might have guessed by now, those two processes won't produce the
expected output if they are being executed concurrently.
There needs to be a way to make the producer and consumer work in an independent manner.
Here's a Solution
One solution to this problem is to use semaphores. The semaphores used here are: m, a binary
semaphore for locking the buffer; empty, a counting semaphore initialized to n that tracks empty
slots; and full, a counting semaphore initialized to 0 that tracks filled slots.
Dining Philosophers Problem
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses
two chopsticks: one from his left and one from his right. When a philosopher wants to think, he puts
both chopsticks down at their original places.
From the problem statement, it is clear that a philosopher can think for an indefinite amount of time.
But when a philosopher starts eating, he has to stop at some point of time. The philosopher is in an
endless cycle of thinking and eating.
while(TRUE)
{
    wait(stick[i]);           /* pick up left chopstick */
    wait(stick[(i+1) % 5]);   /* pick up right chopstick */
    /* eat */
    signal(stick[i]);         /* put down left chopstick */
    signal(stick[(i+1) % 5]); /* put down right chopstick */
    /* think */
}
When a philosopher wants to eat, he waits for the chopstick at his left and picks it up. Then he
waits for the right chopstick to be available and picks it up too. After eating, he puts both
chopsticks down. But if all five philosophers become hungry simultaneously and each of them picks
up one chopstick, a deadlock occurs, because each will be waiting for the other chopstick forever.
The possible solutions for this are:
A philosopher must be allowed to pick up the chopsticks only if both the left and right chopsticks
are available.
Allow only four philosophers to sit at the table. That way, if all the four philosophers pick up four
chopsticks, there will be one chopstick left on the table. So, one philosopher can start eating and
eventually, two chopsticks will be available. In this way, deadlocks can be avoided.
From the above problem statement, it is evident that readers have higher priority than writer. If a writer
wants to write to the resource, it must wait until there are no readers currently accessing that resource.
Here, we use one mutex m and a semaphore w. An integer variable read_count is used to maintain the
number of readers currently accessing the resource. The variable read_count is initialized to 0. A value
of 1 is given initially to m and w.
The code for the writer process looks like this:
while(TRUE)
{
    wait(w);
    /* perform the write operation */
    signal(w);
}
And, the code for the reader process looks like this:
while(TRUE)
{
    // acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);
    // release lock
    signal(m);

    /* perform the reading operation */

    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);
    // release lock
    signal(m);
}
As seen above in the code for the writer, the writer just waits on the w semaphore until it gets a
chance to write to the resource.
After performing the write operation, it signals w so that the next writer (or a reader) can access
the resource.
On the other hand, in the code for the reader, the lock is acquired whenever the read_count is
updated by a process.
When a reader wants to access the resource, first it increments the read_count value, then
accesses the resource and then decrements the read_count value.
The semaphore w is used by the first reader which enters the critical section and the last reader
which exits the critical section.
The reason for this is that when the first reader enters the critical section, the writer is blocked
from the resource. Only readers can access the resource now.
Similarly, when the last reader exits the critical section, it signals the writer using
the w semaphore because there are zero readers now and a writer can have the chance to
access the resource.
Monitors in Operating System
Monitors are used for process synchronization. With the help of programming languages, we can use a
monitor to achieve mutual exclusion among the processes.
Characteristics of Monitors
1. A monitor is a group of procedures and condition variables that are merged together in a
special type of module.
2. If a process is running outside the monitor, it cannot access the monitor's internal variables.
But a process can call the procedures of the monitor.
3. Only one process can be active at a time inside the monitor.
Components of Monitor
1. Initialization
2. Private data
3. Monitor procedure
Initialization: - Initialization comprises the code that is run exactly once, when the monitor is
created.
Private Data: - Private data is another component of the monitor. It comprises all the private data, and
the private data contains private procedures that can only be used within the monitor. So, outside the
monitor, private data is not visible.
Monitor Procedure: - Monitors Procedures are those procedures that can be called from outside the
monitor.
Monitor Entry Queue: - The monitor entry queue is another essential component of the monitor; it
holds all the threads that have called a monitor procedure but are waiting to become active inside
the monitor.
Wait Operation
a.wait(): - A process that performs a wait operation on a condition variable is suspended and
placed in a queue of processes blocked on that condition variable.
Signal Operation
a.signal(): - If a signal operation is performed by a process on a condition variable, then one of
the processes blocked on that variable is given a chance to run.
Monitors vs Semaphores
In monitors, wait always blocks the caller; in semaphores, wait does not always block the caller.
Condition variables are present in monitors; condition variables are not present in semaphores.
Deadlocks
Deadlocks are a set of blocked processes each holding a resource and waiting to acquire a
resource held by another process.
For example, suppose process T0 holds resource1 and requires resource2 in order to finish its
execution, while process T1 holds resource2 and needs to acquire resource1 to finish its
execution. T0 and T1 are in a deadlock, because each of them needs the other's resource to
complete its execution, and neither is willing to give up its own resource.
In general, a process must request a resource before using it, and it must release the resource
after using it. Any process may request as many resources as it requires in order to complete its
designated task, provided the number of resources requested does not exceed the total number of
resources available in the system.
Starvation vs Deadlock
Starvation: when low-priority processes get blocked while high-priority processes execute.
Deadlock: a situation that occurs when processes get blocked waiting for each other's resources.
Starvation is a long wait, but not an infinite one. Deadlock is an infinite wait.
Not every starvation is a deadlock. There is starvation in every deadlock.
Necessary Conditions
The deadlock situation can only arise if all the following four conditions hold simultaneously:
1. Mutual Exclusion
At least one resource must be held in a non-sharable mode; only one process can use the resource
at a time.
2. Hold and Wait
A process is holding at least one resource and is waiting for additional resources.
3. No Preemption
Resources cannot be taken from a process; resources can be released only voluntarily by the
process holding them.
4. Circular Wait
In this condition, a set of processes are waiting for each other in circular form.
The above four conditions are not completely independent, as the circular wait condition implies
the hold and wait condition. We emphasize that all four conditions must hold for a deadlock.
Deadlock conditions can be avoided with the help of a number of methods. Let us take a look at
some of the methods.
Methods that are used in order to handle the problem of deadlocks are as follows:
1. Deadlock Prevention
As discussed in the above section, a deadlock occurs when all four conditions, mutual exclusion,
hold and wait, no preemption, and circular wait, hold in a system. The main aim of the deadlock
prevention method is to violate any one of the four conditions; if any one condition is violated,
the problem of deadlock can never occur. The idea behind this method is simple, but difficulties
can arise during its physical implementation in the system.
2. Deadlock Avoidance
This method is used by the operating system to check whether the system is in a safe state or in
an unsafe state. It checks every step performed by the operating system. Any process continues its
execution as long as the system remains in a safe state. Once the system enters an unsafe state,
the operating system has to take a step back.
Basically, with the help of this method, the operating system keeps an eye on each allocation and
makes sure that the allocation does not cause any deadlock in the system.
3. Deadlock Detection and Recovery
With this method, a deadlock is detected first, using algorithms based on the resource-allocation
graph. This graph is mainly used to represent the allocations of various resources to different
processes. After the detection of a deadlock, a number of methods can be used in order to recover
from it.
One way is preemption, by which a resource held by one process is given to another process.
The second way is rollback: the operating system keeps a record of process states, so it can roll
a process back to a previous state, and the deadlock situation can thereby be eliminated.
The third way to overcome the deadlock situation is by killing one or more processes.
Deadlock Prevention in Operating System
As we are already familiar with all the necessary conditions for a deadlock. In brief,
the conditions are as follows:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
Mutual Exclusion
This condition must hold for non-sharable resources. For example, a printer cannot
be simultaneously shared by several processes. In contrast, Sharable resources do
not require mutually exclusive access and thus cannot be involved in a deadlock. A
good example of a sharable resource is Read-only files because if several processes
attempt to open a read-only file at the same time, then they can be granted
simultaneous access to the file.
A process need not wait for a sharable resource. Generally, deadlocks cannot be
prevented by denying the mutual exclusion condition, because there are some
resources that are intrinsically non-sharable.
Hold and Wait
The hold and wait condition occurs when a process holds a resource while also
waiting for some other resource in order to complete its execution. To prevent
this condition, we must guarantee that whenever a process requests a resource,
it does not hold any other resources.
There are some protocols that can be used to ensure that the hold and wait
condition never occurs:
According to the first protocol, each process must request and be allocated all its
resources before it begins execution.
The second protocol allows a process to request resources only when it does
not occupy any resource.
No Preemption
According to the First Protocol: "If a process that is already holding some
resources requests another resource and if the requested resources cannot
be allocated to it, then it must release all the resources currently allocated to
it."
According to the Second Protocol: "When a process requests some resources,
if they are available, then allocate them. If the requested resource is not
available, then we check whether it is being used or is allocated to some other
process that is waiting for other resources. If that resource is not being used,
then the operating system preempts it from the waiting process and allocates it
to the requesting process. If that resource is being used, then the requesting
process must wait."
Circular Wait
To violate this condition, we can impose a total ordering on all resource types and
require that each process requests resources only in an increasing order of
enumeration, so that a circular chain of waiting processes can never form.
Deadlock Avoidance
In this method, a request for any resource will be granted only if the resulting state of the
system does not cause any deadlock. This method checks every step performed by the operating
system. Any process continues its execution as long as the system remains in a safe state. Once
the system enters an unsafe state, the operating system has to take a step back.
With the help of a deadlock-avoidance algorithm, you can dynamically assess the resource-
allocation state so that there can never be a circular-wait situation.
Deadlock avoidance can mainly be done with the help of Banker's Algorithm.
A state is safe if the system can allocate resources to each process( up to its maximum
requirement) in some order and still avoid a deadlock.
In an Unsafe state, the operating system cannot prevent processes from requesting resources
in such a way that any deadlock occurs. It is not necessary that all unsafe states are
deadlocks; an unsafe state may lead to a deadlock.
The Banker's Algorithm maintains the following data structures:
1. Available
2. Max
3. Allocation
4. Need
It consists of two parts: a safety algorithm and a resource-request algorithm.
Safety Algorithm
A safety algorithm is an algorithm used to find whether or not a system is in its safe state.
The algorithm is as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initially,
Work = Available
Finish[i] = false for i = 0, 1, ..., n - 1
This means that, initially, no process has finished and the number of available resources is
represented by the Available array.
2. Find an index i such that both
Finish[i] == false
Needi <= Work
If no such i exists, go to step 4.
3. Perform the following:
Work = Work + Allocationi
Finish[i] = true
Go to step 2.
When an unfinished process whose need can be satisfied is found, its resources are (conceptually)
returned to the pool and the process is marked finished. The loop is then repeated for all other
processes.
4. If Finish[i] == true for all i, then the system is in a safe state.
That means if all processes can finish, the system is in a safe state.
This algorithm may require on the order of m x n^2 operations to determine whether a state is
safe or not.
Now the next algorithm is a resource-request algorithm and it is mainly used to determine
whether requests can be safely granted or not.
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:
1. If Requesti <= Needi, go to step 2; otherwise raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3; otherwise Pi must wait, as the resources are not
available.
3. Now we assume that the resources are assigned to process Pi and perform the following
steps:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state turns out to be safe, the transaction is completed
and process Pi is allocated its resources. If the new state is unsafe, then Pi waits for
Requesti and the old resource-allocation state is restored.
Let us consider the following snapshot for understanding the Banker's Algorithm:

Process  Allocation  Max
P1       2 1 2       3 2 2
P2       4 0 1       9 0 2
P3       0 2 0       7 5 3
P4       1 1 2       1 1 2

1. Calculate the content of the Need matrix.
2. Check whether the system is in a safe state.
3. Determine the total amount of each type of resource.

Solution:
1. The content of the Need matrix can be calculated using the formula Need = Max - Allocation:

Process  Need
P1       1 1 0
P2       5 0 1
P3       7 3 3
P4       0 0 0

2. With Available = (2, 1, 0):
The request of P1 is granted; on completion it releases its allocation:
Available = (2, 1, 0) + (2, 1, 2) = (4, 2, 2)
The request of P4 is granted:
Available = (4, 2, 2) + (1, 1, 2) = (5, 3, 4)
The request of P2 is granted:
Available = (5, 3, 4) + (4, 0, 1) = (9, 3, 5)
The request of P3 is granted:
Available = (9, 3, 5) + (0, 2, 0) = (9, 5, 5)
The system can allocate all the needed resources to each process, so the system is in a safe
state, with safe sequence <P1, P4, P2, P3>.
3. The total amount of resources is calculated as total allocated plus available:
= [8 5 7] + [2 1 0] = [10 6 7]
Deadlock Detection
Deadlock detection and recovery require:
An algorithm that examines the state of the system in order to determine whether a deadlock
has occurred.
An algorithm that is used to recover from the deadlock.
Thus, in order to get rid of deadlocks, the operating system periodically checks the system for
any deadlock. After finding a deadlock, the operating system recovers from it using recovery
techniques.
Now, the main task of the operating system is to detect the deadlocks and this is done
with the help of Resource Allocation Graph.
Single Instance of Each Resource Type
If all the resources have only a single instance, then a deadlock-detection algorithm can be
defined that mainly uses the variant of the resource-allocation graph and is known as a wait-
for graph. This wait-for graph is obtained from the resource-allocation graph by removing its
resource nodes and collapsing its appropriate edges.
An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to
release a resource that Pi needs. An edge Pi -> Pj exists in a wait-for graph if and only if the
corresponding resource-allocation graph contains the two edges Pi -> Rq and Rq -> Pj for some
resource Rq.
A deadlock exists in the system if and only if there is a cycle in the wait-for graph.
The wait-for graph scheme is not applicable to a resource-allocation system with multiple
instances of each resource type. We now move on to a deadlock detection algorithm that is
applicable to such systems.
This algorithm mainly uses several time-varying data structures that are similar to those used
in Banker's Algorithm and these are as follows:
1. Available
2. Allocation
3. Request
Request is an n x m matrix that indicates the current request of each process; if Request[i][j]
equals k, then process Pi is requesting k more instances of resource type Rj.
The rows of the Allocation and Request matrices are treated as vectors and referred to as
Allocationi and Requesti. The detection algorithm given below simply investigates every possible
allocation sequence for the processes that remain to be completed.
1. Let Work and Finish be vectors of length m and n, respectively. Initially,
Work = Available
For i = 0, 1, ..., n - 1: if Allocationi != 0 then Finish[i] = false, otherwise Finish[i] = true.
2. Find an index i such that both
Finish[i] == false
Requesti <= Work
If no such i exists, go to step 4.
3. Perform the following:
Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state, and
process Pi is deadlocked.
When a detection algorithm determines that a deadlock exists, there are several alternatives
available. One possibility is to inform the operator about the deadlock and let him deal with
the problem manually.
The other possibility is to let the system recover from the deadlock automatically. Two options
are mainly used to break a deadlock: process termination and resource preemption.
Process Termination
In order to eliminate deadlock by aborting the process, we will use one of two methods given
below. In both methods, the system reclaims all resources that are allocated to the terminated
processes.
Aborting all deadlocked processes: Clearly, this method helps in breaking the deadlock cycle,
but it is an expensive approach. It is not advisable, but it can be used if the problem becomes
very serious. If all the deadlocked processes are killed, their partial computations are lost
and those processes must execute again from the beginning.
Abort one process at a time until the deadlock cycle is eliminated: This method can be used,
but we have to decide which process to kill, and the method incurs considerable overhead. The
operating system typically kills first the process that has done the least amount of work.
Resource Preemption
In order to eliminate the deadlock by using resource preemption, we will successively
preempt some resources from processes and will give these resources to some other
processes until the deadlock cycle is broken and there is a possibility that the system will
recover from deadlock. But there are chances that the system goes into starvation.