OS
For example, consider a bank that stores the account balance of each customer in the
same database. Suppose you initially have x rupees in your account. Now you withdraw
some money from your account, and at the same time someone tries to read your account
balance. Since you are withdrawing money, the balance after the transaction will be lower
than x. But the transaction takes time, so the other person may read x as your account
balance in the meantime, which leads to inconsistent data. If we could somehow ensure
that only one process accesses the account at a time, the data would remain consistent.
In the above image, if Process1 and Process2 run at the same time, user 2 gets the wrong
account balance Y, because Process1's transaction is still in progress while the balance is X.
Inconsistency of data can occur when several processes share a common resource in a
system, which is why process synchronization is needed in the operating system.
● Independent Process: The execution of one process does not affect the
execution of other processes.
● Cooperative Process: A process that can affect or be affected by other
processes executing in the system.
The process synchronization problem arises with cooperative processes, because
cooperative processes share resources.
Race Condition
When more than one process executes the same code, or accesses the same memory or
any shared variable, there is a possibility that the output or the value of the shared
variable is wrong. All the processes are effectively racing for their result to be the final
one, which is why this situation is known as a race condition. Several processes access
and manipulate the same data concurrently, and the outcome depends on the particular
order in which the accesses take place. A race condition is a situation that may occur
inside a critical section: the result of multiple threads executing in the critical section
differs according to the order in which the threads execute. Race conditions in critical
sections can be avoided if the critical section is treated as an atomic instruction. Proper
thread synchronization using locks or atomic variables can also prevent race conditions.
Example:
Let's walk through an example to understand race conditions better.
Let's say there are two processes P1 and P2 that share a common variable (shared = 10),
and both are in the ready queue waiting for their turn to execute. Suppose process P1
executes first: the CPU copies the shared variable (shared = 10) into P1's local variable
(X = 10) and increments it by 1 (X = 11). When the CPU then reaches the line sleep(1),
it switches from process P1 to process P2 in the ready queue, and P1 goes into a waiting
state for 1 second.

The CPU now executes process P2 line by line: it copies the shared variable (shared = 10)
into P2's local variable (Y = 10) and decrements Y by 1 (Y = 9). When the CPU reaches
sleep(1), P2 goes into a waiting state, and the CPU stays idle for a while because the
ready queue is empty. After P1's 1 second elapses and it returns to the ready queue, the
CPU resumes P1 and executes its remaining line of code, storing the local variable
(X = 11) back into the shared variable (shared = 11). The CPU again idles until P2's
1 second elapses; when P2 returns to the ready queue, the CPU executes P2's remaining
line, storing the local variable (Y = 9) back into the shared variable (shared = 9).
Initially shared = 10

Process 1          Process 2
X = shared         Y = shared
X++                Y--
sleep(1)           sleep(1)
shared = X         shared = Y
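The interleaving above can be reproduced as a sketch in Python, with sleep(1) shortened. The sleeps force both threads to read shared before either writes it back, so one of the two updates is always lost:

```python
import threading
import time

shared = 10  # common variable

def p1():
    global shared
    x = shared          # X = shared  (reads 10)
    x += 1              # X++
    time.sleep(0.2)     # sleep(1), shortened; CPU switches to the other process
    shared = x          # shared = X  (writes 11)

def p2():
    global shared
    y = shared          # Y = shared  (reads 10)
    y -= 1              # Y--
    time.sleep(0.2)
    shared = y          # shared = Y  (writes 9)

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # 9 or 11 depending on which write lands last, never 10
```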
Note: We would expect the final value of the shared variable after executing Process P1
and Process P2 to be 10 (P1 increments shared from 10 to 11, and P2 then decrements it
from 11 to 10). But we get an undesired value due to the lack of proper synchronization,
because both processes read the initial value 10 before either writes back:
● If P1's write executes last, the final value of the common variable is shared = 11.
● If P2's write executes last, the final value of the common variable is shared = 9.
● The values 9 and 11 are racing: if we execute these two processes on our
computer system, sometimes we will get 9 and sometimes 11 as the final value
of the shared variable. This phenomenon is called a race condition.
Requirements of Synchronization
A solution to the critical section problem structures each process into an entry section,
where the process requests entry into the critical section, followed by the critical section
itself, an exit section, and a remainder section.
Any solution to the critical section problem must satisfy three requirements:
● Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
● Progress: If no process is executing in the critical section and other processes
are waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will
enter the critical section next, and the selection cannot be postponed
indefinitely.
● Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem.
In Peterson's solution, we have two shared variables:
● boolean flag[i]: initialized to FALSE; flag[i] = TRUE means process i is
interested in entering the critical section.
● int turn: indicates which process's turn it is to enter the critical section.
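Using these two variables, the entry and exit protocols can be sketched in Python for two threads (i = 0 and i = 1). This is illustrative only: on real hardware Peterson's algorithm needs sequentially consistent memory accesses, which plain Python variables in CPython happen to provide for this demonstration.

```python
import threading

flag = [False, False]  # flag[i]: thread i wants to enter its critical section
turn = 0               # whose turn it is to yield
counter = 0            # shared resource protected by the algorithm

def worker(i, iterations):
    global turn, counter
    j = 1 - i  # index of the other thread
    for _ in range(iterations):
        # entry section
        flag[i] = True
        turn = j                        # politely give the other thread priority
        while flag[j] and turn == j:
            pass                        # busy-wait until it is safe to enter
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i, 100)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 200: every increment survived, so mutual exclusion held
```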
Semaphores are integer variables used to coordinate the activities of multiple
processes in a computer system. They are used to enforce mutual exclusion, avoid race
conditions, and implement synchronization between processes.
Semaphores provide two operations: wait (P) and signal (V). The wait operation
decrements the value of the semaphore, and the signal operation increments it. When the
value of the semaphore is zero, any process that performs a wait operation is blocked
until another process performs a signal operation.
Semaphores are used to implement critical sections, which are regions of code that must
be executed by only one process at a time. By using semaphores, processes can
coordinate access to shared resources, such as shared memory or I/O devices.
A semaphore is a special kind of synchronization data that can be used only through
specific synchronization primitives. When a process performs a wait operation on a
semaphore, the operation checks whether the value of the semaphore is >0. If so, it
decrements the value of the semaphore and lets the process continue its execution;
otherwise, it blocks the process on the semaphore. A signal operation on a semaphore
activates a process blocked on the semaphore if any, or increments the value of the
semaphore by 1. Due to these semantics, semaphores are also called counting
semaphores. The initial value of a semaphore determines how many processes can get
past the wait operation.
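The wait/signal semantics described above can be sketched as a small Python class built on a condition variable. This is an illustrative implementation of the counting-semaphore idea, not how any particular OS implements it:

```python
import threading

class CountingSemaphore:
    """Sketch of counting-semaphore semantics using a condition variable."""

    def __init__(self, initial=1):
        self._value = initial
        self._cond = threading.Condition()

    def wait(self):                     # the P operation
        with self._cond:
            while self._value == 0:
                self._cond.wait()       # block until a signal arrives
            self._value -= 1            # value > 0: take one unit and continue

    def signal(self):                   # the V operation
        with self._cond:
            self._value += 1
            self._cond.notify()         # wake one blocked process, if any

# Demo: a thread blocked on wait() is released by signal()
sem = CountingSemaphore(0)
arrived = []
t = threading.Thread(target=lambda: (sem.wait(), arrived.append("got it")))
t.start()
sem.signal()   # unblocks the waiting thread
t.join()
print(arrived)  # ['got it']
```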
1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1. Its
value is initialized to 1. It is used to implement the solution of critical section
problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a
resource that has multiple instances.
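As a sketch of a counting semaphore in use, the following Python example limits concurrent access to a resource with three instances (the "printer pool" and all counts here are made up for illustration):

```python
import threading
import time

printers = threading.Semaphore(3)  # counting semaphore: 3 printer instances
lock = threading.Lock()            # protects the bookkeeping counters below
in_use = 0
peak = 0

def use_printer():
    global in_use, peak
    printers.acquire()             # wait: take one of the 3 instances
    with lock:
        in_use += 1
        peak = max(peak, in_use)   # record the highest concurrency seen
    time.sleep(0.05)               # simulate printing
    with lock:
        in_use -= 1
    printers.release()             # signal: return the instance to the pool

threads = [threading.Thread(target=use_printer) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 3, even with 10 competing threads
```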
The two operations, wait and signal, are used to access and change the value of the
semaphore variable. With a binary semaphore initialized to 1, only one process at a time
can complete the wait operation and enter its critical section, so mutual exclusion is
achieved. The image below illustrates this with a binary semaphore.
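Applied to the earlier shared-variable example, a binary semaphore serializes the whole read-sleep-write sequence, so the lost update disappears. A sketch using Python's threading.Semaphore:

```python
import threading
import time

shared = 10
mutex = threading.Semaphore(1)  # binary semaphore acting as a mutex lock

def increment():
    global shared
    mutex.acquire()             # wait (P): enter the critical section
    x = shared
    time.sleep(0.05)            # a context switch here can no longer corrupt shared
    shared = x + 1
    mutex.release()             # signal (V): exit the critical section

def decrement():
    global shared
    mutex.acquire()
    y = shared
    time.sleep(0.05)
    shared = y - 1
    mutex.release()

t1, t2 = threading.Thread(target=increment), threading.Thread(target=decrement)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # always 10: the +1 and the -1 both take effect
```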
Deadlock
A deadlock is a set of blocked processes, each holding a resource and waiting to
acquire a resource held by another process in the set.
In the above figure, process T0 holds resource1 and requires resource2 in order
to finish its execution. Similarly, process T1 holds resource2 and needs to
acquire resource1 to finish its execution. T0 and T1 are thus in a deadlock:
each needs the other's resource to complete its execution, and neither is
willing to give up its own resource.
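The T0/T1 scenario can be sketched with two locks acquired in opposite orders. The acquire timeouts are there only so the demonstration terminates instead of hanging forever; in a real deadlock, both threads would wait indefinitely:

```python
import threading
import time

resource1, resource2 = threading.Lock(), threading.Lock()
results = {}

def t0():
    with resource1:                 # T0 holds resource1
        time.sleep(0.1)             # give T1 time to grab resource2
        # T0 now needs resource2, which T1 holds -> circular wait
        results["T0"] = resource2.acquire(timeout=0.5)
        if results["T0"]:
            resource2.release()

def t1():
    with resource2:                 # T1 holds resource2
        time.sleep(0.1)
        results["T1"] = resource1.acquire(timeout=0.5)
        if results["T1"]:
            resource1.release()

a, b = threading.Thread(target=t0), threading.Thread(target=t1)
a.start(); b.start()
a.join(); b.join()
print(results)  # with this timing, both acquires typically time out: deadlock
```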
Under the normal mode of operation, a process may utilize a resource in the
following sequence:
1. Request:
First, the process requests the resource. If the request cannot be granted
immediately (e.g., the resource is being used by another process), then the
requesting process must wait until it can acquire the resource.
2. Use:
The process can operate on the resource (e.g., if the resource is a printer,
the process can print on it).
3. Release:
The process releases the resource.
Starvation vs Deadlock
● Starvation: Starvation occurs when low-priority processes remain blocked
indefinitely while high-priority processes continue to execute.
● Deadlock: Deadlock occurs when every process in a set is blocked, each
waiting for a resource held by another process in the same set.
Now in the next section, we will cover the conditions that are
required to cause deadlock.
Necessary Conditions
The deadlock situation can only arise if all the following four
conditions hold simultaneously:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Deadlock Avoidance
Safe State
When a process requests an available resource, the system must decide
whether immediate allocation leaves the system in a safe state.
•The system is in a safe state if there exists a sequence <P1, P2,
…, Pn> of all the processes in the system such that, for each Pi,
the resources that Pi can still request can be satisfied by the
currently available resources plus the resources held by all Pj,
with j < i
•That is:
•If Pi's resource needs are not immediately available, then Pi
can wait until all Pj (j < i) have finished
•When Pj is finished, Pi can obtain the needed resources,
execute, return its allocated resources, and terminate
•When Pi terminates, Pi+1 can obtain its needed resources,
and so on
Basic Facts
•If a system is in a safe state ⇒ no deadlocks
Avoidance algorithms
•Single instance of a resource type
•Use a resource-allocation graph
Resource-Allocation Graph
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
•Multiple instances
•Each process must declare its maximum resource use (a
priori)
•When a process requests a resource it may have to wait
•When a process gets all its resources it must return them in
a finite amount of time
Safety Algorithm
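The safety algorithm can be sketched in Python. The matrices below are assumed textbook-style example values, not data from this document:

```python
# Sketch of the Banker's safety algorithm: find an ordering of the processes
# in which every process can obtain its remaining need and finish.
def is_safe(available, max_need, allocation):
    n = len(allocation)          # number of processes
    m = len(available)           # number of resource types
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = available[:]          # resources currently free
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion and return all its resources
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []     # no remaining process can proceed -> unsafe
    return True, sequence

# Assumed example: 5 processes, 3 resource types
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, seq = is_safe(available, max_need, allocation)
print(safe, seq)  # True with a safe sequence covering all 5 processes
```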