
CH 3 Process Synchronization

Process synchronization (or simply synchronization) is the way in which processes that share the same memory space are managed in an operating system. It helps maintain the consistency of data by using variables or hardware so that only one process can make changes to the shared memory at a time. There are various solutions to this problem, such as semaphores, mutex locks, and synchronization hardware.

What is Process Synchronization in OS?


An operating system is software that manages all applications on a device and helps in the smooth functioning of our computer. For this reason, the operating system has to perform many tasks, sometimes simultaneously. This isn't usually a problem unless these simultaneously executing processes use a common resource.

For example, consider a bank that stores the account balance of each customer in the same database. Suppose you initially have x rupees in your account. Now, you withdraw some money from your account, and at the same time, someone tries to look at the amount of money stored in your account. Since you are withdrawing money, after the transaction the total balance left will be lower than x. But the transaction takes time, and so the other person reads x as your account balance, which leads to inconsistent data. If we could somehow make sure that only one process occurs at a time, we could ensure consistent data.

If Process 1 and Process 2 happen at the same time, user 2 will read the wrong account balance Y, because Process 1 is still being transacted while the balance is X. Inconsistency of data can occur when various processes share a common resource in a system, which is why there is a need for process synchronization in the operating system.

How Does Process Synchronization in OS Work?


Let us take a look at why exactly we need process synchronization. For example, if process 1 is trying to read the data present in a memory location while process 2 is trying to change the data at the same location, there is a high chance that the data read by process 1 will be incorrect.

● Independent Process: The execution of one process does not affect the
execution of other processes.
● Cooperative Process: A process that can affect or be affected by other
processes executing in the system.

The process synchronization problem arises with cooperative processes, because resources are shared among them.

Let us look at the different elements/sections of a program (a code skeleton follows the list):

● Entry Section: The entry section decides the entry of a process into the critical section.
● Critical Section: The critical section ensures that only one process at a time is modifying the shared data.
● Exit Section: The exit section handles the entry of other processes into the shared data after one process finishes its execution.
● Remainder Section: The remaining part of the code, which is not categorized above, is contained in the remainder section.
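
As a sketch, the general structure of a participating process can be written as the following loop (the section names are illustrative placeholders, not a specific API):

    do {
        /* Entry section: request permission to enter the critical section */
        entry_section();

        /* Critical section: only one process modifies the shared data here */
        access_shared_data();

        /* Exit section: signal that the critical section is now free */
        exit_section();

        /* Remainder section: all the other work of the process */
        remainder_section();
    } while (1);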

Race Condition

When more than one process is executing the same code or accessing the same memory or a shared variable, there is a possibility that the output or the value of the shared variable is wrong; all the processes are racing to claim that their output is correct. This condition is known as a race condition. Several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
Example:
Let's walk through an example to understand race conditions better.
Say there are two processes P1 and P2 that share a common variable (shared = 10); both processes are in the ready queue, waiting for their turn to execute.
Suppose P1 executes first: the CPU stores the common variable (shared = 10) in P1's local variable (X = 10) and increments it by 1 (X = 11). When the CPU reaches the line sleep(1), it switches from P1 to P2 in the ready queue, and P1 goes into a waiting state for 1 second.
The CPU then executes P2 line by line: it stores the common variable (shared = 10) in P2's local variable (Y = 10) and decrements Y by 1 (Y = 9). When the CPU reaches sleep(1), P2 also goes into a waiting state, and the CPU stays idle for some time since the ready queue is empty.
After P1's 1 second is over, it re-enters the ready queue, and the CPU executes its remaining line of code, storing the local variable (X = 11) into the common variable (shared = 11). The CPU then idles again.
After P2's 1 second is over, it re-enters the ready queue, and the CPU executes its remaining line, storing the local variable (Y = 9) into the common variable (shared = 9).
Initially shared = 10

Process 1            Process 2
int X = shared       int Y = shared
X++                  Y--
sleep(1)             sleep(1)
shared = X           shared = Y
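
As a minimal sketch, the same race can be reproduced with two POSIX threads standing in for the two processes; the sleep(1) calls force the interleaving described above, so the program prints 9 or 11 depending on which write-back happens last, never the expected 10:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    int shared = 10;                 /* the common variable */

    void *p1(void *arg) {
        int x = shared;              /* X = 10 */
        x++;                         /* X = 11 */
        sleep(1);                    /* yield so the other thread reads the old value */
        shared = x;                  /* write back 11 */
        return NULL;
    }

    void *p2(void *arg) {
        int y = shared;              /* Y = 10 */
        y--;                         /* Y = 9 */
        sleep(1);
        shared = y;                  /* write back 9 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);   /* 9 or 11, not 10 */
        return 0;
    }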

Requirements of Synchronization
The following three requirements must be met by a solution to the critical section problem:

● Mutual Exclusion: If a process is running in the critical section, no other process should be allowed to run in that section at the same time.
● Progress: If no process is in the critical section and other processes are waiting outside it to execute, then one of those waiting processes must be permitted to enter. The decision of which process enters the critical section is taken only by processes that are not executing in their remainder section.
● No Starvation: Starvation means a process keeps waiting forever to access the critical section but never gets the chance. No starvation is also known as bounded waiting.
○ A process should not wait forever to enter the critical section.
○ When a process submits a request to access its critical section, there should be a limit, or bound, on the number of other processes that are allowed to access the critical section before it.
○ After this bound is reached, the process must be allowed to access the critical section.

Let us now discuss some of the solutions to the Critical Section Problem.

Note: The expected final value of the common variable (shared) after Process P1 and Process P2 execute is 10 (Process P1 increments the variable from shared = 10 to 11, and Process P2 decrements it from 11 back to 10). But we get an undesired value due to the lack of proper synchronization.

Actual meaning of race condition

● If P1's write-back completes first and P2's completes last (first P1 -> then P2), the final value of the common variable (shared) is 9.
● If P2's write-back completes first and P1's completes last (first P2 -> then P1), the final value of the common variable (shared) is 11.
● Here the values 9 and 11 are racing: if we execute these two processes on our computer system, sometimes we will get 9 and sometimes we will get 11 as the final value of the common variable (shared). This phenomenon is called a race condition.

Critical Section Problem


A critical section is a code segment that can be accessed by only one process at a time.
The critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
● Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
● Progress: If no process is executing in the critical section and other processes
are waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will
enter the critical section next, and the selection cannot be postponed
indefinitely.
● Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.

Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
● boolean flag[i]: Initialized to FALSE, initially no one is interested in entering
the critical section
● int turn: The process whose turn it is to enter the critical section.

Peterson’s Solution preserves all three conditions:

● Mutual Exclusion is assured, as only one process can access the critical section at any time.
● Progress is assured, as a process outside the critical section does not block other processes from entering it.
● Bounded Waiting is preserved, as every process gets a fair chance.
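
A minimal sketch of Peterson's algorithm for two processes numbered 0 and 1, using the two shared variables described above (note that on modern out-of-order processors this pure software solution additionally needs memory barriers or atomic accesses):

    int turn;                    /* whose turn it is to enter */
    int flag[2] = {0, 0};        /* flag[i] = 1 means process i wants to enter */

    void enter_critical(int i) { /* entry section for process i */
        int j = 1 - i;           /* index of the other process */
        flag[i] = 1;             /* declare interest */
        turn = j;                /* politely give the other process priority */
        while (flag[j] && turn == j)
            ;                    /* busy-wait while the other is interested and has the turn */
    }

    void leave_critical(int i) { /* exit section for process i */
        flag[i] = 0;             /* no longer interested */
    }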

Semaphores in Process Synchronization

Semaphores are just normal variables used to coordinate the activities of multiple
processes in a computer system. They are used to enforce mutual exclusion, avoid race
conditions, and implement synchronization between processes.
Semaphores provide two operations: wait (P) and signal (V). The wait operation decrements the value of the semaphore, and the signal operation increments it. When the value of the semaphore is zero, any process that performs a wait operation will be blocked until another process performs a signal operation.

Semaphores are used to implement critical sections, which are regions of code that must
be executed by only one process at a time. By using semaphores, processes can
coordinate access to shared resources, such as shared memory or I/O devices.

A semaphore is a special kind of synchronization data that can be used only through
specific synchronization primitives. When a process performs a wait operation on a
semaphore, the operation checks whether the value of the semaphore is >0. If so, it
decrements the value of the semaphore and lets the process continue its execution;
otherwise, it blocks the process on the semaphore. A signal operation on a semaphore
activates a process blocked on the semaphore if any, or increments the value of the
semaphore by 1. Due to these semantics, semaphores are also called counting
semaphores. The initial value of a semaphore determines how many processes can get
past the wait operation.
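
The semantics above can be sketched as follows; block_on, has_blocked_process, and wakeup_one are hypothetical kernel helpers, and each operation's body is assumed to execute atomically:

    typedef struct {
        int value;                    /* current semaphore count */
    } semaphore;

    void wait_op(semaphore *s) {      /* the P operation */
        if (s->value > 0)
            s->value--;               /* let the caller continue */
        else
            block_on(s);              /* hypothetical: block the caller on s */
    }

    void signal_op(semaphore *s) {    /* the V operation */
        if (has_blocked_process(s))   /* hypothetical: any process blocked on s? */
            wakeup_one(s);            /* hypothetical: activate one of them */
        else
            s->value++;               /* otherwise just increment the count */
    }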

Semaphores are of two types:

1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1. Its value is initialized to 1. It is used to implement solutions to the critical section problem with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
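
As an illustrative sketch of a counting semaphore, the real POSIX API can guard a resource with, say, three identical instances (the initial value 3 is an assumption for the example):

    #include <semaphore.h>

    sem_t pool;                   /* guards a resource with 3 instances      */
                                  /* once at startup: sem_init(&pool, 0, 3); */

    void use_resource(void) {
        sem_wait(&pool);          /* blocks when all 3 instances are in use */
        /* ... use one instance of the resource ... */
        sem_post(&pool);          /* return the instance to the pool */
    }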

Now let us see how a semaphore implements mutual exclusion. First, look at the two operations that can be used to access and change the value of the semaphore variable.

Some points regarding the P and V operations:

1. The P operation is also called the wait, sleep, or down operation, and the V operation is also called the signal, wake-up, or up operation.
2. Both operations are atomic, and for mutual exclusion the semaphore (s) is initialized to 1. Here atomic means that the read, modify, and update of the variable happen together, with no preemption, i.e., no other operation that may change the variable is performed between the read, modify, and update.
3. A critical section is surrounded by both operations to implement process synchronization: the critical section of a process sits between the P and V operations.
Now, let us see how this implements mutual exclusion. Let there be two processes P1 and P2, and a semaphore s initialized to 1. If P1 enters its critical section, the value of semaphore s becomes 0. If P2 now wants to enter its critical section, it must wait until s > 0, which can only happen when P1 finishes its critical section and calls the V operation on semaphore s. This is how mutual exclusion is achieved with a binary semaphore.
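
With the real POSIX semaphore API, the P1/P2 scenario looks like this sketch (sem_wait is the P operation, sem_post the V operation):

    #include <semaphore.h>

    sem_t s;                      /* shared by P1 and P2                  */
                                  /* once at startup: sem_init(&s, 0, 1); */

    void process(void) {
        sem_wait(&s);             /* P: s becomes 0; any other caller blocks here  */
        /* ... critical section: access the shared data ... */
        sem_post(&s);             /* V: s becomes 1; one waiting process may enter */
    }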
Deadlock

A deadlock is a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process.
For example, suppose process T0 holds resource 1 and requires resource 2 in order to finish its execution, while process T1 holds resource 2 and needs resource 1 to finish its execution. T0 and T1 are in a deadlock, because each of them needs the other's resource to complete its execution, but neither of them is willing to give up its own resource.

In general, a process must request a resource before using it and must release the resource after using it. Any process may request as many resources as it requires to complete its designated task, with the condition that the number of resources requested may not exceed the total number of resources available in the system.

Basically, in the normal mode of operation, a process utilizes a resource in the following sequence:

1. Request: First, the process requests the resource. If the request cannot be granted immediately (e.g., the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
2. Use: The process operates on the resource (e.g., if the resource is a printer, the process can print on it).
3. Release: The process releases the resource.

Let us take a look at the differences between starvation and deadlock.

Starvation vs Deadlock

Starvation | Deadlock
When all the low-priority processes are blocked while the high-priority processes execute, the situation is termed starvation. | Deadlock is a situation that occurs when processes block each other, each holding a resource and waiting for one held by another.
Starvation is a long wait, but it is not infinite. | Deadlock is an infinite wait.
Not every starvation is a deadlock. | There is starvation in every deadlock.
Starvation is due to uncontrolled priority and resource management. | Deadlock occurs when mutual exclusion, hold and wait, no preemption, and circular wait all hold simultaneously.

The occurrence of deadlock can be detected by the resource scheduler.

Now in the next section, we will cover the conditions that are
required to cause deadlock.

Necessary Conditions
A deadlock situation can arise only if all four of the following conditions hold simultaneously:

1. Mutual Exclusion

According to this condition, at least one resource must be non-shareable (non-shareable resources are those that can be used by only one process at a time).

2. Hold and Wait

According to this condition, a process is holding at least one resource and is waiting for additional resources.

3. No Preemption

Resources cannot be taken from a process, because resources can be released only voluntarily by the process holding them.

4. Circular Wait

In this condition, a set of processes are waiting for each other in circular form.

The above four conditions are not completely independent, as the circular wait condition implies the hold and wait condition. We emphasize that all four conditions must hold for a deadlock to occur.
Deadlocks can be handled with the help of a number of methods. Let us take a look at some of them.

Methods For Handling Deadlocks

The methods used to handle the problem of deadlocks are as follows:

1. Ignoring the Deadlock

In this method, it is assumed that a deadlock will never occur. Many operating systems take this approach: they assume deadlock will never happen and simply ignore it. This approach can be beneficial for systems used only for browsing and other routine tasks. Thus, ignoring the deadlock can be useful in many cases, but it is not a complete way to remove deadlock from the operating system.

2. Deadlock Prevention

As discussed in the previous section, if all four conditions (mutual exclusion, hold and wait, no preemption, and circular wait) hold in a system, a deadlock can occur. The main aim of the deadlock prevention method is to violate any one of the four conditions; if any one condition is violated, the problem of deadlock can never occur. The idea behind this method is simple, but the difficulty lies in its physical implementation in the system.
3. Avoiding the Deadlock

In this method, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. A process continues its execution as long as the system remains in a safe state; once the system would enter an unsafe state, the operating system has to step back.

Basically, with the help of this method, the operating system keeps an eye on each allocation and makes sure that the allocation does not cause any deadlock in the system.

4. Deadlock Detection and Recovery

In this method, a deadlock is first detected using algorithms on the resource-allocation graph, which represents the allocation of the various resources to the different processes. After a deadlock is detected, a number of methods can be used to recover from it.

One way is preemption, in which a resource held by one process is given to another process.

The second way is rollback: the operating system keeps a record of each process's state, so it can roll a process back to a previous state and thereby eliminate the deadlock situation.

The third way to overcome the deadlock situation is to kill one or more processes.
In our upcoming tutorial, we will cover each method in detail one by one.
In our upcoming tutorial, we will cover each method in detail
one by one.
(Figure: Example of a Resource-Allocation Graph)

(Figure: Resource-Allocation Graph With a Deadlock)

(Figure: Graph With a Cycle But No Deadlock)

Safe State
When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
•The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
•That is:
•If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
•When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and terminate.
•When Pi terminates, Pi+1 can obtain its needed resources, and so on.

Basic Facts
•If a system is in a safe state ⇒ no deadlocks.

•If a system is in an unsafe state ⇒ possibility of deadlock.

•Avoidance ⇒ ensure that the system will never enter an unsafe state.

(Figure: Safe, Unsafe, Deadlock State)

Avoidance Algorithms
•Single instance of each resource type: use a resource-allocation graph.

•Multiple instances of a resource type: use the banker's algorithm.

Resource-Allocation Graph Scheme

•A claim edge Pi → Rj indicates that process Pi may request resource Rj; it is represented by a dashed line.
•A claim edge converts to a request edge when the process requests the resource.
•A request edge is converted to an assignment edge when the resource is allocated to the process.
•When a resource is released by a process, the assignment edge reconverts to a claim edge.
•Resources must be claimed a priori in the system.

(Figure: Resource-Allocation Graph)
(Figure: Unsafe State in a Resource-Allocation Graph)

Resource-Allocation Graph Algorithm

Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource-allocation graph.
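
A sketch of that check: tentatively convert the request edge into an assignment edge, then test the graph for a cycle with a depth-first search (the node numbering and adjacency-matrix representation here are assumptions for illustration):

    #include <stdbool.h>

    #define N 16                       /* total nodes: processes + resources */
    bool edge[N][N];                   /* edge[u][v]: request or assignment edge u -> v */

    static bool dfs(int u, bool visited[], bool on_stack[]) {
        visited[u] = on_stack[u] = true;
        for (int v = 0; v < N; v++) {
            if (!edge[u][v]) continue;
            if (on_stack[v]) return true;   /* back edge: a cycle exists */
            if (!visited[v] && dfs(v, visited, on_stack)) return true;
        }
        on_stack[u] = false;
        return false;
    }

    bool has_cycle(void) {             /* grant the request only if this stays false */
        bool visited[N] = {false}, on_stack[N] = {false};
        for (int u = 0; u < N; u++)
            if (!visited[u] && dfs(u, visited, on_stack))
                return true;
        return false;
    }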

Banker’s Algorithm
•Multiple instances
•Each process must declare its maximum resource use (a
priori)
•When a process requests a resource it may have to wait
•When a process gets all its resources it must return them in
a finite amount of time

Data Structures for the Banker’s Algorithm

Let n = number of processes, and m = number of resource types.

•Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
•Max: n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
•Allocation: n × m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
•Need: n × m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need [i,j] = Max[i,j] – Allocation [i,j]
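
In C, these data structures can be declared directly; the concrete values of n and m below are assumptions for illustration:

    #define n 5                 /* number of processes                       */
    #define m 3                 /* number of resource types                  */

    int Available[m];           /* Available[j]: free instances of Rj        */
    int Max[n][m];              /* Max[i][j]: maximum demand of Pi for Rj    */
    int Allocation[n][m];       /* Allocation[i][j]: Rj currently held by Pi */
    int Need[n][m];             /* Need[i][j] = Max[i][j] - Allocation[i][j] */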

Safety Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n-1
2. Find an i such that both:
(a) Finish[i] = false
(b) Need_i ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
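
A direct transcription of the four steps into C, reusing the declarations sketched above:

    #include <stdbool.h>

    bool is_safe(void) {
        int Work[m];
        bool Finish[n] = {false};

        /* Step 1: Work = Available, Finish[i] = false for all i */
        for (int j = 0; j < m; j++)
            Work[j] = Available[j];

        /* Steps 2-3: repeatedly find an unfinished Pi with Need_i <= Work */
        bool found = true;
        while (found) {
            found = false;
            for (int i = 0; i < n; i++) {
                if (Finish[i]) continue;
                bool fits = true;
                for (int j = 0; j < m; j++)
                    if (Need[i][j] > Work[j]) { fits = false; break; }
                if (fits) {
                    for (int j = 0; j < m; j++)
                        Work[j] += Allocation[i][j];  /* Pi finishes and releases */
                    Finish[i] = true;
                    found = true;
                }
            }
        }

        /* Step 4: safe iff every process could finish */
        for (int i = 0; i < n; i++)
            if (!Finish[i]) return false;
        return true;
    }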

Resource-Request Algorithm for Process Pi

Let Request_i be the request vector for process Pi. If Request_i[j] = k, then process Pi wants k instances of resource type Rj.
1. If Request_i ≤ Need_i, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Request_i ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i – Request_i
•If safe ⇒ the resources are allocated to Pi.
•If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
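
The same three steps in C, with the tentative allocation and the safety check from above; the function reports whether the request can be granted and restores the old state when it cannot:

    /* Returns true if the request of process i is granted; false if Pi must
       wait (or if the request exceeds Pi's maximum claim, the error case). */
    bool request_resources(int i, const int Request[m]) {
        /* Step 1: the request must not exceed the declared maximum claim */
        for (int j = 0; j < m; j++)
            if (Request[j] > Need[i][j]) return false;   /* error condition */

        /* Step 2: the resources must currently be available */
        for (int j = 0; j < m; j++)
            if (Request[j] > Available[j]) return false; /* Pi must wait */

        /* Step 3: pretend to allocate, then test for safety */
        for (int j = 0; j < m; j++) {
            Available[j]     -= Request[j];
            Allocation[i][j] += Request[j];
            Need[i][j]       -= Request[j];
        }
        if (is_safe()) return true;                      /* safe: grant */

        /* Unsafe: restore the old resource-allocation state; Pi must wait */
        for (int j = 0; j < m; j++) {
            Available[j]     += Request[j];
            Allocation[i][j] -= Request[j];
            Need[i][j]       += Request[j];
        }
        return false;
    }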
