Os Unit 2 Sem II Bca-1
Synchronization
• Process synchronization in an operating system (OS) ensures that
multiple processes can execute concurrently while maintaining
consistency and avoiding conflicts, especially when they access shared
resources.
• It is crucial for preventing race conditions, ensuring data integrity, and
improving system performance.
Why is Process Synchronization Needed?
• When multiple processes access shared data or resources,
inconsistency may occur due to race conditions.
• Without proper synchronization, one process may read or modify
data before another process completes its task, leading to incorrect
results.
• It helps in coordinating process execution to ensure that critical
sections (where shared resources are accessed) do not lead to
deadlocks or data corruption.
Critical Section Problem
• The Critical Section Problem arises in process synchronization, where
multiple processes attempt to access shared resources simultaneously. If
not handled properly, it can lead to race conditions, data inconsistency,
and system failures.
• Consider system of n processes {p0, p1, … pn-1}
• Each process has critical section segment of code
– Process may be changing common variables, updating table,
writing file, etc
– When one process in critical section, no other may be in its critical
section
• Each process must ask permission to enter critical
section in entry section, may follow critical section with
exit section, then remainder section
• General structure of process Pi
• Critical Section is the part of a program which tries to access shared resources.
That resource may be any resource in a computer like a memory location, CPU
or any I/O device.
• The critical section cannot be executed by more than one process at the same
time.
• Operating system faces the difficulties in allowing and disallowing the processes
from entering the critical section.
• The critical section problem is used to design a set of protocols which can ensure
that the Race condition among the processes will never arise.
Race Condition
• A race condition happens when multiple processes (or threads) try to
change/access the same data at the same time.
• The final result depends on which process finishes first, which can
lead to incorrect or unpredictable results.
Problem:
• If processes (or threads) modify shared data without coordination, it
can lead to incorrect values.This can cause software bugs, incorrect
calculations, or system crashes.
How to Fix it? (Synchronization)
• Ensure that only one process at a time can modify the shared data.
• Use locks or synchronization techniques to control access and avoid
conflicts.
Critical Section handling in OS
1. Test-and-Set (TAS)
• Atomically reads the lock variable and sets it to true in one indivisible step, returning the old value.
• Mutual-Exclusion Implementation with test_and_set()
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
2. Compare-and-Swap (CAS)
• Atomically compares a memory location with an expected value and swaps
it if they match.
• Used in lock-free algorithms and multithreading libraries
• Mutual-Exclusion Implementation with compare_and_swap()
do {
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */
/* critical section */
lock = 0;
/* remainder section */
} while (true);
3. Bounded-waiting mutual exclusion with test_and_set()
• Ensures that every process gets a turn within a finite number of attempts, preventing starvation.
do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);
Mutex Locks
• A Mutex (Mutual Exclusion) Lock is a synchronization mechanism used in
operating systems to ensure that only one process or thread can access a
shared resource (critical section) at a time.
• It prevents race conditions by allowing a process to lock the resource
before using it and unlock it after use.
How Mutex Locks Work?
1.Lock (Acquire): A process/thread requests access by locking the mutex.
2.Critical Section: The process executes its task safely without interference.
3.Unlock (Release): Once done, the process unlocks the mutex, allowing
others to access the resource.
• Definition of acquire() follows:
acquire() {
while (!available)
; /* busy wait */
available = false;
}
• Definition of release() follows:
release() {
available = true;
}
Solution to the critical-section
problem using mutex locks
do {
acquire lock
critical section
release lock
remainder section
} while (true);
• Mutex locks can cause busy waiting if implemented as spinlocks,
where a process continuously checks for lock availability instead of
sleeping.
• This leads to CPU wastage, reducing system efficiency, especially
when the waiting time for the critical section is long.
Semaphores
• A semaphore is a synchronization mechanism used in operating systems
and concurrent programming to control access to shared resources.
• A semaphore S is an integer variable . It can be initialized or accessed
only through two standard atomic operations:
1.wait()->wait(S) (also called P() operation)
2.signal()->signal(S) (also called V() operation)
Definition of wait()
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
Definition of signal()
signal(S) {
S++;
}
Semaphore Usage:
• A counting semaphore is a synchronization mechanism where the integer value can
range over an unrestricted domain. It is used to manage access to multiple instances
of a shared resource.
• A binary semaphore is a special type of semaphore where the integer value can only
be 0 or 1, making it functionally similar to a mutex lock. It is primarily used for
mutual exclusion and synchronization.
• Semaphores can be used to solve various synchronization problems in concurrent
programming.
• Consider two processes, P1 and P2, where S1 must execute before S2. This order of
execution can be enforced using a semaphore named "synch", which is initialized to
0.
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;
Semaphore Implementation:
• It must be ensured that no two processes execute the wait() and
signal() operations on the same semaphore simultaneously.
• Since semaphores are used for synchronization, their
implementation itself becomes a critical section problem, where
the wait() and signal() operations must be placed within a critical
section.
• This approach may lead to busy waiting, where a process
continuously checks for resource availability instead of being put
to sleep.
• However, if the critical section is short, the overhead of busy
waiting is minimal.
• In cases where applications frequently enter critical sections, busy
waiting is inefficient and should be avoided.
Operations on the Semaphore
In a real operating system, when a process waits and the resource is not available, it
cannot just sit in a busy-wait loop (like while(S <= 0);)—because that wastes
CPU time.
Instead, the OS uses process queues and context switching to manage access efficiently.
So, we use:
🔒 block():
• When a process cannot proceed (i.e., wait() finds S <= 0), it's put into a waiting
queue.
• The process is blocked (not running).
• It waits until some other process signals that the resource is available.
🔓 wakeup(P):
• When signal() is called and a process is waiting, it removes one process from the
waiting queue.
• That process is then moved to the ready queue so it can continue running.
typedef struct {
int value;
struct process *list;
} semaphore;
• wait() semaphore operation can be:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block(); // Suspend process
}
}
• signal() semaphore operation can be:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P); // Resume process P
}
}
Deadlock and Starvation:
• Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only
one of the waiting processes.
• Let S and Q be two semaphores initialized to 1
• Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes
wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it
must wait until P0 executes signal(S). Since these signal() operations cannot be
executed, P0 and P1 are deadlocked.
P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);
Now We Have a Deadlock
• P0 is waiting for Q, which is held by P1
• P1 is waiting for S, which is held by P0
• Neither can proceed
• No one can call signal()
• System is stuck → DEADLOCK
• Starvation – indefinite blocking(waiting)
– A process may never be removed from the semaphore queue in which it is
suspended.
Classical Problems of Synchronization
1. The Bounded-Buffer (Producer-Consumer) Problem:
Shared data: a buffer of n slots, semaphore mutex (initialized to 1), semaphore full (initialized to 0), and semaphore empty (initialized to n).
Structure of the Producer Process:
do {
...
/* Produce an item in next_produced */
...
wait(empty); // Wait for a free buffer slot
wait(mutex); // Lock access to the buffer
...
/* Add next_produced to the buffer */
...
signal(mutex); // Unlock the buffer
signal(full); // Signal that an item is available
} while (true);
Structure of the Consumer Process:
do {
wait(full); // Ensure there is an item to consume
wait(mutex); // Lock access to the buffer
...
/* Remove an item from buffer to next_consumed */
...
signal(mutex); // Unlock the buffer
signal(empty); // Signal that a slot is free
...
/* Consume the item in next_consumed */
...
} while (true);
2.The Readers-Writers Problem:
The Readers-Writers Problem is a synchronization issue that arises
when multiple processes need to access a shared data set.
• Readers: Processes that only read the data set and do not modify it.
• Writers: Processes that can both read and write to the data set.
Problem Constraints:
• Multiple readers can access the shared data simultaneously without
interference.
• Only one writer can access the shared data at any given time to
prevent data inconsistency.
• Different variations of this problem involve priority considerations
between readers and writers.
Shared Data and Semaphores:
• To ensure synchronization, the following are used:
• Semaphore wrt (initialized to 1): Ensures mutual exclusion for writers. Only
one writer can access the data at a time.
• Semaphore mutex (initialized to 1): Ensures mutual exclusion when updating
the reader count (readcount).
• Integer readcount (initialized to 0): Tracks the number of active readers.
Synchronization Rules:
• Readers should not block each other while reading, even if a writer is waiting.
• Writers should access the shared data as soon as they are ready and prevent
new readers from starting while waiting.
• The first reader locks the resource for readers using wrt.
• The last reader releases the lock, allowing a waiting writer to proceed.
Structure of a Writer Process
A writer must wait for exclusive access before performing the write
operation and then release the lock once done.
do {
wait(wrt); // Lock exclusive access for writing
// Writing is performed
signal(wrt); // Release exclusive access
} while (true);
Structure of a Reader Process
The first reader locks wrt on behalf of all readers; the last reader releases it.
do {
wait(mutex); // Protect readcount
readcount++;
if (readcount == 1)
wait(wrt); // First reader locks out writers
signal(mutex);
// Reading is performed
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt); // Last reader lets writers in
signal(mutex);
} while (true);
3. The Dining-Philosophers Problem (solution using a monitor):
monitor DiningPhilosophers {
enum { THINKING, HUNGRY, EATING } state[5];
condition self[5];
void pickup(int i) {
state[i] = HUNGRY; // Philosopher is hungry
test(i); // Check if philosopher can eat
if (state[i] != EATING) // If philosopher cannot eat, wait
self[i].wait();
}
void putdown(int i) {
state[i] = THINKING; // Philosopher finishes eating
// Test left and right neighbors to see if they can eat
test((i + 4) % 5); // Check left neighbor (i - 1)
test((i + 1) % 5); // Check right neighbor (i + 1)
}
void test(int i) {
// Check if both neighbors are not eating, and the philosopher is hungry
if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY && state[(i + 1) % 5] != EATING) {
state[i] = EATING; // Philosopher can eat now
self[i].signal(); // Wake up philosopher to eat
}}
initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING; // Initially, all philosophers are thinking
}
}
Philosopher Actions
• Each philosopher invokes the following operations to pick up and put
down chopsticks:
DiningPhilosophers.pickup(i); // Philosopher i tries to pick up chopsticks
// ... EAT ...
DiningPhilosophers.putdown(i); // Philosopher i puts down chopsticks
Synchronization Examples:
• Based on Windows:
Uniprocessor Systems: Use interrupt masks to protect access to global resources.
Multiprocessor Systems: Use spinlocks, ensuring that the spinlocking thread is
never preempted.
Dispatcher Objects in User-Land: Act as mutexes, semaphores, events, and timers.
-> Events: Function like condition variables.
-> Timers: Notify one or more threads when the set time expires.
Dispatcher Objects States:
• Signaled State: Object is available.
• Non-Signaled State: Thread will block.
Based on Linux:
•Linux Synchronization Evolution:
Before Kernel 2.6: Used interrupt disabling to implement short critical
sections.
From Kernel 2.6 Onward: Became fully preemptive.
•Linux Provides Various Synchronization Mechanisms:
Semaphores
Atomic integers
Spinlocks
Reader-writer versions of both semaphores and spinlocks
•Single-CPU Systems:
Spinlocks are replaced by enabling and disabling kernel preemption.
Atomic Variables in Linux
Atomic Integer Type: atomic_t is used to define atomic integer
variables.
Example Variables:
atomic_t counter; → Declares an atomic integer variable named counter.
int value; → Declares a normal integer variable named value.
Process Scheduling
• Process Scheduling is the method used by the operating system (OS)
to decide which process will run on the CPU next. Since multiple
processes run in a system, the OS manages them efficiently to ensure
smooth execution.
Criteria:
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
• Thread: A unit of execution within a process.
• Process: A running instance of a program that may have one or more threads.
• CPU Utilization: Keeping the CPU as busy as possible.
• Throughput: Number of processes completed per time unit.
• Turnaround Time (TAT): Total time to execute a process.
• Waiting Time (WT): Time a process spends in the ready queue.
• Response Time: Time from request submission to the first response.
• Burst Time: The total time a process needs the CPU for execution.
Deadlocks
• A deadlock is a situation in which two or more processes (or threads)
are unable to proceed because they are each waiting for a resource
held by another process, creating a cycle of dependency.
• This results in an indefinite blocking state where none of the
processes can make progress.
Deadlock Condition:
• A deadlock occurs when every process in a set is waiting for a
resource that is held by another process in the same set.
• This causes an indefinite wait where no process can proceed.
For example: Process 1 holds Resource 1 and waits for Resource 2, while Process 2
holds Resource 2 and waits for Resource 1.
System Model:
• A system consists of a finite number of resources (e.g., memory,
printers, CPUs).
• These resources are distributed among multiple processes.
• A process must:
->Request a resource before using it.
->Release the resource after using it.
• A process can request any number of resources needed for a task.
• The total number of resources requested must not exceed the total
available resources.
Normal Process Execution Sequence:
1.Request :
• If the resource is not available, the process must wait.
• Example: open(), malloc(), new(), request().
2.Use:
• The process performs operations using the resource.
• Example: Printing to a printer, reading from a file.
3.Release:
• The process releases the resource, making it available for others.
• Example: close(), free(), delete(), release().
Deadlock Characterization:
Deadlock can arise if the following four conditions hold simultaneously
Necessary Conditions:
1.Mutual Exclusion –
At least one resource must be kept in a non-shareable state; if another process
requests it, it must wait for it to be released.
2.Hold and Wait –
A process must hold at least one resource while also waiting for at least one resource
that another process is currently holding.
3.No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot
be taken away from that process until the process voluntarily releases it.
4.Circular Wait –
There must be a set of processes {P0, P1, ..., PN} such that every P[i] is waiting for
P[(i + 1) % (N + 1)]. (It is important to note that this condition implies the hold-and-
wait condition, but dealing with the four conditions is easier if they are considered
separately.)
Resource Allocation Graph (RAG)
• A Resource Allocation Graph (RAG) is a pictorial
representation of the system's state, showing how resources
are allocated to processes.
• It provides complete information about which processes
hold resources and which are waiting for resources.
• Deadlocks can be better understood using a directed graph
known as a system resource-allocation graph, which consists
of:
• Vertices (V): Represent processes and resources.
• Edges (E): Represent relationships between processes and
resources.
Components of RAG
1.Nodes (Vertices):
1.Processes (P1, P2, ..., Pn): Represented as circles (○), indicating all
active processes in the system.
2.Resources (R1, R2, ..., Rm): Represented as rectangles (□),
indicating different types of resources in the system.
2.Edges (Arrows):
1.Request Edge (Pi → Rj):
1.Indicates that process Pi has requested resource Rj and is waiting for it.
2.Assignment Edge (Rj → Pi):
1.Indicates that resource Rj has been allocated to process Pi, meaning the
process is using the resource.
How RAG Works?
• Each process (Pi) is represented as a circle (○).
• Each resource (Rj) is represented as a rectangle (□).
• If a resource has multiple instances, each instance is represented as a dot
inside the rectangle.
• A request edge (Pi → Rj) points to the rectangle (resource type), indicating
that the process is waiting for the resource.
• An assignment edge (Rj → Pi) points to one of the dots in the rectangle,
indicating that the process is using a specific instance of the resource.
• When a process requests a resource, a request edge is added to the graph.
• If the resource is available, the request edge is converted into an assignment
edge.
• When the process releases the resource, the assignment edge is removed
from the graph.
RAG with deadlock
Step-by-Step Analysis of the Graph
1.P1 is holding an instance of R2 and requesting R1.
2.P2 is holding R1 and R2 and requesting R3.
3.P3 is holding R3 and requesting R2.
4.R4 is not involved in any allocation.
RAG with Cycle, No Deadlock
Step-by-Step Analysis
1.P1 is requesting R2, while P2 and P3 are holding instances of R1.
2.P4 is requesting R2, and P3 is holding an instance of R1.
3.P1 is also holding an instance of R1, and R2 is allocated to P3 and P4.
Methods for handling deadlocks:
Deadlocks occur when a group of processes is stuck in a circular wait,
each waiting for a resource held by another process.
There are three main strategies to handle deadlocks:
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection & Recovery
1.Deadlock Prevention
Deadlock prevention ensures that at least one of the four necessary
conditions for deadlock does not hold. The four conditions are Mutual
Exclusion, Hold and Wait, No Preemption, and Circular Wait. Below
are the strategies to prevent each condition:
1. Mutual Exclusion (ME)
• Mutual exclusion is required for non-sharable resources.
• Example: A printer cannot be shared by multiple processes at the same time.
• Mutual exclusion is not required for sharable resources, meaning
they cannot be involved in deadlock.
• Example: Read-only files can be accessed by multiple processes
simultaneously.
• Since some resources are inherently non-sharable, deadlock cannot
be prevented by removing mutual exclusion.
2. Hold and Wait (Avoiding Hold and Wait)
A process should not hold any resources while waiting for additional
ones. This can be achieved using the following protocols:
Protocol 1: Request All Resources at the Beginning
• A process must request all required resources before starting
execution.
• Example: If a process needs a DVD drive, disk file, and printer, it
must request all at the start, even if it only needs the printer at the
end.
• Disadvantage: Leads to low resource utilization, as resources may
remain unused for long periods.
Protocol 2: Release Resources Before Requesting New Ones
• A process can request resources only if it holds none.
• Example:
• A process first requests a DVD drive and disk file, copies data, and then
releases both.
• It then requests a disk file and printer, prints data, and releases them.
• Disadvantage: Can lead to starvation, as some processes may wait
indefinitely for popular resources.
3. No Preemption
• If a process holding resources requests another unavailable resource, it must
release all currently held resources before waiting.
• Preempted resources are added to the list of requested resources.
• The process resumes execution only when it reacquires both the preempted
resources and the newly requested ones.
Steps for No Preemption Strategy:
1.If a process requests a resource:
1. If available → allocate.
2. If unavailable but held by another waiting process → preempt the resource and
allocate it.
3. If unavailable and held by an active process → requesting process must wait.
2.While waiting, its resources may be preempted if another process requests
them.
3.The process resumes only when it reacquires both old and newly requested
resources.
4. Circular Wait (Breaking Circular Wait Condition)
• To prevent circular wait, a system enforces an ordering of resource
requests.
• Method: Assign a unique number to each resource and require processes
to request resources in increasing numerical order.
Example:
• Suppose resources are numbered:
• R1 = 1, R2 = 2, R3 = 3, ...
• A process holding R2 can only request R3 or higher but not R1.
• This prevents circular dependency, breaking the cycle that leads to
deadlock.
Challenge:
• Deciding the correct ordering of resources for efficient resource allocation.
Deadlock Avoidance:
Deadlock avoidance requires the system to have prior knowledge
about resource requests to ensure that a deadlock never occurs. The
simplest and most useful approach requires each process to declare
the maximum number of resources it may need.
A deadlock-avoidance algorithm continuously checks the resource-
allocation state to prevent circular waiting. The resource-allocation
state consists of:
• The number of available resources.
• The number of allocated resources.
• The maximum demand of each process.
Safe State:
• When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL
the processes in the system such that, for each Pi, the resources that Pi
can still request can be satisfied by the currently available resources plus the
resources held by all Pj, with j < i.
• That is:
– If Pi's resource needs are not immediately available, then Pi can wait until all Pj (j < i) have
finished.
– When Pj is finished, Pi can obtain the needed resources, execute, return the allocated
resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
• If a system is in safe state ⇒ no deadlocks
• A claim edge (Pi → Rj) indicates that process Pi may request resource
Rj; represented by a dashed line.
• When Pi requests Rj, the claim edge converts to a request edge.
• If the request is granted, the request edge converts to an assignment
edge.
• Once the resource is released, the assignment edge reconverts to a
claim edge.
• All claim edges must be present before a process can request any
resource.
• A request is granted only if it does not create a cycle in the resource-
allocation graph.
• If granting the request would create a cycle, the request is denied.
Resource-Allocation Graph for deadlock avoidance