OS Unit 2 (Sem II, BCA)

Process synchronization in operating systems is essential for ensuring that multiple processes can execute concurrently without conflicts, particularly when accessing shared resources, thus preventing race conditions and maintaining data integrity. Key concepts include the critical section problem, where processes must coordinate access to shared resources, and various synchronization mechanisms like mutex locks and semaphores that help manage access. Classic synchronization problems such as the Producer-Consumer and Readers-Writers problems illustrate the challenges and solutions in coordinating process execution.

Process Synchronization
• Process synchronization in an operating system (OS) ensures that
multiple processes can execute concurrently while maintaining
consistency and avoiding conflicts, especially when they access shared
resources.
• It is crucial for preventing race conditions, ensuring data integrity, and
improving system performance.
Why is Process Synchronization Needed?
• When multiple processes access shared data or resources,
inconsistency may occur due to race conditions.
• Without proper synchronization, one process may read or modify
data before another process completes its task, leading to incorrect
results.
• It helps in coordinating process execution to ensure that critical
sections (where shared resources are accessed) do not lead to
deadlocks or data corruption.
Critical Section Problem
• The Critical Section Problem arises in process synchronization, where
multiple processes attempt to access shared resources simultaneously. If
not handled properly, it can lead to race conditions, data inconsistency,
and system failures.
• Consider a system of n processes {P0, P1, …, Pn-1}.
• Each process has a critical-section segment of code:
– The process may be changing common variables, updating a table, writing a file, etc.
– When one process is in its critical section, no other process may be in its critical section.
• Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section.
• General structure of process Pi:
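A sketch of the general structure just described, in the same pseudocode style used later in these notes:

do {
    entry section        /* ask permission to enter */
        critical section
    exit section         /* announce leaving */
        remainder section
} while (true);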
• The critical section is the part of a program that accesses shared resources. The resource may be any resource in the computer, such as a memory location, the CPU, or an I/O device.
• The critical section cannot be executed by more than one process at the same time.
• The operating system faces difficulty in deciding when to allow processes into the critical section and when to keep them out.
• The critical-section problem is about designing a set of protocols that ensure a race condition among the processes can never arise.
Race Condition
• A race condition happens when multiple processes (or threads) try to
change/access the same data at the same time.
• The final result depends on which process finishes first, which can
lead to incorrect or unpredictable results.
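As an illustration, a minimal C-style sketch of the classic lost-update race (names are illustrative; count++ is expanded into the load/add/store steps where the race occurs):

int count = 0;                /* shared variable */

void increment(void) {        /* run concurrently by two processes/threads */
    int tmp = count;          /* 1. load the shared value */
    tmp = tmp + 1;            /* 2. add one to the private copy */
    count = tmp;              /* 3. store: may overwrite the other thread's update */
}

If both threads load count = 5 before either stores, both write back 6 and one increment is lost; the final value depends on which thread finishes last, which is exactly the race condition described above.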
Problem:
• If processes (or threads) modify shared data without coordination, it
can lead to incorrect values.This can cause software bugs, incorrect
calculations, or system crashes.
How to Fix it? (Synchronization)
• Ensure that only one process at a time can modify the shared data.
• Use locks or synchronization techniques to control access and avoid
conflicts.
Critical Section handling in OS
Two approaches, depending on whether the kernel is preemptive or non-preemptive:
– Preemptive – allows preemption of a process while it is running in kernel mode
– Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU
Producer / Consumer (example figure)
Solution to the Critical Section Problem
A solution must satisfy three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Peterson's Solution
• Peterson's solution is a classic software-based algorithm for achieving mutual exclusion in a critical section for two processes.
• It ensures that only one process can enter its critical section at a time, avoiding race conditions.
• Peterson's solution requires the two processes to share two data items: an integer turn and a boolean array flag[2].
The structure of process Pi in Peterson's solution:
Peterson’s Algorithm
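A minimal C-style sketch for process Pi, where j = 1 - i denotes the other process; flag and turn are the two shared data items named above:

boolean flag[2];   /* flag[i] == true: Pi wants to enter its critical section */
int turn;          /* whose turn it is when both want to enter */

do {
    flag[i] = true;                  /* announce intent to enter */
    turn = j;                        /* yield priority to the other process */
    while (flag[j] && turn == j)
        ;                            /* busy-wait while Pj wants in and it is Pj's turn */
    /* critical section */
    flag[i] = false;                 /* done: allow Pj to proceed */
    /* remainder section */
} while (true);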
Explanation
• Initial Setup: Both processes set flag values to false, and turn is set to one of the
process IDs.
• Intention to Enter: A process sets its flag to true to indicate it wants to enter the
critical section.
• Set the Turn: The process sets the turn variable to its own ID, indicating it is ready to
enter.
• Waiting Loop: The process waits if the other process also wants to enter and it’s the
other process’s turn.
• Critical Section: The process enters the critical section once it successfully exits the
waiting loop.
• Exiting the Critical Section: The process resets its flag to false, allowing the other
process to enter.
Peterson's solution satisfies all three requirements:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
Synchronization Hardware
• Synchronization hardware consists of special instructions built into a
computer’s processor that help multiple programs or processes
coordinate safely when sharing resources.
• These instructions make sure that only one process at a time can
access important data, preventing problems like data corruption or
unexpected results when multiple programs try to update the same
information at once.
Types of Synchronization Hardware Mechanisms
1. Test-and-Set (TAS) Lock:
• Uses a special atomic instruction to check and modify a value in memory.
• Prevents multiple processes from entering the critical section.
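For reference, the standard textbook definition of test_and_set(), which the hardware executes as a single atomic (uninterruptible) instruction:

boolean test_and_set(boolean *target) {
    boolean rv = *target;   /* remember the old value */
    *target = true;         /* unconditionally set the lock */
    return rv;              /* old value: false means the caller acquired the lock */
}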
• Mutual-Exclusion Implementation with test_and_set()

do {
    while (test_and_set(&lock))
        ;   /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
2. Compare-and-Swap (CAS)
• Atomically compares a memory location with an expected value and swaps
it if they match.
• Used in lock-free algorithms and multithreading libraries
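For reference, the standard textbook definition of compare_and_swap(), also executed atomically by the hardware:

int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;         /* remember the old value */
    if (*value == expected)
        *value = new_value;    /* swap only if the expected value was found */
    return temp;               /* old value: 0 means the caller acquired the lock */
}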
• Mutual-Exclusion Implementation with compare_and_swap()
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;   /* do nothing */
    /* critical section */
    lock = 0;
    /* remainder section */
} while (true);
3. Bounded-Waiting Mutual Exclusion with test_and_set()
• Ensures that every process gets a turn within a finite number of attempts, preventing starvation.
do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
    /* critical section */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
    /* remainder section */
} while (true);
Mutex Locks
• A Mutex (Mutual Exclusion) Lock is a synchronization mechanism used in
operating systems to ensure that only one process or thread can access a
shared resource (critical section) at a time.
• It prevents race conditions by allowing a process to lock the resource
before using it and unlock it after use.
How Mutex Locks Work?
1.Lock (Acquire): A process/thread requests access by locking the mutex.
2.Critical Section: The process executes its task safely without interference.
3.Unlock (Release): Once done, the process unlocks the mutex, allowing
others to access the resource.
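As a concrete illustration of these three steps, a minimal POSIX-threads sketch (pthread_mutex_lock and pthread_mutex_unlock are the standard pthreads calls; worker and shared_counter are illustrative names):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;        /* shared resource */

void *worker(void *arg) {
    pthread_mutex_lock(&m);    /* 1. Lock (acquire): blocks until the mutex is free */
    shared_counter++;          /* 2. Critical section: update safely */
    pthread_mutex_unlock(&m);  /* 3. Unlock (release): let other threads proceed */
    return NULL;
}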
• Definition of acquire() follows:
acquire() {
    while (!available)
        ;   /* busy wait */
    available = false;
}
• Definition of release() follows:
release() {
    available = true;
}
Solution to the critical-section problem using mutex locks:
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);
• Mutex locks can cause busy waiting if implemented as spinlocks,
where a process continuously checks for lock availability instead of
sleeping.
• This leads to CPU wastage, reducing system efficiency, especially
when the waiting time for the critical section is long.
Semaphores
• A semaphore is a synchronization mechanism used in operating systems and concurrent programming to control access to shared resources.
• A semaphore S is an integer variable. Apart from initialization, it can be accessed only through two standard atomic operations:
1. wait(S) (also called the P() operation)
2. signal(S) (also called the V() operation)
Definition of wait():
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

Definition of signal():
signal(S) {
    S++;
}
Semaphore Usage:
• A counting semaphore is a synchronization mechanism where the integer value can
range over an unrestricted domain. It is used to manage access to multiple instances
of a shared resource.
• A binary semaphore is a special type of semaphore where the integer value can only
be 0 or 1, making it functionally similar to a mutex lock. It is primarily used for
mutual exclusion and synchronization.
• Semaphores can be used to solve various synchronization problems in concurrent
programming.
• Consider two processes, P1 and P2, where S1 must execute before S2. This order of
execution can be enforced using a semaphore named "synch", which is initialized to
0.
P1:
S1;
signal(synch);

P2:
wait(synch);
S2;
Semaphore Implementation:
• It must be ensured that no two processes execute the wait() and
signal() operations on the same semaphore simultaneously.
• Since semaphores are used for synchronization, their
implementation itself becomes a critical section problem, where
the wait() and signal() operations must be placed within a critical
section.
• This approach may lead to busy waiting, where a process
continuously checks for resource availability instead of being put
to sleep.
• However, if the critical section is short, the overhead of busy
waiting is minimal.
• In cases where applications frequently enter critical sections, busy
waiting is inefficient and should be avoided.
Operations on the Semaphore
In a real operating system, when a process waits and the resource is not available, it
cannot just sit in a busy-wait loop (like while(S <= 0);)—because that wastes
CPU time.
Instead, the OS uses process queues and context switching to manage access efficiently.
So, we use:
🔒 block():
• When a process cannot proceed (i.e., wait() finds S <= 0), it's put into a waiting
queue.
• The process is blocked (not running).
• It waits until some other process signals that the resource is available.
🔓 wakeup(P):
• When signal() is called and a process is waiting, it removes one process from the
waiting queue.
• That process is then moved to the ready queue so it can continue running.
typedef struct {
    int value;
    struct process *list;
} semaphore;

• wait() semaphore operation can be:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();   // Suspend process
    }
}

• signal() semaphore operation can be:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);   // Resume process P
    }
}
Deadlock and Starvation:
• Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only
one of the waiting processes.
• Let S and Q be two semaphores initialized to 1.
• Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations can never be executed, P0 and P1 are deadlocked.
P0              P1
wait(S);        wait(Q);
wait(Q);        wait(S);
...             ...
signal(S);      signal(Q);
signal(Q);      signal(S);
Now We Have a Deadlock
• P0 is waiting for Q, which is held by P1
• P1 is waiting for S, which is held by P0
• Neither can proceed
• No one can call signal()
• System is stuck → DEADLOCK
• Starvation – indefinite blocking(waiting)
– A process may never be removed from the semaphore queue in which it is
suspended.

• Priority Inversion – A low-priority process holds a resource (lock) that a high-priority process needs. The high-priority process is forced to wait, even though it should be running first.
Classic Problems of Synchronization:
1. The Bounded Buffer Problem:
• The Bounded Buffer Problem (also known as the Producer-Consumer
Problem) is a classic synchronization problem where n buffers are shared
between producer and consumer processes. Each buffer can hold only one
item at a time.
• A semaphore mutex is used to ensure mutual exclusion while accessing the
buffer pool.
• It is initialized to 1.
• A semaphore full keeps track of the number of filled buffers and is
initialized to 0.
• A semaphore empty counts the number of available buffers and is
initialized to n.
Structure of the Producer Process:
do {
    ...
    /* Produce an item in next_produced */
    ...
    wait(empty);    // Ensure there is space in the buffer
    wait(mutex);    // Lock access to the buffer
    ...
    /* Add next_produced to the buffer */
    ...
    signal(mutex);  // Release buffer access
    signal(full);   // Increase the count of filled buffers
} while (true);
Structure of the Consumer Process:
do {
    wait(full);     // Ensure there is an item to consume
    wait(mutex);    // Lock access to the buffer
    ...
    /* Remove an item from buffer to next_consumed */
    ...
    signal(mutex);  // Release buffer access
    signal(empty);  // Increase the count of empty buffers
    ...
    /* Consume the item in next_consumed */
    ...
} while (true);
2.The Readers-Writers Problem:
The Readers-Writers Problem is a synchronization issue that arises
when multiple processes need to access a shared data set.
• Readers: Processes that only read the data set and do not modify it.
• Writers: Processes that can both read and write to the data set.
Problem Constraints:
• Multiple readers can access the shared data simultaneously without
interference.
• Only one writer can access the shared data at any given time to
prevent data inconsistency.
• Different variations of this problem involve priority considerations
between readers and writers.
Shared Data and Semaphores:
• To ensure synchronization, the following are used:
• Semaphore wrt (initialized to 1): Ensures mutual exclusion for writers. Only
one writer can access the data at a time.
• Semaphore mutex (initialized to 1): Ensures mutual exclusion when updating
the reader count (readcount).
• Integer readcount (initialized to 0): Tracks the number of active readers.
Synchronization Rules:
• Readers should not block each other while reading, even if a writer is waiting.
• Writers should access the shared data as soon as they are ready and prevent
new readers from starting while waiting.
• The first reader locks the resource for readers using wrt.
• The last reader releases the lock, allowing a waiting writer to proceed.
Structure of a Writer Process
A writer must wait for exclusive access before performing the write
operation and then release the lock once done.
do {
    wait(wrt);      // Lock access for writers

    // Writing is performed

    signal(wrt);    // Release access for other writers or readers
} while (TRUE);
Problem Context: Readers-Writers Problem
• In a shared system:
• Multiple readers can read the shared data simultaneously.
• Only one writer can write at a time.
• No reader should read while a writer is writing (to avoid reading
inconsistent data).
• So, writers need exclusive access to the shared resource.
1. do { ... } while (TRUE);
This means the writer process runs continuously in a loop.
The writer keeps trying to write whenever it gets permission.
2. wait(wrt);
• This is a synchronization primitive (also called P() or down() in
some systems).
• wrt is a semaphore that controls access to the shared resource.
• When the writer calls wait(wrt):
• It waits if another writer is currently writing.
• It also waits if any readers are reading.
• It gets exclusive access only when:
• No other writers are writing.
• No readers are reading.
3.signal(wrt);
• This is the unlock operation (also called V() or up()).
• It signals that:
• The writer has finished writing.
• The shared resource is now free for other writers or readers.
Imagine a whiteboard in a meeting room:
• Readers are students coming to read the notes — many can read at once.
• Writers are teachers updating the notes — only one can write at a time, and
no students can read while writing.
• Writer Process Analogy:
• The teacher waits for an empty room (wait(wrt)).
• Then she writes on the board (critical section).
• Once done, she leaves the room (signal(wrt)) allowing others to enter.
Structure of a Reader Process:
A reader must update the readcount variable safely and ensure proper synchronization with other readers
and writers.
do {
    wait(mutex);        // Lock access to modify readcount
    readcount++;
    if (readcount == 1)
        wait(wrt);      // First reader locks writers out
    signal(mutex);      // Release lock

    // Reading is performed

    wait(mutex);        // Lock to update readcount
    readcount--;
    if (readcount == 0)
        signal(wrt);    // Last reader allows writers
    signal(mutex);      // Release lock
} while (TRUE);
1. wait(mutex);
• The reader locks mutex so that no other reader can change readcount at the same
time.
• This prevents a race condition (where two readers try to modify readcount at the same
time).
2. readcount++;
• The reader increments readcount because it is now reading.
• readcount tells how many readers are reading right now.
3. if (readcount == 1) wait(wrt);
• If this is the first reader, it blocks writers using wait(wrt).
• That way, no writer can write while even one reader is reading.
• Other readers can continue entering without blocking writers again (as readcount >
1 won't call wait(wrt) again).
4. signal(mutex);
• The critical section around readcount is done, so release mutex.
• Now, other readers can safely enter.
5. Reading is performed
• The reader reads the shared data.
• Multiple readers can be reading here at the same time, safely.
6. wait(mutex);
• Before changing readcount again (on exit), acquire the mutex.
• This ensures only one reader modifies readcount at a time.
7. readcount--;
• The reader is leaving, so decrease the readcount.
8. if (readcount == 0) signal(wrt);
• If this is the last reader, it unblocks writers by calling signal(wrt).
• Now, writers are allowed to proceed.
9. signal(mutex);
• Unlock the mutex so other readers can update readcount.
Dining Philosophers Problem:
• Multiple philosophers sit around a table, spending their lives alternating between thinking and eating.
• They do not interact with their neighbors but occasionally try to pick up two chopsticks (one at a time) to eat from a bowl.
• A philosopher needs both chopsticks to eat and must release both when done.
• Philosopher – a process/thread
• Chopstick – a shared resource (mutex/semaphore)
• Thinking – doing non-critical operations
• Eating – performing the critical section (needs resource access)
• Deadlock – all philosophers hold one chopstick and wait forever
• Starvation – a philosopher never gets a chance to eat
For a System with 5 Philosophers:
• Shared Data: Bowl of rice (shared data set).
• Semaphore chopstick[5], initialized to 1, to represent the availability
of each chopstick.
Structure of Philosopher i:
do {
wait(chopstick[i]); // pick left chopstick
wait(chopstick[(i+1)%5]); // pick right chopstick
eat();
signal(chopstick[i]); // put left chopstick
signal(chopstick[(i+1)%5]); // put right chopstick
} while (TRUE);
• The philosopher waits for both chopsticks before eating.
• After finishing the meal, the philosopher releases both chopsticks and
resumes thinking.
Synchronization Approach:
• A philosopher is allowed to pick up chopsticks only if both are
available.
• To prevent deadlock, an asymmetric solution is used:
• Odd-numbered philosophers pick up their left chopstick first, then the right
chopstick.
• Even-numbered philosophers pick up their right chopstick first, then the left
chopstick.
Monitors
• A high-level abstraction that provides a convenient and effective
mechanism for process synchronization.
• It is an abstract data type, where internal variables can only be
accessed by the code inside the monitor's procedures.
• Only one process can be active within the monitor at a time, ensuring
mutual exclusion.
• However, monitors are not powerful enough to model certain
complex synchronization schemes.
Monitor Structure:
monitor monitor-name {
// Shared variable declarations
procedure P1 (…) {
… }
procedure Pn (…) {
… }
// Initialization code
Initialization (…) {
… }
}
• Shared variables are declared inside the monitor.
• Procedures (P1, P2, ..., Pn) operate on shared data but can only be accessed from within the
monitor.
• Initialization code is used to set up the monitor's internal state.
Schematic view of a monitor
Monitor: Condition Variables & Choices
• If process P invokes x.signal(), and process Q is suspended in x.wait(),
a decision must be made on which process proceeds next.
• P and Q cannot execute in parallel—if Q is resumed, then P must wait.
Options for Handling x.signal() and x.wait()
1.Signal and Wait
• Process P waits until Q either:
• Leaves the monitor, or
• Waits for another condition.
2.Signal and Continue
• Process Q waits until P either:
• Leaves the monitor, or
• Waits for another condition.
Dining Philosophers Problem Solution Using Monitor
The Dining Philosophers Problem can be solved using monitors to ensure controlled
access to shared resources (chopsticks). This approach prevents deadlock, but
starvation is possible.

monitor DiningPhilosophers {
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;           // Philosopher is hungry
        test(i);                     // Check if philosopher can eat
        if (state[i] != EATING)      // If philosopher cannot eat, wait
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;         // Philosopher finishes eating
        // Test left and right neighbors to see if they can eat
        test((i + 4) % 5);           // Check left neighbor (i - 1)
        test((i + 1) % 5);           // Check right neighbor (i + 1)
    }

    void test(int i) {
        // Philosopher i may eat only if hungry and neither neighbor is eating
        if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;       // Philosopher can eat now
            self[i].signal();        // Wake up philosopher to eat
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;     // Initially, all philosophers are thinking
    }
}
Philosopher Actions
• Each philosopher invokes the following operations to pick up and put
down chopsticks:
DiningPhilosophers.pickup(i);    // Philosopher i picks up chopsticks
    /* eat */
DiningPhilosophers.putdown(i);   // Philosopher i puts down chopsticks
Synchronization Examples:
• Based on Windows:
Uniprocessor Systems: Use interrupt masks to protect access to global resources.
Multiprocessor Systems: Use spinlocks, ensuring that the spinlocking thread is
never preempted.
Dispatcher Objects in User-Land: Act as mutexes, semaphores, events, and timers.
-> Events: Function like condition variables.
-> Timers: Notify one or more threads when the set time expires.
Dispatcher Objects States:
• Signaled State: Object is available.
• Non-Signaled State: Thread will block.
Based on Linux:
• Linux Synchronization Evolution:
Before Kernel 2.6: used interrupt disabling to implement short critical sections.
From Kernel 2.6 onward: the kernel became fully preemptive.
• Linux provides various synchronization mechanisms:
Semaphores
Atomic integers
Spinlocks
Reader-writer versions of both semaphores and spinlocks
• Single-CPU systems:
Spinlocks are replaced by enabling and disabling kernel preemption.
Atomic Variables in Linux
• Atomic integer type: atomic_t is used to define atomic integer variables.
• Example declarations:
atomic_t counter;  → declares an atomic integer variable named counter.
int value;         → declares a normal integer variable named value.
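A short sketch of how these are used (atomic_set, atomic_add, atomic_inc, and atomic_read are the kernel's standard atomic-integer operations; the sequence itself is illustrative):

atomic_t counter;
int value;

atomic_set(&counter, 5);        /* counter = 5, atomically */
atomic_add(10, &counter);       /* counter += 10 */
atomic_inc(&counter);           /* counter++ */
value = atomic_read(&counter);  /* value = 16, read atomically */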
Process Scheduling
• Process Scheduling is the method used by the operating system (OS)
to decide which process will run on the CPU next. Since multiple
processes run in a system, the OS manages them efficiently to ensure
smooth execution.
Criteria:
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
• Thread: a unit of execution within a process.
• Process: a running instance of a program that may have one or more threads.
• CPU Utilization: keeping the CPU as busy as possible.
• Throughput: number of processes completed per time unit.
• Turnaround Time (TAT): total time to execute a process.
• Waiting Time (WT): time a process spends in the ready queue.
• Response Time: time from request submission to the first response.
• Burst Time: the total time a process needs the CPU for execution.
Deadlocks
• A deadlock is a situation in which two or more processes (or threads)
are unable to proceed because they are each waiting for a resource
held by another process, creating a cycle of dependency.
• This results in an indefinite blocking state where none of the
processes can make progress.
Deadlock Condition:
• A deadlock occurs when every process in a set is waiting for a
resource that is held by another process in the same set.
• This causes an indefinite wait where no process can proceed
For example: Process 1 holds Resource 1 and requests Resource 2, while Process 2 holds Resource 2 and requests Resource 1; each waits for the other, so neither can proceed.
System Model:
• A system consists of a finite number of resources (e.g., memory,
printers, CPUs).
• These resources are distributed among multiple processes.
• A process must:
->Request a resource before using it.
->Release the resource after using it.
• A process can request any number of resources needed for a task.
• The total number of resources requested must not exceed the total
available resources.
Normal Process Execution Sequence:
1.Request :
• If the resource is not available, the process must wait.
• Example: open(), malloc(), new(), request().
2.Use:
• The process performs operations using the resource.
• Example: Printing to a printer, reading from a file.
3.Release:
• The process releases the resource, making it available for others.
• Example: close(), free(), delete(), release().
Deadlock Characterization:
Deadlock can arise if the following four conditions hold simultaneously
Necessary Conditions:
1.Mutual Exclusion –
At least one resource must be kept in a non-shareable state; if another process
requests it, it must wait for it to be released.
2.Hold and Wait –
A process must hold at least one resource while also waiting for at least one resource
that another process is currently holding.
3.No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot
be taken away from that process until the process voluntarily releases it.
4.Circular Wait –
There must be a set of processes {P0, P1, P2, …, PN} such that every Pi is waiting for a resource held by P(i + 1) % (N + 1). (Note that this condition implies the hold-and-wait condition, but dealing with the four conditions is easier if they are considered separately.)
Resource Allocation Graph (RAG)
• A Resource Allocation Graph (RAG) is a pictorial
representation of the system's state, showing how resources
are allocated to processes.
• It provides complete information about which processes
hold resources and which are waiting for resources.
• Deadlocks can be better understood using a directed graph
known as a system resource-allocation graph, which consists
of:
• Vertices (V): Represent processes and resources.
• Edges (E): Represent relationships between processes and
resources.
Components of RAG
1. Nodes (Vertices):
• Processes (P1, P2, ..., Pn): represented as circles (○), indicating all active processes in the system.
• Resources (R1, R2, ..., Rm): represented as rectangles (□), indicating the different types of resources in the system.
2. Edges (Arrows):
• Request Edge (Pi → Rj): indicates that process Pi has requested resource Rj and is waiting for it.
• Assignment Edge (Rj → Pi): indicates that resource Rj has been allocated to process Pi, meaning the process is using the resource.
How RAG Works?
• Each process (Pi) is represented as a circle (○).
• Each resource (Rj) is represented as a rectangle (□).
• If a resource has multiple instances, each instance is represented as a dot
inside the rectangle.
• A request edge (Pi → Rj) points to the rectangle (resource type), indicating
that the process is waiting for the resource.
• An assignment edge (Rj → Pi) points to one of the dots in the rectangle,
indicating that the process is using a specific instance of the resource.
• When a process requests a resource, a request edge is added to the graph.
• If the resource is available, the request edge is converted into an assignment
edge.
• When the process releases the resource, the assignment edge is removed
from the graph.
RAG with deadlock
Step-by-Step Analysis of the Graph
1. P1 is holding an instance of R2 and requesting R1.
2. P2 is holding R1 and R2 and requesting R3.
3. P3 is holding R3 and requesting R2.
4. R4 is not involved in any allocation.
RAG with Cycle, No Deadlock
Step-by-Step Analysis
1. P1 is requesting R2, while P2 and P3 are holding instances of R1.
2. P4 is requesting R2, and P3 is holding an instance of R1.
3. P1 is also holding an instance of R1, and R2 is allocated to P3 and P4.
Although the graph contains a cycle, the processes that hold resources but have no pending requests can finish and release their instances, breaking the cycle; hence there is no deadlock.
Methods for handling deadlocks:
Deadlocks occur when a group of processes is stuck in a circular wait, each waiting for a resource held by another process.
There are three main strategies to handle deadlocks:
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection & Recovery
1.Deadlock Prevention
Deadlock prevention ensures that at least one of the four necessary
conditions for deadlock does not hold. The four conditions are Mutual
Exclusion, Hold and Wait, No Preemption, and Circular Wait. Below
are the strategies to prevent each condition:
1. Mutual Exclusion (ME)
• Mutual exclusion is required for non-sharable resources.
• Example: A printer cannot be shared by multiple processes at the same time.
• Mutual exclusion is not required for sharable resources, meaning
they cannot be involved in deadlock.
• Example: Read-only files can be accessed by multiple processes
simultaneously.
• Since some resources are inherently non-sharable, deadlock cannot
be prevented by removing mutual exclusion.
2. Hold and Wait (Avoiding Hold and Wait)
A process should not hold any resources while waiting for additional
ones. This can be achieved using the following protocols:
Protocol 1: Request All Resources at the Beginning
• A process must request all required resources before starting
execution.
• Example: If a process needs a DVD drive, disk file, and printer, it
must request all at the start, even if it only needs the printer at the
end.
• Disadvantage: Leads to low resource utilization, as resources may
remain unused for long periods.
Protocol 2: Release Resources Before Requesting New Ones
• A process can request resources only if it holds none.
• Example:
• A process first requests a DVD drive and disk file, copies data, and then
releases both.
• It then requests a disk file and printer, prints data, and releases them.
• Disadvantage: Can lead to starvation, as some processes may wait
indefinitely for popular resources.
3. No Preemption
• If a process holding resources requests another unavailable resource, it must
release all currently held resources before waiting.
• Preempted resources are added to the list of requested resources.
• The process resumes execution only when it reacquires both the preempted
resources and the newly requested ones.
Steps for No Preemption Strategy:
1. If a process requests a resource:
• If available → allocate.
• If unavailable but held by another waiting process → preempt the resource and allocate it.
• If unavailable and held by an active process → the requesting process must wait.
2. While waiting, its resources may be preempted if another process requests them.
3. The process resumes only when it reacquires both the old and the newly requested resources.
4. Circular Wait (Breaking Circular Wait Condition)
• To prevent circular wait, a system enforces an ordering of resource
requests.
• Method: Assign a unique number to each resource and require processes
to request resources in increasing numerical order.
Example:
• Suppose resources are numbered:
• R1 = 1, R2 = 2, R3 = 3, ...
• A process holding R2 can only request R3 or higher but not R1.
• This prevents circular dependency, breaking the cycle that leads to
deadlock.
Challenge:
• Deciding the correct ordering of resources for efficient resource allocation.
Deadlock Avoidance:
Deadlock avoidance requires the system to have prior knowledge
about resource requests to ensure that a deadlock never occurs. The
simplest and most useful approach requires each process to declare
the maximum number of resources it may need.
A deadlock-avoidance algorithm continuously checks the resource-
allocation state to prevent circular waiting. The resource-allocation
state consists of:
• The number of available resources.
• The number of allocated resources.
• The maximum demand of each process.
Safe State:
• When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
• That is:
– If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
– When Pj is finished, Pi can obtain its needed resources, execute, return its allocated resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
• If a system is in a safe state ⇒ no deadlocks.
• If a system is in an unsafe state ⇒ possibility of deadlock.
• Avoidance ⇒ ensure that the system never enters an unsafe state.
Deadlock Avoidance Algorithms
• Single instance of a resource type – use a resource-allocation graph.
• Multiple instances of a resource type – use the banker's algorithm.
Resource-Allocation Graph Algorithm

• A claim edge (Pi → Rj) indicates that process Pi may request resource
Rj; represented by a dashed line.
• When Pi requests Rj, the claim edge converts to a request edge.
• If the request is granted, the request edge converts to an assignment
edge.
• Once the resource is released, the assignment edge reconverts to a
claim edge.
• All claim edges must be present before a process can request any
resource.
• A request is granted only if it does not create a cycle in the resource-
allocation graph.
• If granting the request would create a cycle, the request is denied.
Resource-Allocation Graph for Deadlock Avoidance
Unsafe State in RAG
• A request can be granted only if converting the request edge to an assignment edge does not result in a cycle in the resource-allocation graph. If a cycle would be formed, the request is denied to avoid deadlock.
Banker’s Algorithm
The Banker’s Algorithm is used when multiple instances of a resource
type exist. Each process must declare its maximum resource need
beforehand.
If a process requests resources, the system checks whether granting
them will keep the system in a safe state.
Let n be the number of processes and m be the number of resource
types.
1. Available: Vector showing how many instances of each resource
type are free.
2. Max: Matrix showing the maximum demand of each process
3. Allocation: Matrix showing the number of resources currently
allocated to processes
4. Need: Matrix calculated as: Need = Max - Allocation
Banker’s Safety Algorithm
1.Initialize Work = Available (Work is a temporary list showing how many free
resources are left.)
Finish[i] = false for all i. (tracks if a process is done. Initially, no process is
finished.)
2.Find a process Pi where:
Finish[i] = false (the process has not finished)
Need[i] ≤ Work If no such process exists, go to step 4.(it can
finish with the currently available resources)
3.Allocate resources temporarily:
Work = Work + Allocation[i]
Finish[i] = true
Repeat from step 2.
4.If all Finish[i] == true, the system is in a safe state.
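A compact C sketch of this safety check (N and M are illustrative fixed sizes; in a real system they would come from the OS's bookkeeping):

#include <stdbool.h>

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

/* Returns true if the system is in a safe state. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)          /* Step 1: Work = Available */
        work[j] = available[j];

    for (int pass = 0; pass < N; pass++) {
        bool found = false;
        for (int i = 0; i < N; i++) {    /* Step 2: find Pi with Need <= Work */
            if (finish[i])
                continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                  /* Step 3: Pi can finish, releases all */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
            }
        }
        if (!found)
            break;                       /* no further process can proceed */
    }
    for (int i = 0; i < N; i++)          /* Step 4: safe iff all can finish */
        if (!finish[i])
            return false;
    return true;
}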
Banker’s Resource Request Algorithm
When a process Pi requests resources (Request_i):
1. If Request_i ≤ Need_i, go to Step 2.
Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Request_i ≤ Available, go to Step 3.
Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i – Request_i
4. Run the Safety Algorithm:
If safe → the resources are allocated to Pi.
If unsafe → Pi must wait, and the old resource-allocation state is restored.
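Continuing the same sketch, the resource-request step for process i (is_safe is the function sketched above; a false return means the request is denied or the process must wait):

bool request_resources(int i, int request[M], int available[M],
                       int allocation[N][M], int need[N][M]) {
    for (int j = 0; j < M; j++)              /* Step 1: within declared maximum? */
        if (request[j] > need[i][j])
            return false;                    /* error: exceeds maximum claim */
    for (int j = 0; j < M; j++)              /* Step 2: enough free resources? */
        if (request[j] > available[j])
            return false;                    /* process must wait */
    for (int j = 0; j < M; j++) {            /* Step 3: pretend to allocate */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe(available, allocation, need))
        return true;                         /* Step 4: safe, grant the request */
    for (int j = 0; j < M; j++) {            /* unsafe: restore the old state */
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;                            /* process must wait */
}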
1. Request Made:
• Process P makes a request for some resources.
2. Check Validity:
• If Request ≤ Need[P], continue.
• Else, it's an error (the process asked for more than its declared maximum).
3. Check Availability:
• If Request ≤ Available, proceed.
• Else, the process must wait (resources not available now).
4. Try Allocation (Trial):
• Temporarily allocate the requested resources.
• Update Available, Allocation, and Need.
5. Check for Safe State:
• Use a safety algorithm to simulate whether all processes can finish.
• If safe → grant the request.
• If unsafe → roll back and deny the request.
Deadlock Detection
1. If Resources Have a Single Instance
• In this case, deadlock detection can be done by running an algorithm that checks for a cycle in the Resource Allocation Graph. The presence of a cycle in the graph is a sufficient condition for deadlock.
• Example: if resources R1 and R2 each have single instances and the graph contains the cycle R1 → P1 → R2 → P2 → R1, deadlock is confirmed.
2. If There are Multiple Instances of Resources
• Detection of the cycle is necessary but not a sufficient condition for deadlock
detection, in this case, the system may or may not be in deadlock varies according to
different situations.
• For systems with multiple instances of resources, algorithms like Banker’s Algorithm
can be adapted to periodically check for deadlocks.
3. Wait-For Graph Algorithm
• The Wait-For Graph Algorithm detects deadlocks in systems where each resource has a single instance. It is derived from the Resource Allocation Graph by collapsing the resource nodes: an edge Pi → Pj means Pi is waiting for a resource held by Pj. The system is deadlocked if and only if the wait-for graph contains a cycle.
Deadlock Recovery
• Killing Processes: kill all the processes involved in the deadlock, or kill them one by one, checking for deadlock after each kill and repeating until the system recovers. Killing processes breaks the circular-wait condition.
• Process Rollback: Rollback deadlocked processes to a previously saved
state where the deadlock condition did not exist. It requires checkpointing
to periodically save the state of processes.
• Resource Preemption: Resources are preempted from the processes
involved in the deadlock, and preempted resources are allocated to other
processes so that there is a possibility of recovering the system from the
deadlock. In this case, the system goes into starvation.
• Concurrency Control: Concurrency control mechanisms are used to
prevent data inconsistencies in systems with multiple concurrent
processes. These mechanisms ensure that concurrent processes do not
access the same data at the same time, which can lead to inconsistencies
and errors.
Demand Paging
Demand Paging is a memory management technique used in virtual memory systems where pages of a process are loaded into memory only when they are needed, rather than loading the entire process into memory at once.
->Less I/O needed – no unnecessary I/O.
->Less memory required.
->Faster response.
-> More users can run programs concurrently.
• Works similarly to a paging system with swapping.
• A page is needed ⇒ it is referenced:
->Invalid reference ⇒ abort.
->Not in memory ⇒ bring the page into memory.
• Lazy swapper – only swaps a page into memory if it will be needed.
• A swapper that works with pages is called a pager.
Concepts of Demand Paging:
• Page: A fixed-size block of logical memory.
• Frame: A fixed-size block of physical memory.
• Page Table: Keeps track of which pages are in memory.
• Page Fault: Occurs when a program tries to access a page that is not currently in memory.
How Demand Paging Works:
• Initially, no pages of the process are loaded into RAM (except maybe a
few essential ones).
• When the CPU tries to access a page that is not in memory, a page
fault occurs.
• The OS:
• Pauses the process.
• Loads the required page from disk into memory.
• Updates the page table.
• Resumes the process.
Example: Let’s say we have a process with 5 pages: P0, P1, P2, P3, P4 And only 3 frames in physical memory (RAM).
1. Initial Memory:
• Physical Memory: [ - ][ - ][ - ] ← All frames are empty
2. Process tries to access page P2:
• P2 not in memory → Page Fault
• OS loads P2 into first available frame.
• Physical Memory: [ P2 ][ - ][ - ]
3. Now process accesses P0:
• P0 not in memory → Page Fault
• OS loads P0.
• Physical Memory: [ P2 ][ P0 ][ - ]
4. Next, access P1:
• Page Fault again → Load P1
• Physical Memory: [ P2 ][ P0 ][ P1 ]
5. Now process accesses P3:
• No free frame → OS must replace one page using a page replacement algorithm (e.g., FIFO, LRU).
• Let’s say P2 is replaced (FIFO).
• Physical Memory: [ P3 ][ P0 ][ P1 ]
What is File Allocation?
• File allocation is a method used by the Operating System to assign disk space
to files. Since disks are divided into fixed-size blocks (or sectors), the OS must
decide how to store and manage files in these blocks efficiently and reliably.
Types of File Allocation Methods
• There are three main types of file allocation:
1. Contiguous Allocation
Definition:
• In contiguous allocation, each file is stored in a set of blocks that are physically
next to each other on the disk.
How it works:
• When a file is created, the operating system searches for a block of free space large
enough to store the entire file as one continuous sequence.
• The directory entry of the file stores:
• Starting block number
• File length (in blocks)
Example:
• A file named file1.txt of size 5 blocks is stored starting from block 100:
Disk layout: [100][101][102][103][104] ← all five consecutive blocks hold file1.txt
• Its directory entry stores: start = 100, length = 5.
2. Linked Allocation
Definition:
• In this method, each file is stored as a linked list of disk blocks. The blocks can be
placed anywhere on the disk, not necessarily contiguous.
How it works:
• Each block contains:
• File data
• A pointer (address) to the next block
• The directory entry stores the first block (start block)
Example:
• File file2.txt stored in blocks 5 → 9 → 13 → 17:
• Block 5: [Data | → 9]
• Block 9: [Data | → 13]
• Block 13: [Data | → 17]
• Block 17: [Data | NULL]
3. Indexed Allocation
Definition:
• Each file has its own index block, which contains all the addresses (pointers) of the data blocks for that file.
• How it works:
• The directory entry contains a pointer to the index block
• The index block contains a list of all data blocks
• All data blocks can be scattered across the disk
• Example:
• File file3.txt has its index block at 200 and stores data in blocks 9, 14, 25, 30, 45:
• Index block (200): [ 9 | 14 | 25 | 30 | 45 ]
• Block 9: [Data]
• Block 14: [Data]
• Block 25: [Data]
• Block 30: [Data]
• Block 45: [Data]
Disk Scheduling
• Disk head movement is slow compared to CPU and memory.
• Efficient scheduling minimizes the seek time (time to move the head to the
track).
• Goal: Serve pending requests in an optimal sequence.
• The main Disk Scheduling Algorithms are:
FCFS (First Come First Serve)
SSTF (Shortest Seek Time First)
SCAN Scheduling
C-SCAN Scheduling
LOOK Scheduling
C-LOOK Scheduling
1. FCFS (First Come First Serve)
Requests are served in the order they arrive.
Simple but may lead to long wait times if requests are scattered.
Example:
Disk head starts at track 53.
Request queue: 98, 183, 37, 122, 14, 124, 65, 67
Seek sequence:
53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67
Total head movement =
|53-98| + |98-183| + |183-37| + ... = 640 tracks
Advantage: Simple
Disadvantage: Long average wait time
1. FCFS (First Come First Serve)
Step-by-step movement:
• Start at 53
• Move to 98 → 45 tracks
• Move to 183 → 85 tracks
• Move to 37 → 146 tracks
• Move to 122 → 85 tracks
• Move to 14 → 108 tracks
• Move to 124 → 110 tracks
• Move to 65 → 59 tracks
• Move to 67 → 2 tracks
Total head movement = 640 tracks
2. SSTF (Shortest Seek Time First)
• Selects the request closest to the current head position.
• Greedy algorithm – minimizes immediate seek time.
• Example:
• Head at 53
• Queue: 98, 183, 37, 122, 14, 124, 65, 67
• Seek sequence:
53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183
Total head movement = 236 tracks
Advantage: Better than FCFS
Disadvantage: Starvation for far requests
SSTF (Shortest Seek Time First)
• Always choose the nearest request from current head:
• Step-by-step movement:
• Start at 53
• Nearest = 65 (12 tracks)
• Next nearest = 67 (2 tracks)
• Nearest = 37 (30 tracks)
• Nearest = 14 (23 tracks)
• Nearest = 98 (84 tracks)
• Next = 122 (24 tracks)
• Next = 124 (2 tracks)
• Last = 183 (59 tracks)

• Total head movement ≈ 236 tracks


3. SCAN (Elevator Algorithm)
• Moves the head in one direction (up/down), servicing requests.
• At the end, reverses direction and continues.
• Example:
• Head at 53, moving toward higher tracks
• Queue: 98, 183, 37, 122, 14, 124, 65, 67
• Seek sequence:
(Upwards) 53 → 65 → 67 → 98 → 122 → 124 → 183 → [reverse] → 37 → 14
Total head movement = 130 + 169 = 299 tracks (reversing at the last request).
(Downwards) 53 → 37 → 14 → 0 → [reverse] → 65 → 67 → 98 → 122 → 124 → 183
Total head movement = 53 + 183 = 236 tracks (this variant is tabulated below).
Advantage: More fair than SSTF
Disadvantage: Slightly more total seek than SSTF
SCAN (Elevator Algorithm)
• Head moves in one direction (up) and services all in that direction, then reverses.
• Step-by-step movement:
• Start at 53, go up:
• 65 → 12 tracks
• 67 → 2 tracks
• 98 → 31 tracks
• 122 → 24 tracks
• 124 → 2 tracks
• 183 → 59 tracks
• Reverses direction:
• 37 → 146 tracks
• 14 → 23 tracks
Total head movement = 130 + 169 = 299 tracks
SCAN moving downward first (servicing 37 and 14, reaching track 0, then reversing):
From → To     Movement
53 → 37       16
37 → 14       23
14 → 0        14
0 → 65        65
65 → 67       2
67 → 98       31
98 → 122      24
122 → 124     2
124 → 183     59
Total         236 tracks
4. LOOK
• Like SCAN, but only goes as far as the last request in each direction
(doesn’t go to disk end).
• Seek sequence:
53 → 65 → 67 → 98 → 122 → 124 → 183 → reverse →
37 → 14
(same as SCAN, if SCAN didn’t go to end)
Advantage: Avoids unnecessary travel
Disadvantage: Still possible to delay far requests
LOOK moving downward first (reversing at the last request, 14, instead of track 0):
From → To     Tracks Moved
53 → 37       16
37 → 14       23
14 → 65       51
65 → 67       2
67 → 98       31
98 → 122      24
122 → 124     2
124 → 183     59
Total         208 tracks

5. C-SCAN (Circular SCAN)
• Moves in one direction only.
• After reaching the last request, jumps to the beginning without
servicing requests.
• Seek sequence:
53 → 65 → 67 → 98 → 122 → 124 → 183 → (jump to
start) → 14 → 37
• Advantage: Uniform wait time
Disadvantage: Jump cost
6. C-LOOK
• Like C-SCAN, but only goes as far as last request (not end of disk).
• Then jumps to the lowest request and continues.
• Seek sequence:
53 → 65 → 67 → 98 → 122 → 124 → 183 → jump →
14 → 37
Advantage: Uniform wait time with less jump cost
Disadvantage: Can still favor mid-range requests
Step-by-step movement:
• Start at 53, go up:
• 65 → 12 tracks
• 67 → 2 tracks
• 98 → 31 tracks
• 122 → 24 tracks
• 124 → 2 tracks
• 183 → 59 tracks
• Jump to the lowest pending request:
• 183 → 14 → 169 tracks
• 14 → 37 → 23 tracks
Total head movement ≈ 322 tracks (counting the jump from 183 to 14)

Page Replacement Algorithm
• In virtual memory systems, page replacement algorithms decide which memory page to remove (swap out) when a new page needs to be loaded and physical memory (RAM) is already full.
Why Page Replacement?
• The CPU generates addresses for pages not in memory → a page fault occurs.
• If memory is full, an existing page must be replaced.
• Goal: minimize the number of page faults.
In Virtual Memory Systems:
• The CPU generates logical addresses to access memory.
• If the required page is already in main memory (RAM) → Page Hit.
• If the required page is not in memory and must be fetched from disk → Page Miss (also called a Page Fault).
Page Replacement Algorithms
1. FIFO (First-In, First-Out)
• Oldest page is replaced first.
• Simple queue-based approach.
• This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example Problem (FIFO)
• Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3-page frames. Find the
number of page faults using FIFO Page Replacement algorithm
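A worked trace for this example, in the same style as the table below:
Step  Reference  Frame Content      Hit/Miss
1     1          [1]                Miss
2     3          [1, 3]             Miss
3     0          [1, 3, 0]          Miss
4     3          [1, 3, 0]          Hit
5     5          [3, 0, 5] (1 out)  Miss
6     6          [0, 5, 6] (3 out)  Miss
7     3          [5, 6, 3] (0 out)  Miss
Total page faults = 6, hits = 1.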
2. Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3
Number of Frames: 3
Step  Reference  Frame Content      Hit/Miss
1     7          [7]                Miss
2     0          [7, 0]             Miss
3     1          [7, 0, 1]          Miss
4     2          [0, 1, 2] (7 out)  Miss
5     0          [0, 1, 2]          Hit
6     3          [1, 2, 3] (0 out)  Miss
7     0          [2, 3, 0] (1 out)  Miss
8     4          [3, 0, 4] (2 out)  Miss
9     2          [0, 4, 2] (3 out)  Miss
10    3          [4, 2, 3] (0 out)  Miss
11    0          [2, 3, 0] (4 out)  Miss
12    3          [2, 3, 0]          Hit
Total Page Faults (Misses): 10
Total Hits: 2
2. LRU (Least Recently Used)
LRU replaces the page that has not been used for the longest time when a
new page needs to be loaded and memory is full.
• It is based on the idea that recently used pages are more likely to be used
again.
• Tracks the usage history of each page. (e.g., stack or counter).
• When a page is used/accessed, it becomes the most recently used.
• When a replacement is needed, the page that was least recently accessed is
removed.
Example Problem (LRU)
• Same reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
• Frames: 4
• Maintain a record (stack or list) of page usage order:
• When a page is used/accessed, it becomes the most recently used.
• When a replacement is needed, the page that was least recently accessed is removed.
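Tracing this string with 4 frames: the first references to 7, 0, 1, and 2 fault; 3 then replaces the least recently used page (7), and 4 replaces the least recently used page at that point (1); every remaining reference hits. Total: 6 page faults, 8 hits.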
3. Optimal Page Replacement
• Replaces the page that won't be used for the longest time in the future.
• Gives the best possible result (used as a benchmark for comparison).
• Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the Optimal Page Replacement Algorithm.
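One optimal trace: the first four references (7, 0, 1, 2) fault; 3 replaces 7 and 4 replaces 1, since neither 7 nor 1 is ever used again; all other references hit. Total: 6 page faults, 8 hits, the minimum possible for this string.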
4. Most Recently Used (MRU)
• In this algorithm, the page that was used most recently is replaced.
• Example 4: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the MRU Page Replacement Algorithm.
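A hand trace (each fault evicts the most recently used page) gives hits only on the fifth reference (0) and the ninth (2), with every other reference faulting: 12 page faults, 2 hits.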
