
UNIT II PROCESS SCHEDULING AND SYNCHRONIZATION

CPU scheduling – Scheduling criteria – Scheduling algorithms – Multiple Processor


scheduling – Real time scheduling – Algorithm evaluation – Case study – Process
scheduling in Linux – Process synchronization – The critical-section problem –
Synchronization hardware – Semaphores – Classic problems of synchronization –
Critical regions – Monitors – Deadlock system model – Deadlock characterization –
Methods for handling deadlocks – Deadlock prevention – Deadlock avoidance –
Deadlock detection – Recovery from deadlock.
2.1 CPU SCHEDULING
CPU scheduling is the task of selecting a waiting process from the ready queue and allocating
the CPU to it. CPU scheduling is the basis of multiprogrammed operating systems.
CPU scheduling is the process of deciding which of the processes in the ready queue is to be
allocated the CPU. The main objective of scheduling is to increase CPU utilization and throughput.
2.1.1 Basic Concepts
The objective of multiprogramming is to have some process running at all times, in order to
maximize CPU utilization.
A uniprocessor system can have only one running process. If more processes exist, the rest
must wait until the CPU is free and can be rescheduled.
In a multiprogrammed system, several processes are kept in memory at one time. The OS
picks and executes one of the jobs in main memory. When that job needs to wait, the CPU is
switched to another job; when the first job finishes waiting, it regains the CPU.
2.1.2 CPU – I/O Burst Cycle
Process execution consists of a cycle of CPU execution (CPU burst) and I/O wait (I/O burst).
An I/O bound process spends more of its time doing I/O than it spends doing computation. A
CPU bound process spends more of its time doing computation than an I/O bound process.
An I/O bound process would have many very short CPU bursts. A CPU bound process might
have a few very long CPU bursts.

Fig: Alternating sequence of CPU and I/O bursts

2.1.3 CPU Scheduler
The operating system must select one of the processes in the ready queue to be executed.
The selection process is carried out by the short term scheduler (or CPU scheduler).
The short term scheduler selects processes from main memory that are ready to execute and
allocates the CPU to one of them.
2.1.4 Preemptive Scheduling
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from running state to waiting state. (nonpreemptive)
2. When a process switches from running state to ready state. (preemptive)
3. When a process switches from waiting state to ready state. (preemptive)
4. When a process terminates. (nonpreemptive)
A preemptive scheduling algorithm will preempt the currently executing process, whereas a
nonpreemptive scheduling algorithm will allow the currently running process to finish its work.
2.1.5 Dispatcher
The process of assigning the CPU to a process is called dispatching.
The dispatcher is the function that is responsible for assigning the CPU to the process that has
been selected by the short term scheduler. This function involves:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the user program to restart that program.
The time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.

2.2 SCHEDULING CRITERIA


Many criteria have been used for comparing CPU scheduling algorithms. The criteria include
the following:
1. CPU utilization: It is defined as the average fraction of time during which CPU is busy. CPU
utilization may range from 0 to 100 percent. CPU utilization must be high.
2. Throughput: It is defined as the average amount of work completed per unit time. For long
processes, throughput may be 1 process per hour; for short processes, it may be 10 processes per second.
3. Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time.
4. Waiting time: It is defined as the sum of the periods spent waiting in the ready queue.
5. Response time: The interval from the time of submission of a process until the first response is
produced. i.e) the amount of time it takes to start responding.

2.3 SCHEDULING ALGORITHMS


CPU scheduling is the process of deciding which of the processes in the ready queue is to be
allocated the CPU. The various CPU scheduling algorithms are
1. First-Come, First-Served Scheduling.
2. Shortest-Job-First Scheduling.
3. Priority Scheduling.
4. Round-Robin Scheduling.
5. Multilevel Queue Scheduling.
6. Multilevel Feedback Queue Scheduling.

1. First-Come, First-Served Scheduling(FCFS)
It is the simplest CPU scheduling algorithm.
In FCFS, the process that requests the CPU first is allocated the CPU first. The FCFS
policy is easily managed with a FIFO queue.
The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a
process, that process keeps the CPU until it terminates.
If many short processes are waiting for one big process to get off the CPU, this is known as the convoy
effect. This effect results in lower CPU and device utilization.
The average waiting time under the FCFS policy is often quite long.
The FCFS algorithm is not suitable for time sharing systems, but it is suitable for batch systems.
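As an illustration (process names and burst times assumed): suppose P1, P2, and P3 arrive at
time 0 in that order, with CPU bursts of 24, 3, and 3 ms. Under FCFS the order is P1, P2, P3,
giving waiting times of 0, 24, and 27 ms, an average of (0 + 24 + 27)/3 = 17 ms. Had the order
been P2, P3, P1, the waiting times would be 0, 3, and 6 ms, an average of only 3 ms, which shows
how sensitive FCFS is to arrival order.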
2. Shortest-Job-First Scheduling(SJF) or Shortest-Next-CPU burst Scheduling
SJF algorithm allocates the CPU to the process that has the smallest CPU burst time.
If two processes have the same CPU burst time then FCFS algorithm is used.
SJF scheduling algorithm may be either preemptive or non preemptive. A preemptive SJF
algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm
will allow the currently running process to finish its work. Preemptive SJF scheduling is sometimes
called shortest-remaining-time-first scheduling.
SJF scheduling algorithm is optimal since it gives the minimum average waiting time.
The real difficulty with the SJF algorithm is knowing the length of the next CPU burst time.
The next CPU burst is generally predicted as an exponential average of the lengths of previous
CPU bursts. The exponential average formula is
τn+1 = α·tn + (1 - α)·τn, with 0 ≤ α ≤ 1
tn – length of the nth CPU burst; τn+1 – predicted value for the next CPU burst.
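For example (values assumed for illustration): with α = 1/2 and an initial estimate τ0 = 10 ms,
if the first burst actually lasts t0 = 6 ms, the next prediction is τ1 = 0.5 × 6 + 0.5 × 10 = 8 ms;
if the burst after that lasts t1 = 4 ms, then τ2 = 0.5 × 4 + 0.5 × 8 = 6 ms. The estimate thus
tracks recent burst lengths while damping sudden changes.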
3. Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal priority processes are scheduled in FCFS order.
An SJF algorithm is thus a priority algorithm in which the priority (p) is the inverse of the predicted
next CPU burst. Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4095.
Priority scheduling can be either preemptive or nonpreemptive. A preemptive priority
scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than
the priority of the currently running process. A nonpreemptive priority scheduling algorithm will simply
put the new process at the head of the ready queue.
A major problem with priority scheduling algorithm is indefinite blocking (or starvation). A
priority scheduling algorithm can leave some low priority processes waiting indefinitely for the CPU.
A solution to the indefinite blocking or starvation problem is aging. Aging is a technique of
gradually increasing the priority of processes that wait in the system for a long time.
4. Round-Robin Scheduling
It is similar to FCFS scheduling, but preemption is added to switch between processes. A small
unit of time, called a time quantum, is defined. A time quantum is generally from 10 to 100 milliseconds.
The RR scheduling algorithm is preemptive. If a process CPU burst exceeds the given time
quantum, that process is preempted and is put back in the ready queue. If the process CPU burst is
lesser than the given time quantum the process itself will release the CPU voluntarily.
The RR policy is easily managed with a FIFO queue. The average waiting time under RR policy
is often quite long. The round robin algorithm is designed especially for time sharing systems.
The time quantum must be large with respect to the context switch time. The average
turnaround time increases for a smaller time quantum.
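As an illustration (burst times assumed): consider P1 = 24 ms, P2 = 3 ms, and P3 = 3 ms arriving
at time 0, with a time quantum of 4 ms. The schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1
runs to completion. P1 waits 6 ms (from time 4 to 10), P2 waits 4 ms, and P3 waits 7 ms, so the
average waiting time is 17/3 ≈ 5.66 ms.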

5. Multilevel Queue Scheduling
When processes can be easily classified into different groups then multilevel queue scheduling
is applied. For example: foreground (or interactive) processes and background (or batch) processes.
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue.
Each queue has its own scheduling algorithm. For example: foreground queue follows RR
algorithm and background queue follows FCFS algorithm.

Fig: Multilevel Queue Scheduling


There are different ways of managing queues.
1. We can execute the higher priority process first.
2. We can assign a time slice to each queue.
6. Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue Scheduling allows a process to move between queues.
If a process uses too much CPU time, it will be moved to a lower priority queue. i.e) I/O bound
processes stay in the higher priority queues and CPU bound processes stay in the lower priority queues.
Similarly, a process that waits too long in a lower priority queue may be moved to a higher
priority queue. This form of aging prevents starvation.
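For example (a common textbook configuration): consider three queues, Q0 with a time quantum of
8 ms, Q1 with a quantum of 16 ms, and Q2 scheduled FCFS. A new process enters Q0; if it does not
finish within 8 ms, it moves to Q1, and if it still does not finish within a further 16 ms, it is
demoted to Q2. Short interactive jobs thus finish quickly in Q0, while long CPU bound jobs sink to Q2.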

Fig: Multilevel Feedback Queue Scheduling


A multilevel-feedback-queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithms for each queue
3. The method used to determine when to upgrade a process
4. The method used to determine when to demote a process
5. The method used to determine which queue a process will enter when that process needs service.

2.4 Multiple Processor scheduling
CPU scheduling is more complex when multiple CPUs are available.
In a homogeneous system, where the processors are identical, any available processor can be
used to run any processes in the queue.
In a heterogeneous system, where the processors are different, only programs compiled for a
given processor’s instruction set could be run on that processor.
If several identical processors are available, then load sharing can occur. With a separate queue
for each processor, there could be a situation where one processor could be idle, with an empty queue,
while another processor was very busy. To prevent this situation, we use a common ready queue. All
processes go into one queue and are allocated onto any available processor.
Two scheduling approaches may be used:
1. Each processor is self scheduling (symmetric multiprocessing). Each processor examines
the common ready queue and selects a process to execute.
2. One processor is appointed as scheduler for the other processors, thus creating a master-
slave structure (asymmetric multiprocessing).

2.5 Real-Time scheduling


Real-time system is divided into two types: 1. Hard real-time system 2. Soft real-time system
Hard real-time systems are required to complete a critical task within a guaranteed amount of
time. The scheduler admits the process, guaranteeing that the process will complete on time, or rejects
the request as impossible. This is known as resource reservation.
Soft real-time system is less restrictive. A critical real time task gets priority over other tasks,
and retains that priority until it completes.
Implementing soft real-time system requires
1. Priority scheduling where real-time processes have the highest priority.
2. Dispatch latency must be small.
To keep dispatch latency low, we need preemptible system calls. One method is to insert preemption
points in long duration system calls. Another method is to make the entire kernel preemptible.
When a higher priority process must wait for a lower priority process to release a resource it
needs, the situation is known as priority inversion. This problem can be solved using the
priority-inheritance protocol.
The conflict phase of dispatch latency has two components
1. Preemption of any process running in the kernel.
2. Release, by low priority processes, of resources needed by the high priority process.

Fig: Dispatch Latency

2.6 Algorithm Evaluation
To select an algorithm, we must first define the criteria. Our criteria may include several
measures such as
1. Maximize CPU utilization under the constraint that maximum response time is 1 second.
2. Maximize throughput such that turnaround time is linearly proportional to total execution time.
Once the selection criteria have been defined, algorithms can be evaluated using various methods.
2.6.1 Deterministic Modeling
This method takes a particular predetermined workload and defines the performance of the
algorithm for that workload.
For example, assume that all five processes arrive at time 0, in the order given, with the CPU
burst time given in milliseconds.
Process Burst Time
P1 10
P2 29
P3 3
P4 7
P5 12
We can apply FCFS, SJF and RR(q=2) scheduling algorithms and find out the average waiting
time for all algorithms. Since SJF gives the minimum average waiting time, it is the best algorithm.
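Working out the FCFS and SJF cases (arithmetic for the workload above): under FCFS, the waiting
times are P1 = 0, P2 = 10, P3 = 39, P4 = 42, and P5 = 49 ms, an average of 140/5 = 28 ms. Under
nonpreemptive SJF, the execution order is P3, P4, P1, P5, P2, with waiting times 0, 3, 10, 20, and
32 ms, an average of 65/5 = 13 ms; a similar (longer) tabulation can be done for RR(q=2).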
2.6.2 Queueing Models
It is an area of study that can be useful in comparing scheduling algorithms. Knowing arrival
time and burst time, we can compute CPU utilization, average queue length and average wait time.
Little’s formula can be defined as
N = λ*W
where N – average queue length
λ – average arrival rate of processes into the queue
W – average waiting time of a process in the queue.
Using Little's formula, we can compute any one of the three variables if we know the other two.
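For example (numbers assumed): if processes arrive at an average rate of λ = 7 processes per
second and each spends W = 2 seconds waiting, then on average N = 7 × 2 = 14 processes are in
the queue.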
2.6.3 Simulations
More accurate results or evaluations of scheduling algorithms can be done with simulations.
Simulation involves programming a model of the computer system.
Software data structures represent the major components of the system. When the simulation
executes, statistics that indicate algorithm performance are gathered.

Fig: Evaluation of CPU schedulers by simulation

2.7 PROCESS SYNCHRONIZATION
A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes may either directly share a logical address space(code and data) or
be allowed to share data only through files.
Concurrent access to shared data may result in data inconsistency.
A situation where several processes access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the access takes place, is called a
race condition. To guard against race conditions, we need to ensure that only one process at a time can
be manipulating the shared data. i.e) there must be synchronization and coordination between processes.
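As a simple illustration (a standard example, not taken from this document's figures): suppose two
processes both execute counter++ on a shared variable whose value is 5. The statement typically
compiles to three steps: load counter into a register, increment the register, store it back. The
interleaving load(P1), load(P2), store(P1), store(P2) leaves counter at 6 instead of 7, and the
final value depends on the order of execution; this is exactly a race condition.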

2.8 CRITICAL SECTION PROBLEM


Consider a system consisting of n processes {P0, P1, …, Pn-1}. Each process has a segment of
code, called a critical section, in which the process may be changing common variables, updating a
table, writing a file, and so on.
When one process is executing in its critical section, no other process is to be allowed to
execute in its critical section i.e) mutually exclusive.
The critical section problem is to design a protocol that the processes can use to cooperate. Each
process can request permission to enter its critical section. The section of code implementing this
request is the entry section. The critical section may be followed by an exit section. The remaining
code is the remainder section.

Fig: General structure of a typical process Pi


A solution to the critical section problem must satisfy the following three requirements
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder section can
participate in the decision on which will enter the critical section next, and this selection cannot
be postponed indefinitely.
3. Bounded Waiting. There exists a bound on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
2.8.1 Two Process Solutions
Algorithms are applicable to only two processes at a time. The processes are numbered P0 and P1.
When presenting Pi, we use Pj to denote the other process; that is, j = 1 - i.

2.8.1.1 Algorithm 1
The processes share a common integer variable turn, initialized to 0 (or 1). If turn == i, process Pi
is allowed to execute in its critical section.

Fig: The structure of process Pi in algorithm 1
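Since the figure is a placeholder here, a minimal C-style sketch of algorithm 1 (the classic
strict-alternation scheme):

    do {
        while (turn != i)
            ;                    /* busy wait until it is Pi's turn */

            critical section

        turn = j;                /* hand the turn to the other process */

            remainder section
    } while (1);

If Pj never wants to enter, Pi cannot enter twice in a row, which is why progress fails.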


Algorithm 1 satisfies mutual exclusion. However it does not satisfy the progress requirement.
2.8.1.2 Algorithm 2
In algorithm 2, the variable turn can be replaced with the array boolean flag[2] to indicate the
state of each process. The elements of the array are initialized to false. If flag[i] is true, Pi is ready to
enter the critical section.

Fig: The structure of process Pi in algorithm 2
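A minimal sketch of algorithm 2 (the flag-based attempt):

    do {
        flag[i] = true;          /* announce intent to enter */
        while (flag[j])
            ;                    /* wait while the other process is interested */

            critical section

        flag[i] = false;

            remainder section
    } while (1);

If both processes set their flags at the same time, each loops forever, which is why progress fails.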


Algorithm 2 satisfies mutual exclusion. However it does not satisfy the progress requirement.
2.8.1.3 Algorithm 3
In algorithm 3, the processes share two variables:
boolean flag[2];
int turn;
Initially flag[0]=flag[1]=false; and the value of turn is either 0 or 1.

Fig: The structure of process Pi in algorithm 3
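A sketch of algorithm 3 (Peterson's algorithm for two processes):

    do {
        flag[i] = true;          /* Pi is ready to enter */
        turn = j;                /* but politely yields the turn */
        while (flag[j] && turn == j)
            ;                    /* busy wait */

            critical section

        flag[i] = false;

            remainder section
    } while (1);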


Algorithm 3 satisfies mutual exclusion, progress and bounded waiting requirement.

2.8.2 Multiple Process Solutions
Bakery algorithm is used for solving the critical section problem for n processes.
If Pi and Pj receive the same number and i < j, then Pi is served first. i.e) the process with the
lowest name is served first.
The common variables used are boolean choosing[n]; and int number[n]; Initially, these
variables are initialized to false and 0, respectively.

Fig: The structure of process Pi in bakery algorithm
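A sketch of the bakery algorithm for process Pi, where the notation (a, b) < (c, d) means
a < c, or a == c and b < d:

    do {
        choosing[i] = true;
        number[i] = 1 + max(number[0], ..., number[n-1]);
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j])
                ;                /* wait while Pj is picking its number */
            while ((number[j] != 0) &&
                   ((number[j], j) < (number[i], i)))
                ;                /* defer to processes with smaller tickets */
        }

            critical section

        number[i] = 0;           /* give up the ticket */

            remainder section
    } while (1);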


Bakery algorithm satisfies mutual exclusion, progress and bounded waiting requirement.

2.9 SYNCHRONIZATION HARDWARE


Simple hardware instructions can be used effectively in solving the critical section problem.
These special hardware instructions are used to test and modify the content of a word, or to swap the
contents of two words atomically.
2.9.1 TestAndSet instruction

Fig: The definition of the TestAndSet instruction


If the machine supports the TestAndSet instruction, we can implement mutual exclusion by
declaring a boolean variable lock, initialized to false.

Fig: Mutual-exclusion implementation with TestAndSet instruction
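Since the two figures are placeholders here, a C-style sketch of the definition and its use
(the whole function executes atomically in hardware):

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;    /* read the old value ...                  */
        *target = true;          /* ... and set the lock, in one atomic step */
        return rv;
    }

    do {
        while (TestAndSet(&lock))
            ;                    /* busy wait until the lock was free */

            critical section

        lock = false;            /* release the lock */

            remainder section
    } while (1);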

2.9.2 Swap instruction

Fig: The definition of the Swap instruction


If the machine supports the Swap instruction, we can implement mutual exclusion by
declaring two boolean variables, lock (initialized to false) and key.

Fig: Mutual-exclusion implementation with Swap instruction
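A corresponding sketch (Swap executes atomically; each process uses a local boolean key):

    void Swap(boolean *a, boolean *b) {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

    do {
        key = true;
        while (key == true)
            Swap(&lock, &key);   /* spins until lock was observed false */

            critical section

        lock = false;

            remainder section
    } while (1);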


The above algorithms do not satisfy the bounded waiting requirement.
So a special algorithm that uses the TestAndSet instruction and satisfies all the critical section
requirements has been developed. The common data structures are boolean waiting[n]; and boolean lock;
These data structures are initialized to false.
do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(&lock);   /* atomically test and set the lock */
    waiting[i] = false;

        critical section

    j = (i + 1) % n;               /* scan for the next waiting process */
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;              /* no process waiting: free the lock */
    else
        waiting[j] = false;        /* let Pj enter next */

        remainder section
} while (1);
Fig: Bounded Waiting Mutual-exclusion with TestAndSet instruction

2.10 SEMAPHORES
Semaphore is a synchronization tool used to solve more complex critical section problems.
A semaphore S is an integer variable that is accessed only through two atomic operations:
P (wait) and V(signal).
The wait and signal operations must be executed indivisibly i.e) when one process modifies
the semaphore value, no other process can simultaneously modify that same semaphore value.
wait(S)
{
while(S<=0)
;
S--;
}
Fig: Definition of wait operation
signal(S)
{
S++;
}
Fig: Definition of signal operation
2.10.1 Usage
Semaphores can also be used to solve the n-process critical section problem. The n processes
share a semaphore mutex, initialized to 1

Fig: Mutual-exclusion implementation with semaphores
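A sketch of the n-process usage (structure of process Pi):

    do {
        wait(mutex);         /* entry section */

            critical section

        signal(mutex);       /* exit section */

            remainder section
    } while (1);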


2.10.2 Implementation
While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the entry code. It is known as busy waiting. Busy waiting wastes CPU
cycles and it is a problem in a multiprogramming system.
The semaphore that is used to handle the busy waiting problem is called a spinlock. The
advantage of a spinlock is that no context switch is required when a process must wait on a lock.
Each semaphore has an integer value and a list of processes.
struct semaphore {
    int value;
    struct process *L;
};
Fig: Definition of Semaphore
When a process must wait on a semaphore, it is added to the list of processes. The block
operation suspends the process that invokes it.

A signal operation removes one process from the list of waiting processes and awakens that
process. The wakeup(P) operation resumes the execution of a blocked process P.

If the semaphore value is negative, its magnitude is the number of processes waiting on the semaphore.
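A sketch of this blocking implementation (assuming block() suspends the calling process and
wakeup(P) resumes process P, both provided by the operating system):

    void wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            /* add this process to the list S->L */
            block();                 /* suspend the invoking process */
        }
    }

    void signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            /* remove a process P from the list S->L */
            wakeup(P);               /* resume the blocked process P */
        }
    }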
2.10.3 Deadlocks and starvation
When two or more processes are waiting indefinitely for an event that can be caused only by one
of the waiting processes, these processes are said to be deadlocked.
When many processes are waiting indefinitely within the semaphore, it is known as indefinite
blocking or starvation.
2.10.4 Binary Semaphores
A binary semaphore is a semaphore with an integer value that can range only between 0 and 1.
A counting semaphore value can range over an unrestricted domain.
The common data structures are binary-semaphore S1,S2; and int C;
Initially S1=1, S2=0 and C=initial value of counting semaphore.
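One standard construction of the counting-semaphore operations from these binary semaphores
(a sketch; S1 acts as a mutex and S2 as the queue on which processes block):

wait operation on the counting semaphore S:
    wait(S1);
    C--;
    if (C < 0) {
        signal(S1);
        wait(S2);            /* block until a signal hands over control */
    }
    signal(S1);

signal operation on the counting semaphore S:
    wait(S1);
    C++;
    if (C <= 0)
        signal(S2);          /* wake one blocked process; it releases S1 */
    else
        signal(S1);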

2.11 CLASSIC PROBLEMS OF SYNCHRONIZATION
The different synchronization problems are
1. The Bounded-Buffer Problem
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem.
1. The Bounded-Buffer Problem
Assume a pool consists of n buffers, each capable of holding one item. The mutex semaphore
provides mutual exclusion for access to the buffer pool and is initialized to the value 1.
The empty and full semaphores count the number of empty and full buffers. The semaphore
empty is initialized to the value n; the semaphore full is initialized to the value 0.

Fig: The structure of the producer process

Fig: The structure of the consumer process
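Since the two structure figures are placeholders here, a sketch of the standard semaphore solution:

Producer:
    do {
        /* produce an item in nextp */
        wait(empty);             /* wait for a free slot */
        wait(mutex);
        /* add nextp to the buffer */
        signal(mutex);
        signal(full);            /* one more full slot */
    } while (1);

Consumer:
    do {
        wait(full);              /* wait for a filled slot */
        wait(mutex);
        /* remove an item from the buffer into nextc */
        signal(mutex);
        signal(empty);           /* one more empty slot */
        /* consume the item in nextc */
    } while (1);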


2. The Readers-Writers Problem
A data object(such as file or record) is to be shared among several concurrent processes. Process
reader reads the content of the shared object. Process writer updates(read and write) the shared object.
If a writer and some other process access the shared object simultaneously, chaos may occur.
To prevent this, we require that the writers have exclusive access to the shared object. This
synchronization problem is known as the readers-writers problem.
The first readers-writers problem requires that no reader should wait for other readers to finish
simply because a writer is waiting.
The reader processes share the following data structures:
semaphore mutex,wrt;
int readcount;
Initially mutex=wrt=1 and readcount=0

The mutex semaphore is used to ensure mutual exclusion when the variable readcount is
updated. The readcount variable keeps track of how many processes are currently reading the object.
The semaphore wrt functions as a mutual-exclusion semaphore for the writers.

Fig: The structure of the reader process

Fig: The structure of the writer process
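A sketch of the classic first readers-writers solution with these semaphores:

Reader:
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);               /* the first reader locks out writers */
    signal(mutex);
        /* reading is performed */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);             /* the last reader lets writers in */
    signal(mutex);

Writer:
    wait(wrt);
        /* writing is performed */
    signal(wrt);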


3. The Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. A philosopher may pick
up only one chopstick at a time. When a hungry philosopher has both her chopsticks at the same time,
she eats without releasing them; when she is finished, she puts down both chopsticks. This is
known as the dining-philosophers problem.

One simple solution is to represent chopstick by a semaphore. A philosopher tries to grab the
chopstick by executing a wait operation on that semaphore. She releases her chopsticks by executing
the signal operation on that semaphore.
The shared data is semaphore chopstick[5]; where all the elements are initialized to 1.

Fig: The structure of Philosopher i.
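A sketch of the structure of philosopher i under this scheme:

    do {
        wait(chopstick[i]);              /* pick up the left chopstick */
        wait(chopstick[(i + 1) % 5]);    /* pick up the right chopstick */

            /* eat */

        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);

            /* think */
    } while (1);

Note that this simple scheme can deadlock: if all five philosophers become hungry at once and
each grabs her left chopstick, every wait on a right chopstick blocks forever.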

2.11 CRITICAL REGIONS
A critical region (or) conditional critical region is a language construct that allows a process
to wait till a certain condition is satisfied within a critical section without preventing other eligible
processes from accessing the shared resources.
The critical-region construct guards against certain simple errors associated with the semaphore
solution to the critical-section problem.
Assume that a process consists of some local data, and a sequential program that can operate on
the data. One process cannot directly access the local data of another process. Processes can, however,
share global data.
The variable v of type T, which is to be shared among many processes, be declared as
v: shared T;
The variable v can be accessed only inside a region statement of the following form:
region v when B do S;
When a process tries to enter the critical-section region, the Boolean expression B is evaluated.
If the expression is true, statement S is executed; if it is false, the process is delayed until B
becomes true and no other process is in the region associated with v.
The critical-region construct can be effectively used to solve general synchronization problems.
Example: Bounded Buffer Scheme

Fig: Definition of Buffer


The producer process inserts a new item nextp into the shared buffer.

Fig: The structure of the producer process


The consumer process removes an item from the shared buffer and puts it in nextc.

Fig: The structure of the consumer process
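Since the three figures are placeholders here, a sketch of the bounded-buffer scheme using the
region construct (n is the buffer size):

    struct buffer {
        item pool[n];
        int count, in, out;
    };

Producer:
    region v when (count < n) {
        pool[in] = nextp;
        in = (in + 1) % n;
        count++;
    }

Consumer:
    region v when (count > 0) {
        nextc = pool[out];
        out = (out + 1) % n;
        count--;
    }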

Implementation of the conditional-region construct
With each shared variable, the following variables are associated:
semaphore mutex, first_delay, second_delay;
int first_count, second_count;
Initially mutex=1; first_delay=second_delay=first_count=second_count=0.
wait(mutex);
while (!B) {
    first_count++;
    if (second_count > 0)
        signal(second_delay);
    else
        signal(mutex);
    wait(first_delay);
    first_count--;
    second_count++;
    if (first_count > 0)
        signal(first_delay);
    else
        signal(second_delay);
    wait(second_delay);
    second_count--;
}
S;
if (first_count > 0)
    signal(first_delay);
else if (second_count > 0)
    signal(second_delay);
else
    signal(mutex);
Mutually exclusive access to the critical section is provided by mutex. If a process cannot enter
the critical section, it initially waits on the first_delay semaphore. A process waiting on the
first_delay semaphore is moved to the second_delay semaphore before it reevaluates its Boolean
condition B. The number of processes waiting on first_delay and second_delay is stored in the
variables first_count and second_count, respectively.
2.12 MONITORS
A monitor is defined as a module or package that consists of a collection of procedures,
variables and initialization code. A monitor is characterized by a set of programmer defined operators.

Fig: Syntax of a monitor
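Since the syntax figure is only a placeholder here, a sketch of the general form (following the
classic monitor notation; the procedure names are illustrative):

    monitor monitor_name
    {
        /* shared variable declarations */

        procedure P1(...)
        {
            ...
        }

        procedure Pn(...)
        {
            ...
        }

        initialization code
    }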

Characteristics of monitors
1. Only one process may be executed in monitor at a time.
2. A process enters the monitor by invoking one of its procedures.
3. Local variables are accessible only by the monitor's procedures and not by any external procedure.

Fig: Schematic view of a monitor


A monitor supports synchronization by the use of condition variables that are accessible only
within the monitor: condition x, y;
The only operations that can be invoked on a condition variable are wait and signal.
The operation x.wait( ) suspends the process that invokes it. The x.signal( ) operation resumes
exactly one suspended process.

Fig: Monitor with condition variables


2.12.1 Monitor solution to the dining philosophers problem
A philosopher is allowed to pick up her chopsticks only if both of them are available.
The distribution of the chopsticks is controlled by the monitor dp.
monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
void pickup(int i)
{ state[i] = hungry;
test(i);
if (state[i] != eating)
self[i].wait();
}
void putdown(int i)
{ state[i] = thinking;
test((i+4) % 5);
test((i+1) % 5); }
void test(int i)
{
if((state[(i+ 4) % 5] != eating) && (state[i] == hungry) &&
(state[(i + 1) % 5] != eating ))
{ state[i] = eating;
self[i].signal();
}
}
void init( )
{
for (int i = 0; i < 5; i++)
state[i] = thinking;
}
}
Fig: A monitor solution to the dining philosophers problem
This solution ensures that no two neighbors are eating simultaneously and that no deadlocks will occur.
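Each philosopher i must invoke the operations in the following sequence (dp is the monitor
instance defined above):

    dp.pickup(i);
        /* eat */
    dp.putdown(i);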
2.12.2 Monitor Implementation Using Semaphores
For each monitor, a semaphore mutex (initialized to 1) is provided. A process must execute
wait (mutex) before entering the monitor, and must execute signal(mutex) after leaving the monitor.
Each external procedure F will be replaced by
wait(mutex);
    body of F;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
Mutual exclusion within a monitor is ensured.
2.12.3 Implementation of condition variables
The operation x.wait( ) can now be implemented as
x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;
The operation x.signal( ) can now be implemented as
if (x_count > 0)
{
next_count++;
signal(x_sem);
wait(next);
next_count--; }

A monitor can also control the allocation of a single resource among competing processes. Each
process, when requesting an allocation of the resource, specifies the maximum time it plans to use
it. The monitor allocates the resource to the process that has the shortest time-allocation request.
monitor ResourceAllocation
{
    boolean busy;
    condition x;

    void acquire(int time)
    {
        if (busy)
            x.wait(time);
        busy = true;
    }

    void release()
    {
        busy = false;
        x.signal();
    }

    void init()
    {
        busy = false;
    }
}
Fig: A monitor to allocate a single resource.
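A process needing the resource must observe the following sequence, where R is assumed to be an
instance of the ResourceAllocation monitor and t is the maximum time it plans to hold the resource:

    R.acquire(t);
        /* access the resource */
    R.release();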

2.13 DEADLOCKS
A process requests resources; if the resources are not available at that time, the process enters
a wait state. Waiting processes may never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.
2.13.1 System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes.
A process must request a resource before using it, and must release the resource after using it.
Also, the number of resources requested may not exceed the total number of resources available in the
system. i.e) a process cannot request three printers if the system has only two.
A process may utilize a resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must
wait until it can acquire the resource.
2. Use: The process can operate on the resource
3. Release: The process releases the resource.

2.13.2 Deadlock Characterization
2.13.2.1 Necessary Conditions
1. Mutual exclusion: Only one process at a time can use the resource. If another process requests
that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for
a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a
resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
2.13.2.2 Resource Allocation Graph
Deadlocks can be described using a directed graph called a resource-allocation graph. This
graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types of nodes P = {P1, P2, ..., Pn}, the set
consisting of all the active processes, and R = { R1, R2, ..., Rm}, the set consisting of all resource types.
A directed edge Pi → Rj is called a request edge; it indicates that process Pi has requested an
instance of resource type Rj and is currently waiting for that resource.
A directed edge Rj → Pi is called an assignment edge; it indicates that an instance of
resource type Rj has been allocated to process Pi.
Each process Pi is represented as a circle, and each resource type Rj is represented as a square.
Since resource type Rj may have more than one instance, each such instance is represented as a dot
within the square.

Fig: i) Resource-allocation graph; ii) resource-allocation graph with a deadlock; iii) resource-allocation graph with a cycle but no deadlock
Process states:
1. Process P1 is holding an instance of resource type R2, and is waiting for an instance of
resource type R1
2. Process P2 is holding an instance of R1 and R2, and is waiting for an instance of resource R3.
3. Process P3 is holding an instance of R3
If a resource-allocation graph does not have a cycle, then the system is not in a deadlock state.
On the other hand, if there is a cycle, then the system may or may not be in a deadlock state.

2.14 Methods for Handling Deadlocks
We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlock state.
We can allow the system to enter a deadlock state, detect it, and recover.
We can ignore the problem altogether, and pretend that deadlocks never occur in the system.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a
deadlock-avoidance scheme. Deadlock prevention is a set of methods for ensuring that at least one of
the necessary conditions cannot hold.
Deadlock avoidance requires that the operating system be given in advance
additional information concerning which resources a process will request and use during its lifetime.
With this additional knowledge, we can decide for each request whether or not the process should wait.
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm,
then the system can provide an algorithm that examines the state of the system to determine whether a
deadlock has occurred, and an algorithm to recover from the deadlock (if a deadlock has occurred).
If a system neither ensures that a deadlock will never occur nor provides a mechanism for
deadlock detection and recovery, then the system may reach a deadlock state without recognizing it.
In this case the undetected deadlock degrades performance and, eventually, the
system will stop functioning and will need to be restarted manually.
2.15 Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that
at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
2.15.1 Mutual Exclusion
The mutual-exclusion condition must hold for nonsharable resources. For example, a printer
cannot be simultaneously shared by several processes.
Sharable resources cannot be involved in a deadlock. Read-only files are a good example
of a sharable resource. A process never needs to wait for a sharable resource.
2.15.2 Hold and Wait
To ensure that the hold-and-wait condition never occurs, we must guarantee that, whenever a
process requests a resource, it does not hold any other resources.
One protocol that can be used requires each process to request and be allocated all its resources
before it begins execution.
An alternative protocol allows a process to request resources only when the process has none.
Disadvantages:
1. Resource utilization may be low, since many of the resources may be allocated but unused
for a long period.
2. Starvation is possible.
2.15.3 No Preemption
If a process is holding some resources and requests another resource that cannot be immediately
allocated to it (that is, the process must wait), then all resources currently being held are preempted.
If a process requests some resources, we first check whether they are available. If they are, we
allocate them. If they are not available, we check whether they are allocated to some other process that
is waiting for additional resources. If so, we preempt the desired resources from the waiting process and
allocate them to the requesting process. If the resources are not either available or held by a waiting
process, the requesting process must wait.

2.15.4 Circular Wait
One way to ensure that this condition never holds is to impose a total ordering of all resource
types, and to require that each process requests resources in an increasing order only.
Whenever a process requests an instance of resource type Rj, it must have released any resources Ri
such that F(Ri)>=F(Rj).
2.16 Deadlock Avoidance
The side effects of preventing deadlocks are low device utilization and reduced system
throughput. An alternative method for avoiding deadlock requires additional information about how
resources are to be requested.
The simplest and most useful model requires that each process declare the maximum number of
resources of each type that it may need.
A deadlock-avoidance algorithm ensures that the system will never enter a deadlock state. It
dynamically examines the resource-allocation state to ensure that a circular wait condition can never
exist. The resource-allocation state is defined by the number of available and allocated resources, and
the maximum demands of the processes.
2.16.1 Safe State
A state is safe if the system can allocate resources to each process in some order and still avoid
a deadlock. More formally, a system is in a safe state only if there exists a safe sequence. If no such
sequence exists, then the system state is said to be unsafe.
A safe state is not a deadlock state; conversely, a deadlock state is an unsafe state. Not all
unsafe states are deadlocks, however; an unsafe state may lead to one.
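For example (the classic tape-drive illustration): consider 12 tape drives and three processes,
where P0 may need up to 10 drives and holds 5, P1 may need 4 and holds 2, and P2 may need 9 and
holds 2, leaving 3 drives free. The sequence <P1, P0, P2> is safe: P1 can get its 2 remaining
drives and finish, returning 4 (5 free); P0 can then get its 5 and finish, returning 10; finally
P2 can finish. If P2 were instead allocated one more drive (holding 3, with 2 free), no such
sequence would exist and the state would be unsafe.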

Fig 1: Safe, unsafe, and deadlock state spaces
Fig 2: Resource-allocation graph for deadlock avoidance
Fig 3: An unsafe state in a resource-allocation graph
2.16.2 Resource-Allocation Graph Algorithm
A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the
future. This edge resembles a request edge in direction, but is represented by a dashed line.
When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge.
When a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj.
The request can be granted only if converting the request edge Pi → Rj to an assignment edge
Rj → Pi does not result in the formation of a cycle in the resource-allocation graph.
If no cycle exists, then the allocation of the resource will leave the system in a safe state.
If a cycle is found, then the allocation will put the system in an unsafe state. Therefore, process Pi will
have to wait for its requests to be satisfied.
2.16.3 Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a resource allocation system with
multiple instances of each resource type. The algorithm that is applicable to system with multiple
instances, is known as the banker's algorithm.
Several data structures must be maintained to implement the banker's algorithm. Let n be the
number of processes in the system and m be the number of resource types.

1. Available: A vector of length m indicates the number of available resources of each type.
2. Max: An n x m matrix defines the maximum demand of each process.
3. Allocation: An n x m matrix defines number of resources currently allocated to each process.
4. Need: An n x m matrix indicates the remaining resource need of each process.
Need[i,j] = Max[i,j] - Allocation[i,j].
2.16.3.1 Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i=1,2,…,n.
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish [i] == true for all i, then the system is in a safe state.
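A minimal C sketch of this safety algorithm (N, M, and the parameter layout are assumptions for
illustration; a real implementation would obtain them from its own bookkeeping):

    #include <stdbool.h>

    #define N 5   /* number of processes (assumed) */
    #define M 3   /* number of resource types (assumed) */

    /* Safety algorithm: returns true if the given state is safe. */
    bool is_safe(int available[M], int max[N][M], int allocation[N][M])
    {
        int need[N][M], work[M];
        bool finish[N] = { false };
        int done = 0;

        for (int i = 0; i < N; i++)              /* Need = Max - Allocation */
            for (int j = 0; j < M; j++)
                need[i][j] = max[i][j] - allocation[i][j];
        for (int j = 0; j < M; j++)              /* Work = Available */
            work[j] = available[j];

        while (done < N) {
            bool progress = false;
            for (int i = 0; i < N; i++) {
                if (finish[i])
                    continue;
                bool fits = true;                /* check Needi <= Work */
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) {
                        fits = false;
                        break;
                    }
                if (fits) {                      /* Pi can run to completion */
                    for (int j = 0; j < M; j++)
                        work[j] += allocation[i][j];
                    finish[i] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress)
                return false;                    /* no eligible process: unsafe */
        }
        return true;                             /* a safe sequence exists */
    }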
2.16.3.2 Resource-Request Algorithm
Let Requesti be the request vector for process Pi. If Requesti [j] = k then process Pi wants k
instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to have allocated the requested resources to Pi by modifying the state:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
Then run the safety algorithm on this new state.
If the state is safe, the resources are allocated to Pi.
If the state is unsafe, then Pi must wait and the old resource-allocation state is restored.
2.17 Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm,
then a deadlock situation may occur. The system must provide:
1. An algorithm that examines the state of system to determine whether a deadlock has occurred
2. An algorithm to recover from the deadlock
2.17.1 Single Instance of Each Resource Type
If all resources have only a single instance, then a wait-for graph can be obtained from the
resource-allocation graph by removing the nodes of type resource and collapsing the appropriate edges.
A deadlock exists if and only if the wait-for graph contains a cycle. To detect deadlocks, system
has to maintain wait-for graph and periodically use an algorithm that searches for a cycle in the graph.

Fig: a) Resource-allocation graph b)Corresponding wait-for-graph

2.17.2 Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system with multiple
instances of each resource type. The deadlock detection algorithm employs several data structures.
1. Available: A vector of length m indicates the number of available resources of each type.
2. Allocation: An n x m matrix defines the number of resources currently allocated to each process
3. Request: An n x m matrix indicates the current request of each process.
Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false;
otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if
Finish[i] == false, then Pi is deadlocked.
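As a small example (numbers constructed for illustration): with one resource type holding 2
instances and processes P1 and P2 each allocated 1 instance, Available = [0]. If Request1 = [1]
and Request2 = [0], the algorithm first finishes P2 (its request fits in Work = [0]), reclaims its
instance so Work = [1], and then finishes P1; there is no deadlock. If instead Request2 = [1],
neither request fits in Work = [0], both Finish values stay false, and P1 and P2 are deadlocked.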

2.18 Recovery from Deadlock


When a detection algorithm determines that a deadlock exists, there are two options for
breaking a deadlock. One solution is simply to abort one or more processes to break the circular wait.
The second option is to preempt some resources from one or more of the deadlocked processes.
2.18.1 Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods.
Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great
expense; the results of these partial computations must be discarded and probably recomputed later.
Abort one process at a time until the deadlock cycle is eliminated: After each process is aborted,
a deadlock-detection algorithm must be invoked to determine whether any processes are deadlocked.
2.18.2 Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources
from processes and give these resources to other processes until the deadlock cycle is broken.
Issues related to preemption
1. Selecting a victim: Selecting the resources and processes to be preempted.
2. Rollback: If we preempt a resource from a process, we must roll back the process to some safe state,
and restart it from that state. This method requires the system to keep more information about the state
of all the running processes.
3. Starvation: It may happen that the same process is always picked as a victim. As a result, this
process never completes its designated task and starvation occurs. Clearly, we must ensure
that a process can be picked as a victim only a (small) finite number of times.
