OS unit 3

CPU scheduling is a critical function of operating systems that determines which process uses the CPU at any given time, aiming to maximize utilization and minimize response and waiting times. Various algorithms exist, including FCFS, SJF, and Round Robin, each with unique characteristics and performance metrics. Effective scheduling is essential for efficient resource utilization in multiprogramming environments.

1. CPU Scheduling in Operating Systems


CPU scheduling is a process used by the operating system to decide which task or process gets to use the CPU at a particular time. This is important because a CPU can only handle one task at a time, but there are usually many tasks that need to be processed. CPU scheduling serves the following purposes:

• Maximize the CPU utilization


• Minimize the response and waiting time of the process.
What is the Need for a CPU Scheduling Algorithm?

CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU is idle, the OS selects one of the processes available in the ready queue to run.

In multiprogramming, if the long-term scheduler selects multiple I/O-bound processes, then most of the time the CPU remains idle. The function of an effective scheduler is to improve resource utilization.

Terminologies Used in CPU Scheduling

• Arrival Time: The time at which the process arrives in the ready queue.
• Completion Time: The time at which the process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time

• Waiting Time (W.T): Time Difference between turnaround time and burst time.

Waiting Time = Turn Around Time – Burst Time
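
To make the two formulas concrete, here is a minimal Python sketch (the process data is hypothetical, chosen only for illustration) that applies them once arrival, burst, and completion times are known:

# Applying the definitions above: TAT = CT - AT, WT = TAT - BT.
# The numbers are hypothetical and correspond to a simple FCFS run.
processes = [
    # (name, arrival_time, burst_time, completion_time)
    ("P1", 0, 4, 4),
    ("P2", 1, 3, 7),
    ("P3", 2, 1, 8),
]
for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # Turn Around Time
    waiting = turnaround - burst        # Waiting Time
    print(f"{name}: TAT = {turnaround}, WT = {waiting}")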

Things to Take Care While Designing a CPU Scheduling Algorithm

Different CPU Scheduling algorithms have different structures and the choice of a particular algorithm
depends on a variety of factors.

• CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real-time system it varies from 40 to 90 percent depending on the system load.

• Throughput: A measure of the work done by the CPU is the number of processes executed and completed per unit of time. This is called throughput. The throughput may vary depending on the length or duration of the processes.

• Turn Around Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.

• Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of the process, i.e. the time the process spends waiting in the ready queue.

• Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are output to the user. Therefore, another measure is the time from the submission of a request until the first response is produced. This measure is called response time.

Different Types of CPU Scheduling Algorithms

There are mainly two types of scheduling methods:

• Preemptive Scheduling: Preemptive scheduling is used when a process switches from running
state to ready state or from the waiting state to the ready state.

• Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.


Please refer Preemptive vs Non-Preemptive Scheduling for details.

CPU Scheduling Algorithms

Let us now learn about these CPU scheduling algorithms in operating systems one by one:

• FCFS – First Come, First Serve


• SJF – Shortest Job First
• SRTF – Shortest Remaining Time First
• Round Robin
• Priority Scheduling
• HRRN – Highest Response Ratio Next
• Multiple Queue Scheduling
• Multilevel Feedback Queue Scheduling
Comparison of CPU Scheduling Algorithms

Here is a brief comparison between different CPU scheduling algorithms:


• FCFS: Allocation is according to the arrival time of the processes. Complexity: simple and easy to implement. Average waiting time (AWT): large. Preemption: no. Starvation: no. Performance: slow.

• SJF: Allocation is based on the lowest CPU burst time (BT). Complexity: more complex than FCFS. AWT: smaller than FCFS. Preemption: no. Starvation: yes. Performance: minimum average waiting time.

• SRTF: Same as SJF, allocation is based on the lowest CPU burst time, but it is preemptive. Complexity: more complex than FCFS. AWT: depends on measures such as arrival time and process size. Preemption: yes. Starvation: yes. Performance: preference is given to short jobs.

• RR: Allocation is according to the order in which processes arrive, with a fixed time quantum (TQ). Complexity: depends on the time quantum size. AWT: large compared to SJF and priority scheduling. Preemption: yes. Starvation: no. Performance: each process gets a fairly fixed time slice.

• Priority (preemptive): Allocation is according to priority; the higher-priority task executes first. Complexity: less complex. AWT: smaller than FCFS. Preemption: yes. Starvation: yes. Performance: good, but has a starvation problem.

• Priority (non-preemptive): Allocation is according to priority, while monitoring newly incoming higher-priority jobs. Complexity: less complex than preemptive priority. AWT: smaller than FCFS. Preemption: no. Starvation: yes. Performance: most beneficial with batch systems.

• MLQ: Allocation is according to the priority of the queue in which the process resides. Complexity: more complex than the priority scheduling algorithms. AWT: smaller than FCFS. Preemption: no. Starvation: yes. Performance: good, but has a starvation problem.

• MFLQ: Allocation is according to the process in the higher-priority queue. Complexity: the most complex, though the complexity rate depends on the TQ size. AWT: smaller than all other scheduling types in many cases. Preemption: no. Starvation: no. Performance: good.
Questions for Practice

Question: Which of the following is false about SJF?

S1: It causes minimum average waiting time

S2: It can cause starvation


(A) Only S1
(B) Only S2
(C) Both S1 and S2
(D) Neither S1 nor S2
Answer: (D). S1 is true: SJF always gives the minimum average waiting time. S2 is also true: SJF can cause starvation.

Question: Consider the following table of arrival time and burst time for three processes P0, P1 and P2.

PROCESS ARRIVAL TIME BURST TIME


P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival
or completion of processes. What is the average waiting time for the three processes?

(A) 5.0 ms
(B) 4.33 ms
(C) 6.33
(D) 7.33
Solution: (A)
Process P0 is allocated the processor at 0 ms, as there is no other process in the ready queue. P0 is preempted after 1 ms, because P1 arrives at 1 ms and its burst time (4 ms) is less than the remaining time of P0 (8 ms). P1 runs for 4 ms. P2 arrived at 2 ms, but P1 continued because P2's burst time (9 ms) is longer than P1's remaining time. After P1 completes, P0 is scheduled again, since its remaining time (8 ms) is less than the burst time of P2. P0 waits for 4 ms, P1 waits for 0 ms, and P2 waits for 11 ms. So the average waiting time is (4 + 0 + 11)/3 = 5 ms.
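
This reasoning can be checked mechanically. Below is a small preemptive-SJF (SRTF) simulator in Python; it is a sketch that re-evaluates the scheduling decision at every millisecond, which for this workload gives the same result as scheduling only at arrivals and completions. Running it on the table above reproduces the waiting times 4, 0, and 11 ms and the 5.0 ms average, and the same function reproduces the answers to the two SRTF questions that follow.

# Sketch of a preemptive SJF (SRTF) simulator.
def srtf_waiting_times(procs):  # procs: {name: (arrival, burst)}
    remaining = {p: b for p, (a, b) in procs.items()}
    completion, t = {}, 0
    while remaining:
        # Processes that have arrived and still need CPU time.
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:           # CPU idles until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining time
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            completion[p] = t
            del remaining[p]
    return {p: completion[p] - a - b for p, (a, b) in procs.items()}

wt = srtf_waiting_times({"P0": (0, 9), "P1": (1, 4), "P2": (2, 9)})
print(wt, sum(wt.values()) / len(wt))  # {'P0': 4, 'P1': 0, 'P2': 11} 5.0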

Question: Consider the following set of processes, with the arrival times and the CPU-burst times given
in milliseconds.
PROCESS ARRIVAL TIME BURST TIME
P1 0 ms 5 ms
P2 1 ms 3 ms
P3 2 ms 3 ms
P4 4 ms 1 ms
What is the average turnaround time for these processes with the preemptive Shortest Remaining
Processing Time First algorithm ?

(A) 5.50
(B) 5.75
(C) 6.00
(D) 6.25
Answer: (A)
Solution: The Gantt chart of execution is as follows (each entry shows the process and the interval during which it runs):

P1 (0-1) | P2 (1-4) | P4 (4-5) | P3 (5-8) | P1 (8-12)

Turn Around Time = Completion Time – Arrival Time. Average Turn Around Time = (12 + 3 + 6 + 1)/4 = 5.50

Question: An operating system uses the Shortest Remaining Time First (SRTF) process scheduling
algorithm. Consider the arrival times and execution times for the following processes:

Process Burst Time Arrival Time


P1 20 ms 0 ms
P2 25 ms 15 ms
P3 10 ms 30 ms
P4 15 ms 45 ms
What is the total waiting time for process P2?

(A) 5
(B) 15
(C) 40
(D) 55
Answer (B)
Solution: At time 0, P1 is the only process; P1 runs for 15 time units. At time 15, P2 arrives, but P1 has the shortest remaining time, so P1 continues for 5 more time units. At time 20, P2 is the only process, so it runs for 10 time units. At time 30, P3 arrives and is the shortest-remaining-time process, so it runs for 10 time units. At time 40, P2 runs as it is the only process; P2 runs for 5 time units. At time 45, P4 arrives, but P2 has the shortest remaining time, so P2 continues for 10 more time units. P2 completes its execution at time 55.

Total waiting time for P2


= Completion time – (Arrival time + Execution time)
= 55 – (15 + 25)
= 15

2. Operating System Scheduling algorithms


A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss
in this chapter −

• First-Come, First-Served (FCFS) Scheduling


• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority
running process anytime when a high priority process enters into a ready state.

First Come First Serve (FCFS)


• Jobs are executed on first come, first serve basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on FIFO queue.
• Poor in performance as average wait time is high.

For the process set used throughout this section (P0-P3 arriving at times 0, 1, 2, 3 with burst times 5, 3, 8, 6), the wait time of each process is as follows –

PROCESS WAIT TIME : SERVICE TIME - ARRIVAL TIME


P0 0-0=0
P1 5-1=4
P2 8-2=6
P3 16 - 3 = 13
Average Wait Time: (0+4+6+13) / 4 = 5.75
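
As a sketch, this FCFS example can be reproduced in a few lines of Python; the arrival times (0, 1, 2, 3) and burst times (5, 3, 8, 6) are inferred from the service times shown above and match the process table used in the next section.

# FCFS: processes run in arrival order; wait = service time - arrival time.
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
t, total_wait = 0, 0
for name, arrival, burst in procs:
    service = max(t, arrival)      # CPU may sit idle until the arrival
    wait = service - arrival
    total_wait += wait
    print(f"{name}: {service} - {arrival} = {wait}")
    t = service + burst
print("Average Wait Time:", total_wait / len(procs))  # 5.75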

Shortest Job Next (SJN)

• This is also known as shortest job first, or SJF


• This is a non-preemptive scheduling algorithm (its preemptive version is Shortest Remaining Time).
• Best approach to minimize waiting time.
• Easy to implement in Batch systems where required CPU time is known in advance.
• Impossible to implement in interactive systems where required CPU time is not known.
• The processor should know in advance how much time a process will take.
Given: Table of processes, and their Arrival time, Execution time

Process Arrival Time Execution Time Service Time


P0 0 5 0
P1 1 3 5
P2 2 8 14
P3 3 6 8
Waiting time of each process is as follows −

Process Waiting Time


P0 0-0=0
P1 5-1=4
P2 14 - 2 = 12
P3 8-3=5
Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25

Priority Based Scheduling

• Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.

• Each process is assigned a priority. Process with highest priority is to be executed first and so on.

• Processes with same priority are executed on first come first served basis.

• Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Given: Table of processes with their arrival time, execution time, and priority. Here we consider 1 to be the lowest priority.

Process Arrival Time Execution Time Priority Service Time


P0 0 5 1 0
P1 1 3 2 11
P2 2 8 1 14
P3 3 6 3 5
Waiting time of each process is as follows −

Process Waiting Time


P0 0-0=0
P1 11 - 1 = 10
P2 14 - 2 = 12
P3 5-3=2
Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
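
A short Python sketch of this non-preemptive priority discipline reproduces the service times in the table above (recall that here a bigger number means higher priority):

# Non-preemptive priority scheduling; ties go to the earlier arrival.
procs = {"P0": (0, 5, 1), "P1": (1, 3, 2), "P2": (2, 8, 1), "P3": (3, 6, 3)}
t, pending, service = 0, dict(procs), {}
while pending:
    ready = {p: v for p, v in pending.items() if v[0] <= t}
    if not ready:                  # nothing has arrived yet
        t += 1
        continue
    # Pick the highest priority; earlier arrival wins on a tie.
    p = max(ready, key=lambda q: (ready[q][2], -ready[q][0]))
    service[p] = t                 # process starts and runs to completion
    t += pending.pop(p)[1]
print(service)  # {'P0': 0, 'P3': 5, 'P1': 11, 'P2': 14}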

Shortest Remaining Time

• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
• Impossible to implement in interactive systems where required CPU time is not known.
• It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling

• Round Robin is a preemptive process scheduling algorithm.


• Each process is provided a fixed time to execute, called a quantum.
• Once a process is executed for a given time period, it is preempted and other process
executes for a given time period.
• Context switching is used to save states of preempted processes.
With a time quantum of 3 and the same process set as in the previous examples, the wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time


P0 (0 - 0) + (12 - 3) = 9
P1 (3 - 1) = 2
P2 (6 - 2) + (14 - 9) + (20 - 17) = 12
P3 (9 - 3) + (17 - 12) = 11
Average Wait Time: (9+2+12+11) / 4 = 8.5
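
The wait times above imply a time quantum of 3 (the quantum is not stated in the text, so treat it as an inferred assumption) applied to the same four processes. A minimal Round Robin sketch in Python reproduces the 8.5 average:

from collections import deque

# Round Robin with time quantum 3 on the process set used above.
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}
quantum, t = 3, 0
remaining = {p: b for p, (a, b) in procs.items()}
arrived, ready, completion = set(), deque(), {}

def admit(now):
    # Newly arrived processes join the back of the ready queue.
    for p, (a, _) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        if a <= now and p not in arrived:
            arrived.add(p)
            ready.append(p)

admit(t)
while ready:
    p = ready.popleft()
    run = min(quantum, remaining[p])
    t += run
    remaining[p] -= run
    admit(t)                 # arrivals during the slice queue up first
    if remaining[p] == 0:
        completion[p] = t
    else:
        ready.append(p)      # the preempted process goes to the back

waits = {p: completion[p] - a - b for p, (a, b) in procs.items()}
print(waits, sum(waits.values()) / len(waits))  # average 8.5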

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.

• Multiple queues are maintained for processes with common characteristics.

• Each queue can have its own scheduling algorithms.

• Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based
on the algorithm assigned to the queue.
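
As a rough sketch (the queue names and the simple alternation policy are illustrative assumptions, not from the text), a multilevel queue can be modeled as independent queues drained by a scheduler that alternates between them:

from collections import deque

# Two queues of jobs with common characteristics; the scheduler
# alternates between queues, each applying its own (here FCFS) policy.
cpu_bound = deque(["compile", "render"])   # hypothetical CPU-bound jobs
io_bound = deque(["editor", "logger"])     # hypothetical I/O-bound jobs
queues = [cpu_bound, io_bound]
i = 0
while any(queues):
    q = queues[i % len(queues)]
    if q:
        print("dispatch:", q.popleft())
    i += 1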
3. Operating System - Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource cannot be taken from a process until the process completes execution. The switching of resources occurs when the running process terminates and moves to a waiting state.

2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process switches from running state to ready state or from waiting state to ready state. This switching occurs because the CPU may give priority to other processes and replace the currently running process with a higher-priority one.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

• Job queue − This queue keeps all the processes in the system.

• Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to
execute. A new process is always put in this queue.

• Device queues − The processes which are blocked due to unavailability of an I/O device constitute this
queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues; the run queue can have only one entry per processor core on the system.

Two-State Process Model

Two-state process model refers to running and non-running states which are described below −

S.N. STATE & DESCRIPTION


1 Running
The process is currently being executed on the CPU.
2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or been aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −

• Long-Term Scheduler

• Short-Term Scheduler

• Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, making them available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into use when a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It handles the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among the processes that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from main memory and thus reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In this condition, to remove the process from memory and make space
for other processes, the suspended process is moved to the secondary storage. This process is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.

Comparison among Scheduler

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
Context Switching

Context switching is the mechanism of storing and restoring the state or context of a CPU in the Process Control Block, so that a process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use.

• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
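
The save/restore cycle can be sketched in Python; the field names and values below are illustrative, not those of any real kernel:

# Illustrative context switch: save the running process's CPU state
# into its PCB, then load the next process's state from its PCB.
cpu = {"pc": 0, "registers": [0, 0, 0, 0]}

def context_switch(old_pcb, new_pcb):
    old_pcb["pc"] = cpu["pc"]                    # save program counter
    old_pcb["registers"] = cpu["registers"][:]   # save register contents
    cpu["pc"] = new_pcb["pc"]                    # restore the next process
    cpu["registers"] = new_pcb["registers"][:]

pcb_a = {"pc": 0x4000, "registers": [1, 2, 3, 4]}
pcb_b = {"pc": 0x8000, "registers": [9, 8, 7, 6]}
cpu.update(pc=pcb_a["pc"], registers=pcb_a["registers"][:])  # A is running
context_switch(pcb_a, pcb_b)                     # A preempted, B resumes
print(hex(cpu["pc"]))                            # 0x8000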
4. CPU Scheduling Criteria
CPU scheduling is essential for the system's performance and ensures that processes are executed correctly and on time. Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm depends on various factors. Many criteria have been suggested for comparing CPU scheduling algorithms.

What is CPU scheduling?

CPU scheduling is a process that allows one process to use the CPU while another process is delayed due to the unavailability of a resource such as I/O, thus making full use of the CPU. In short, CPU scheduling decides the order and priority in which processes run and allocates CPU time based on various parameters such as CPU usage, throughput, turnaround time, waiting time, and response time. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.

Criteria of CPU Scheduling

CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are essential metrics used
to evaluate the efficiency of scheduling algorithms.

CPU scheduling has several criteria; some of them are mentioned below.

1. CPU utilization

The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically,
CPU utilization can range from 0 to 100 but in a real-time system, it varies from 40 to 90 percent depending
on the load upon the system.

2. Throughput

A measure of the work done by the CPU is the number of processes being executed and completed per
unit of time. This is called throughput. The throughput may vary depending on the length or duration of
the processes.

3. Turnaround Time

For a particular process, an important criterion is how long it takes to execute that process. The time
elapsed from the time of submission of a process to the time of completion is known as the turnaround
time. Turn-around time is the sum of times spent waiting to get into memory, waiting in the ready queue,
executing in CPU, and waiting for I/O.

Turn Around Time = Completion Time – Arrival Time.

4. Waiting Time

A scheduling algorithm does not affect the time required to complete the process once it starts execution.
It only affects the waiting time of a process i.e. time spent by a process waiting in the ready queue.

Waiting Time = Turnaround Time – Burst Time.

5. Response Time

In an interactive system, turn-around time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus, another criterion is the time taken from the submission of a request until the first response is produced. This measure is called response time.

Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time

6. Completion Time

The completion time is the time when the process stops executing, which means that the process has
completed its burst time and is completely executed.

7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor the higher-
priority processes.

8. Predictability

A given process should always run in about the same amount of time under a similar system load.

Importance of Selecting the Right CPU Scheduling Algorithm for Specific Situations

It is important to choose the correct CPU scheduling algorithm because different algorithms have different
priorities for different CPU scheduling criteria. Different algorithms have different strengths and
weaknesses. Choosing the wrong CPU scheduling algorithm in a given situation can result in suboptimal
performance of the system.

Example: Here are some examples of CPU scheduling algorithms that work well in different situations.

Round Robin scheduling algorithm works well in a time-sharing system where tasks have to be completed
in a short period of time. SJF scheduling algorithm works best in a batch processing system where shorter
jobs have to be completed first in order to increase throughput. Priority scheduling algorithm works better
in a real-time system where certain tasks have to be prioritized so that they can be completed in a timely
manner.

Factors Influencing CPU Scheduling Algorithms

There are many factors that influence the choice of CPU scheduling algorithm. Some of them are listed
below.

• The number of processes.


• The processing time required.
• The urgency of tasks.
• The system requirements.
Selecting the correct algorithm will ensure that the system will use system resources efficiently, increase
productivity, and improve user satisfaction.

CPU Scheduling Algorithms

There are several CPU Scheduling Algorithms, that are listed below.

• First Come First Served (FCFS)


• Shortest Job First (SJF)
• Longest Job First (LJF)
• Priority Scheduling
• Round Robin (RR)
• Shortest Remaining Time First (SRTF)
• Longest Remaining Time First (LRTF)
Conclusion
In conclusion, CPU scheduling criteria play an important role in improving system performance. CPU scheduling techniques encourage efficient use of system resources and effective task processing by analysing and prioritising criteria such as CPU utilization, throughput, turnaround time, waiting time, and response time. Selecting the appropriate algorithm for a given situation is crucial for increasing system efficiency and productivity.

Question for Practice

Which of the following process scheduling algorithm may lead to starvation?

(A) FIFO
(B) Round Robin
(C) Shortest Job Next
(D) None of the above
Correct option is (C)

Explanation: Shortest job next may lead to process starvation for processes which will require
a long time to complete if short processes are continually added.
5. States of a Process in Operating Systems
In an operating system, a process is a program that is being executed. During its execution, a process goes
through different states. Understanding these states helps us see how the operating system manages
processes, ensuring that the computer runs efficiently. Please refer Process in Operating System to
understand more details about processes.

Each process goes through several stages throughout its life cycle. In this article, we discuss the different states of a process in detail.

Process Lifecycle

When you run a program (which becomes a process), it goes through different phases before its completion. These phases, or states, can vary depending on the operating system, but the most common process lifecycle models include two, five, or seven states. Here's a simple explanation of these states:

The Two-State Model

The simplest way to think about a process’s lifecycle is with just two states:

1. Running: This means the process is actively using the CPU to do its work.

2. Not Running: This means the process is not currently using the CPU. It could be waiting for
something, like user input or data, or it might just be paused.


When a new process is created, it starts in the not running state and waits in a queue. A program called the dispatcher decides when it gets the CPU.

Here’s what happens step by step:

1. Not Running State: When the process is first created, it is not using the CPU.
2. Dispatcher Role: The dispatcher checks if the CPU is free (available for use).
3. Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU,
and it moves into the running state.
4. CPU Scheduler Role: When the CPU is available, the CPU scheduler decides which process
gets to run next. It picks the process based on a set of rules called the scheduling scheme,
which varies from one operating system to another.
The Five-State Model

The five-state process lifecycle is an expanded version of the two-state model. The two-state model works
well when all processes in the not running state are ready to run. However, in some operating systems, a
process may not be able to run because it is waiting for something, like input or data from an external
device. To handle this situation better, the not running state is divided into two separate states:


Here’s a simple explanation of the five-state process model:

• New: This state represents a newly created process that hasn’t started running yet. It has not been
loaded into the main memory, but its process control block (PCB) has been created, which holds
important information about the process.

• Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting
for the operating system to give it a chance to execute.

• Running: This state means the process is currently being executed by the CPU. Since we’re
assuming there is only one CPU, at any time, only one process can be in this state.

• Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting
for some event to happen, like the completion of an input/output operation (for example, reading
data from a disk).

• Exit/Terminate: A process in this state has finished its execution or has been stopped by the user
for some reason. At this point, it is released by the operating system and removed from memory.

The Seven-State Model


The states of a process are as follows:

• New State: In this step, the process is about to be created but not yet created. It is the program
that is present in secondary memory that will be picked up by the OS to create the process.

• Ready State: New -> Ready to run. After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory. The process here is ready to run and is
waiting to get the CPU time for its execution. Processes that are ready for execution by the CPU
are maintained in a queue called a ready queue for ready processes.

• Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available processors.

• Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs access to a critical region (the lock for which is already acquired), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes to the ready state.

• Terminated or Completed State: The process is killed and its PCB is deleted. The resources allocated to the process are released or deallocated.

• Suspend Ready: A process that was initially in the ready state but was swapped out of main memory (refer to the Virtual Memory topic) and placed onto external storage by the scheduler is said to be in the suspend ready state. The process transitions back to the ready state whenever it is brought into main memory again.

• Suspend Wait or Suspend Blocked: Similar to suspend ready, but for a process that was performing an I/O operation when a shortage of main memory caused it to be moved to secondary memory. When its work is finished, it may go to the suspend ready state.
• CPU and I/O Bound Processes: If the process is intensive in terms of CPU operations, then it is
called CPU bound process. Similarly, If the process is intensive in terms of I/O operations then it is
called I/O bound process.

How Does a Process Move From One State to Other State?

A process can move between different states in an operating system based on its execution status and
resource availability. Here are some examples of how a process can move between different states:

• New to Ready: When a process is created, it is in a new state. It moves to the ready state
when the operating system has allocated resources to it and it is ready to be executed.
• Ready to Running: When the CPU becomes available, the operating system selects a process
from the ready queue depending on various scheduling algorithms and moves it to the
running state.
• Running to Blocked: When a process needs to wait for an event to occur (I/O operation
or system call), it moves to the blocked state. For example, if a process needs to wait for user
input, it moves to the blocked state until the user provides the input.
• Running to Ready: When a running process is preempted by the operating system, it moves
to the ready state. For example, if a higher-priority process becomes ready, the operating
system may preempt the running process and move it to the ready state.
• Blocked to Ready: When the event a blocked process was waiting for occurs, the process
moves to the ready state. For example, if a process was waiting for user input and the input
is provided, it moves to the ready state.
• Running to Terminated: When a process completes its execution or is terminated by the
operating system, it moves to the terminated state.
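
The legal moves listed above can be captured in a small transition table. The sketch below (state names follow the five-state model; the move helper is hypothetical) rejects any transition not in the list:

# The transitions described above, encoded as a lookup table.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "blocked", "terminated"},
    "blocked": {"ready"},
    "terminated": set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = move("new", "ready")      # admitted by the OS
s = move(s, "running")        # dispatched to the CPU
s = move(s, "blocked")        # waits for I/O
s = move(s, "ready")          # I/O completes
s = move(s, "running")
s = move(s, "terminated")     # finishes execution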
Types of Schedulers

• Long-Term Scheduler: Decides how many processes should be made to stay in the ready
state. This decides the degree of multiprogramming. Once a decision is taken it lasts for a long
time which also indicates that it runs infrequently. Hence it is called a long-term scheduler.
• Short-Term Scheduler: The short-term scheduler decides which process is to be executed next and then calls the dispatcher. A dispatcher is the software that moves a process from ready to running and vice versa; in other words, it performs the context switch. It runs frequently. The short-term scheduler is also called the CPU scheduler.
• Medium-Term Scheduler: The suspension decision is taken by the medium-term scheduler. The medium-term scheduler is used for swapping, i.e., moving a process from main memory to secondary memory and vice versa. Swapping is done to reduce the degree of multiprogramming.
Multiprogramming

We have many processes ready to run. There are two types of multiprogramming:

• Preemption: Process is forcefully removed from CPU. Pre-emption is also called time
sharing or multitasking.
• Non-Preemption: Processes are not removed until they complete their execution. Once control of the CPU is given to a process, it cannot be taken back forcibly until the process releases it.
Degree of Multiprogramming

The maximum number of processes that can reside in the ready state decides the degree of multiprogramming, e.g., if the degree of multiprogramming = 100, then at most 100 processes can reside in the ready state.

Operation on The Process

• Creation: The process will be ready once it has been created, enter the ready queue (main memory),
and be prepared for execution.
• Scheduling: The operating system picks one process to begin executing from among the numerous processes that are in the ready queue. Scheduling is the process of choosing the next process to run.
• Execution: The processor begins running the process as soon as it is scheduled to run. During execution, a process may become blocked or wait, at which point the processor switches to executing other processes.
• Killing or Deletion: The OS will terminate the process once its purpose has been fulfilled; the process's context is then deleted.
• Blocking: When a process is waiting for an event or resource, it is blocked. The operating system will
place it in a blocked state, and it will not be able to execute until the event or resource becomes
available.
• Resumption: When the event or resource that caused a process to block becomes available, the
process is removed from the blocked state and added back to the ready queue.
• Context Switching: When the operating system switches from executing one process to another, it
must save the current process’s context and load the context of the next process to execute. This is
known as context switching.
• Inter-Process Communication: Processes may need to communicate with each other to share data or
coordinate actions. The operating system provides mechanisms for inter-process communication, such
as shared memory, message passing, and synchronization primitives.
• Process Synchronization: Multiple processes may need to access a shared resource or critical section
of code simultaneously. The operating system provides synchronization mechanisms to ensure that only
one process can access the resource or critical section at a time.
• Process States: Processes may be in one of several states, including ready, running, waiting, and
terminated. The operating system manages the process states and transitions between them.
Features of The Process State

• A process can move from the running state to the waiting state if it needs to wait for a
resource to become available.
• A process can move from the waiting state to the ready state when the resource it was
waiting for becomes available.
• A process can move from the ready state to the running state when it is selected by the
operating system for execution.
• The scheduling algorithm used by the operating system determines which process is
selected to execute from the ready state.
• The operating system may also move a process from the running state to the ready state
to allow other processes to execute.
• A process can move from the running state to the terminated state when it completes its
execution.
• A process can move from the waiting state directly to the terminated state if it is aborted
or killed by the operating system or another process.
• A process can go through the ready, running, and waiting states any number of times in its lifecycle, but the new and terminated states occur only once.
• The process state includes information about the program counter, CPU registers, memory
allocation, and other resources used by the process.
• The operating system maintains a process control block (PCB) for each process, which
contains information about the process state, priority, scheduling information, and other
process-related data.
• The process state diagram is used to represent the transitions between different states of
a process and is an essential concept in process management in operating systems.
Conclusion

In conclusion, understanding the states of a process in an operating system is essential for comprehending
how the system efficiently manages multiple processes. These states—new, ready, running, waiting, and
terminated—represent different stages in a process’s life cycle. By transitioning through these states, the
operating system ensures that processes are executed smoothly, resources are allocated effectively, and
the overall performance of the computer is optimized. This knowledge helps us appreciate the complexity
and efficiency behind the scenes of modern computing.
6. Process Transition Diagram

The process transition diagram depicts the states described in the previous section (new, ready, running, blocked/waiting, terminated, and the suspended variants) and the legal transitions between them; see the five-state and seven-state models above.
7. Process Schedulers in Operating System
A process is the instance of a computer program in execution.

• Scheduling is important in operating systems with multiprogramming as multiple


processes might be eligible for running at a time.
• One of the key responsibilities of an Operating System (OS) is to decide which programs
will execute on the CPU.
• Process Schedulers are fundamental components of operating systems responsible for
deciding the order in which processes are executed by the CPU. In simpler terms, they
manage how the CPU allocates its time among multiple tasks or processes that are
competing for its attention.
What is Process Scheduling?

Process scheduling is the activity of the process manager that handles the removal of the running process
from the CPU and the selection of another process based on a particular strategy. Throughout its lifetime,
a process moves between various scheduling queues, such as the ready queue, waiting queue, or devices
queue.


Categories of Scheduling

Scheduling falls into one of two categories:


• Non-Preemptive: In this case, a process’s resource cannot be taken before the process has
finished running. When a running process finishes and transitions to a waiting state, resources
are switched.
• Preemptive: In this case, the OS can switch a process from running state to ready state. This
switching happens because the CPU may give other processes priority and substitute the
currently active process for the higher priority process.
Please refer Preemptive vs Non-Preemptive Scheduling for details.

Types of Process Schedulers

There are three types of process schedulers:

1. Long Term or Job Scheduler

The Long-Term Scheduler loads a process from disk into main memory for execution, moving the new process to the 'Ready State'.

• It mainly moves processes from Job Queue to Ready Queue.


• It controls the Degree of Multi-programming, i.e., the number of processes present in a
ready state or in main memory at any point in time.
• It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two.
• In some systems, the long-term scheduler might not even exist. For example, in time-
sharing systems like Microsoft Windows, there is usually no long-term scheduler. Instead,
every new process is directly added to memory for the short-term scheduler to handle.
• Slowest among the three (that is why called long term).
2. Short-Term or CPU Scheduler

CPU Scheduler is responsible for selecting one process from the ready state for running (or assigning CPU
to it).

• STS (Short Term Scheduler) must select a new process for the CPU frequently to avoid
starvation.
• The CPU scheduler uses different scheduling algorithms to balance the allocation of CPU
time.
• It picks a process from ready queue.
• Its main objective is to make the best use of CPU.
• It mainly calls dispatcher.
• Fastest among the three (that is why called Short Term).
The dispatcher is responsible for loading the process selected by the Short-term scheduler on the CPU
(Ready to Running State). Context switching is done by the dispatcher only. A dispatcher does the following
work:

• Saving context (process control block) of previously running process if not finished.
• Switching system mode to user mode.
• Jumping to the proper location in the newly loaded program.
Time taken by dispatcher is called dispatch latency or process context switch time.

3. Medium-Term Scheduler

Medium Term Scheduler (MTS) is responsible for moving a process from memory to disk (or swapping).

• It reduces the degree of multiprogramming (Number of processes present in main


memory).
• A running process may become suspended if it makes an I/O request. A suspended
processes cannot make any progress towards completion. In this condition, to remove the
process from memory and make space for other processes, the suspended process is
moved to the secondary storage. This process is called swapping, and the process is said
to be swapped out or rolled out. Swapping may be necessary to improve the process mix
(of CPU bound and IO bound)
• When needed, it brings process back into memory and pick up right where it left off.
• It is faster than long term and slower than short term.

Some Other Schedulers
• I/O Schedulers: I/O schedulers are in charge of managing the execution of I/O operations such as
reading and writing to discs or networks. They can use various algorithms to determine the order
in which I/O operations are executed, such as FCFS (First-Come, First-Served) or RR (Round Robin).

• Real-Time Schedulers: In real-time systems, real-time schedulers ensure that critical tasks are
completed within a specified time frame. They can prioritize and schedule tasks using various
algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).

Comparison Among Scheduler

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is the slowest; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between the two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler gives less control over how much multiprogramming is done; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is barely present or nonexistent in time-sharing systems; the short-term scheduler is minimal in time-sharing systems; the medium-term scheduler is a component of time-sharing systems.
5. The long-term scheduler loads processes into memory so that execution can begin; the short-term scheduler selects processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
Context Switching

In order for a process execution to be continued from the same point at a later time, context switching is
a mechanism to store and restore the state or context of a CPU in the Process Control block. A context
switcher makes it possible for multiple processes to share a single CPU using this method. A multitasking
operating system must include context switching among its features.

The state of the currently running process is saved into the process control block when the scheduler switches the CPU from executing one process to another. The state used to set the program counter, registers, etc. for the process that will run next is then loaded from its own PCB. After that, the second process can start executing.

When the process is switched, the following information is stored for later use:
• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
Read more: Context Switching in Operating System

Conclusion

Process schedulers are the essential parts of operating system that manage how the CPU handles multiple
tasks or processes. They ensure that processes are executed efficiently, making the best use of CPU
resources and maintaining system responsiveness. By choosing the right process to run at the right time,
schedulers help optimize overall system performance, improve user experience, and ensure fair access to
CPU resources among competing processes.
8. Process Control Block (PCB)
A Process Control Block (PCB) is a data structure used by the Operating System to store
essential information about a process. Each process has a unique PCB identified by a Process ID
(PID). It helps the OS manage and track processes.

Here is a concise table of key PCB components:

S.N. INFORMATION DESCRIPTION


1 Process State Current state: Ready, Running, Waiting, etc.
2 Process Privileges Access rights to system resources.
3 Process ID (PID) Unique ID for each process.
4 Pointer Points to the parent process.
5 Program Counter Address of the next instruction to execute.
6 CPU Registers Stores CPU register values for execution.
7 CPU Scheduling Info Includes priority and scheduling parameters.
8 Memory Management Info Includes page tables, segment tables, and memory limits.
9 Accounting Info CPU usage, execution time, and process ID info.
10 I/O Status Info List of I/O devices allocated to the process.
Note: PCB structure may vary across different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
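
The table translates naturally into a data structure. Here is a sketch in Python whose field names mirror the table rather than any particular OS (real kernels keep this in a C struct, e.g., Linux's task_struct, with many more fields):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PCB:
    pid: int                              # Process ID (unique)
    state: str = "new"                    # Ready, Running, Waiting, ...
    parent: Optional["PCB"] = None        # pointer to the parent process
    program_counter: int = 0              # address of the next instruction
    registers: List[int] = field(default_factory=list)  # CPU registers
    priority: int = 0                     # CPU scheduling info
    page_table: dict = field(default_factory=dict)      # memory mgmt info
    cpu_time_used: float = 0.0            # accounting info
    open_devices: List[str] = field(default_factory=list)  # I/O status

pcb = PCB(pid=42, state="ready", priority=5)
print(pcb.pid, pcb.state)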
9. Process Address Space in Operating Systems
What is a Process Address Space?

A process address space refers to the set of memory addresses that a process can use to execute
and store data. It is the logical view of memory assigned to a process by the operating system.

Each process operates in its own address space, isolated from other processes, ensuring memory
protection and security.

Structure of Process Address Space

The address space of a process is typically divided into several segments:


Segment Description
Text Segment Contains the executable code (also called code segment)
Data Segment Stores initialized global and static variables
BSS Segment Stores uninitialized global and static variables
Heap Used for dynamic memory allocation (via malloc() or new)
Stack Stores function parameters, local variables, and return addresses
Diagram: Process Address Space Layout
+-----------------------------+ <-- High Memory Address
| Stack Segment |
| (grows downward) |
+-----------------------------+
| Heap Segment |
| (grows upward) |
+-----------------------------+
| BSS Segment |
+-----------------------------+
| Data Segment |
+-----------------------------+
| Text Segment |
+-----------------------------+ <-- Low Memory Address

Characteristics of Process Address Space

• Isolated: Each process has its own unique address space.


• Virtualized: OS uses virtual memory to give each process the illusion of using a large,
continuous block of memory.
• Protected: One process cannot access or modify another process's address space directly.
• Dynamic Allocation: The heap and stack grow/shrink at runtime as needed.
Role of Operating System

The OS is responsible for:

• Creating the process address space when a process starts.


• Mapping virtual addresses to physical memory using the MMU (Memory Management Unit).
• Handling memory protection and segmentation faults.
• Swapping processes in and out of memory using paging or segmentation.

Example (Linux/x86 Address Space - 32-bit)

0xC0000000 - 0xFFFFFFFF --> Kernel space (not accessible to user process)


0x00000000 - 0xBFFFFFFF --> User space (text, data, heap, stack)

Conclusion

The process address space is a critical abstraction that allows:

• Efficient multitasking
• Safe isolation of processes
• Virtual memory support
It is essential for process management and memory protection in modern operating systems.
10. Operating System - multi-threading
What is Thread?

A thread is a flow of execution through the process code, with its own program counter that keeps track
of which instruction to execute next, system registers which hold its current working variables, and a stack
which contains the execution history.

A thread shares information such as the code segment, data segment, and open files with its peer threads. When one thread alters a code segment memory item, all other threads see the change.

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism, and represent a software approach to improving operating system performance by reducing the overhead of full processes; in most other respects a thread behaves like a classical process.

Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. A single-threaded process has one such flow of control, while a multithreaded process has several.

Difference between Process and Thread

• Process: heavyweight and resource-intensive. Thread: lightweight, taking fewer resources than a process.
• Process: switching needs interaction with the operating system. Thread: switching does not need to interact with the operating system.
• Process: in multiple processing environments, each process executes the same code but has its own memory and file resources. Thread: all threads can share the same set of open files and child processes.
• Process: if one process is blocked, then no other process can execute until the first process is unblocked. Thread: while one thread is blocked and waiting, a second thread in the same task can run.
• Process: multiple processes without using threads use more resources. Thread: multithreaded processes use fewer resources.
• Process: each process operates independently of the others. Thread: one thread can read, write or change another thread's data.
Advantages of Thread

• Threads minimize the context switching time.


• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
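
As a minimal illustration of concurrency within one process, the sketch below uses Python's standard threading module: two threads append to the same list, which lives in the process's shared memory.

import threading

results = []                      # shared by all threads of the process
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:                # serialize access to the shared list
            results.append(f"{name}:{i}")

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                    # interleaved entries from A and B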
Types of Thread

Threads are implemented in the following two ways −

• User Level Threads − User managed threads.


• Kernel Level Threads − Operating System managed threads acting on kernel, an operating
system core.

User Level Threads

In this case, the kernel is not aware of the existence of threads; thread management is done in user space. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application starts with a single thread.

Advantages

• Thread switching does not require Kernel mode privileges.


• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
Disadvantages

• In a typical operating system, most system calls are blocking, so when one user-level thread blocks, the entire process blocks.

• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application can be
programmed to be multithreaded. All of the threads within an application are supported within a single
process.

The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages

• Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
• Kernel routines themselves can be multithreaded.
Disadvantages

• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models

Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three types of multithreading models:

• Many to many relationship.


• Many to one relationship.
• One to one relationship.
Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of
kernel threads.

In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many to One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is
done in user space by the thread library. When a thread makes a blocking system call, the entire process
is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.

If a user-level thread library is implemented on an operating system that does not support kernel
threads, the library uses the many-to-one model.
One to One Model

There is a one-to-one relationship between user-level threads and kernel-level threads. This model provides
more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a
blocking system call, and it supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel
thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.

Difference between User-Level & Kernel-Level Thread


User-Level Threads | Kernel-Level Threads
User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
User-level threads are generic and can run on any operating system. | Kernel-level threads are specific to the operating system.
Multithreaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
11. Introduction of Deadlock in Operating System
A deadlock is a situation where a set of processes is blocked because each process is holding a resource
and waiting for another resource acquired by some other process. In this article, we will discuss deadlock,
its necessary conditions, etc. in detail.

• Deadlock is a situation in computing where two or more processes are unable to proceed because
each is waiting for the other to release resources.

• Key concepts include mutual exclusion, resource holding, circular wait, and no preemption.

Consider an example when two trains are coming toward each other on the same track and there is only
one track, none of the trains can move once they are in front of each other. This is a practical example of
deadlock.

How Does Deadlock occur in the Operating System?

Before going into detail about how deadlock occurs in the Operating System, let’s first discuss how the
Operating System uses the resources present. A process in an operating system uses resources in the
following way.

• Requests a resource
• Use the resource
• Releases the resource
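
As a minimal C sketch of this request/use/release protocol, using a file descriptor as the resource (the file name data.txt is a placeholder, not anything from the text above):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* 1. Request the resource (may fail or block if it is unavailable). */
    int fd = open("data.txt", O_RDONLY);   /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    /* 2. Use the resource. */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read %zd bytes\n", n);

    /* 3. Release the resource so other processes can use it. */
    close(fd);
    return 0;
}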
A deadlock situation occurs in operating systems when two or more processes hold some resources and
wait for resources held by the other(s). For example, Process 1 holds Resource 1 and waits for Resource 2,
which has been acquired by Process 2, while Process 2 waits for Resource 1.

Examples of Deadlock

There are several examples of deadlock. Some of them are mentioned below.

1. The system has 2 tape drives. P0 and P1 each hold one tape drive and each needs another one.

2. Semaphores A and B, each initialized to 1; P0 and P1 reach deadlock as follows:


• P0 executes wait(A) and is then preempted.

• P1 executes wait(B).

• Now P0 and P1 are deadlocked: each waits on the semaphore the other holds.

P0 P1
wait(A); wait(B)
wait(B); wait(A)
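
This scenario can be reproduced with a short C program. The sketch below uses POSIX semaphores and two threads as stand-ins for P0 and P1 (the thread names and the sleep() that widens the unlucky timing window are illustrative); the program is expected to hang, demonstrating the deadlock:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t A, B;                /* both initialized to 1, as above */

static void *p0(void *arg) {
    (void)arg;
    sem_wait(&A);                 /* P0: wait(A) */
    sleep(1);                     /* window in which P1 takes B */
    printf("P0 waiting on B...\n");
    sem_wait(&B);                 /* P0: wait(B) -- blocks forever */
    return NULL;
}

static void *p1(void *arg) {
    (void)arg;
    sem_wait(&B);                 /* P1: wait(B) */
    sleep(1);
    printf("P1 waiting on A...\n");
    sem_wait(&A);                 /* P1: wait(A) -- blocks forever */
    return NULL;
}

int main(void) {
    sem_init(&A, 0, 1);
    sem_init(&B, 0, 1);
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);       /* never returns: the threads deadlock */
    pthread_join(t1, NULL);
    return 0;
}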
3. Assume 200 KB of space is available for allocation, and the following sequence of events occurs.

P0 P1
Request 80KB; Request 70KB;
Request 60KB; Request 80KB;
Deadlock occurs if both processes progress to their second request.

Necessary Conditions for Deadlock in OS

Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)

• Mutual Exclusion: Only one process can use a resource at any given time i.e. the resources are
non-sharable.

• Hold and Wait: A process is holding at least one resource at a time and is waiting to acquire other
resources held by some other process.

• No Preemption: A resource cannot be taken from a process unless the process releases the
resource.

• Circular Wait: A set of processes wait for each other in a circular fashion. For example, suppose
there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2, P2
depends on P3, and P3 depends on P0. This creates a circular relation between all these
processes, and they have to wait forever to be executed.

Methods of Handling Deadlocks in Operating System

There are three ways to handle deadlock:

1. Deadlock Prevention or Avoidance

2. Deadlock Detection and Recovery

3. Deadlock Ignorance

Deadlock Prevention or Avoidance

Deadlock Prevention and Avoidance is the one of the methods for handling deadlock. First, we will discuss
Deadlock Prevention, then Deadlock Avoidance.

Deadlock Prevention
In deadlock prevention, the aim is to ensure that at least one of the required conditions for deadlock can
never hold. This can be done as follows:

(i) Mutual Exclusion

We use locks only for non-shareable resources; if a resource is shareable (like a read-only file), we do not
use locks. This ensures that multiple processes can access a shareable resource at the same time.
Problem: this works only for shareable resources; for non-shareable resources such as a printer, we still
have to use mutual exclusion.

(ii) Hold and Wait

To ensure that hold and wait never occurs in the system, we must guarantee that whenever a process
requests a resource, it does not hold any other resources.

• We can allocate all the resources a process requires for its execution before that execution starts.
Problem: suppose a process needs three resources and all are granted up front, but initially it uses
only two and needs the third an hour later. During that hour the third resource sits idle while
another process that wants it starves, even though it could have been allocated to that process
long enough for it to complete its execution.

• We can ensure that when a process requests any resource, it holds no other resources at that
moment. Example: let there be three resources, a DVD drive, a file, and a printer. The process first
requests the DVD drive and the file to copy data into the file (say this takes an hour); it then
releases both resources, and only afterwards requests the file and the printer to print that file.

(iii) No Preemption

If a process is holding some resources and requests other resources that are held elsewhere and not
available immediately, then the resources the process is currently holding are preempted. After some
time, the process requests the old resources together with the other required resources in order to restart.

For example, process P1 holds resource R1 and requests R2, which is held by process P2. P1's hold on R1
is then preempted, and after some time P1 tries to restart by requesting both R1 and R2.

Problem: this can cause livelock.

Livelock: livelock is a situation where two or more processes continuously change their state in response
to each other without making any real progress.

Example:

• Suppose there are two processes P1 and P2 and two resources R1 and R2.

• P1 acquires R1 and needs R2; P2 acquires R2 and needs R1.

• Following the method above, both P1 and P2 detect that they cannot acquire the second resource,
so each releases the resource it is holding and then tries again (see the sketch below).

• The cycle continues: P1 again acquires R1 and requests R2, while P2 again acquires R2 and
requests R1. There is no overall progress, yet the processes keep changing state as they release
resources and then hold them again. This is the situation of livelock.
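
A C sketch of this livelock using pthread_mutex_trylock is shown below. The deliberate symmetry and the equal sleep() are what keep the two threads in lockstep; in practice scheduling jitter may let one of them through eventually, which is why real systems add randomized backoff:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

/* Each thread grabs its first lock, tries the second, and on failure
   releases everything and retries -- potentially forever, in lockstep. */
static void *task(void *arg) {
    pthread_mutex_t **res = arg;           /* res[0]: first lock, res[1]: second */
    for (;;) {
        pthread_mutex_lock(res[0]);
        if (pthread_mutex_trylock(res[1]) == 0) {
            /* Got both resources: real work would happen here. */
            pthread_mutex_unlock(res[1]);
            pthread_mutex_unlock(res[0]);
            return NULL;
        }
        pthread_mutex_unlock(res[0]);      /* back off ...            */
        sleep(1);                          /* ... and retry in unison */
    }
}

int main(void) {
    pthread_mutex_t *order1[] = { &r1, &r2 };
    pthread_mutex_t *order2[] = { &r2, &r1 };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, order1);
    pthread_create(&t2, NULL, task, order2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}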

(iv) Circular Wait:

To remove circular wait from the system, we can impose an ordering on the resources, fixing the order in
which a process must acquire them.

Example: if there are processes P1 and P2 and resources R1 and R2, we can fix the acquisition order so
that a process must acquire resource R1 before resource R2. The process that has acquired R1 is then
allowed to acquire R2, while the other process must wait until R1 is free, as in the sketch below.
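
A minimal C sketch of this fixed acquisition order, with two pthread mutexes standing in for R1 and R2; because every thread takes r1 before r2, no circular wait can form:

#include <pthread.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

/* Every thread follows the global order r1 -> r2, so a cycle is impossible. */
static void *task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r1);    /* always acquire the lower-numbered resource first */
    pthread_mutex_lock(&r2);
    /* ... critical section using both resources ... */
    pthread_mutex_unlock(&r2);  /* release in reverse order */
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}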

These are the deadlock prevention methods, but in practice only the fourth is generally used, since
removing each of the other three conditions carries significant disadvantages.

Deadlock Avoidance

Avoidance looks ahead. To use the strategy of “Avoidance”, we have to make an assumption: all
information about the resources a process will need must be known to us before the process begins
execution. We use the Banker’s algorithm to avoid deadlock.

In prevention and avoidance, we get the correctness of data but performance decreases.

Deadlock Detection and Recovery

If deadlock prevention or avoidance is not applied, we can handle deadlock by detection and recovery,
which consists of two phases:

1. In the first phase, we examine the state of the process and check whether there is a deadlock or
not in the system.

2. If a deadlock is found in the first phase, we apply an algorithm to recover from it.

In Deadlock detection and recovery, we get the correctness of data but performance decreases.

Deadlock Detection

Deadlock detection is a process in computing where the system checks if there are any sets of processes
that are stuck waiting for each other indefinitely, preventing them from moving forward. In simple words,
deadlock detection is the process of finding out whether any processes are stuck waiting in a loop. There
are several algorithms for this, such as:

• Resource Allocation Graph

• Banker’s Algorithm

These algorithms help detect deadlock in an operating system.

Deadlock Recovery
There are several Deadlock Recovery Techniques:

• Manual Intervention

• Automatic Recovery

• Process Termination

• Resource Preemption

1. Manual Intervention

When a deadlock is detected, one option is to inform the operator and let them handle the situation
manually. While this approach allows for human judgment and decision-making, it can be time-consuming
and may not be feasible in large-scale systems.

2. Automatic Recovery

An alternative approach is to enable the system to recover from deadlock automatically. This method
involves breaking the deadlock cycle by either aborting processes or preempting resources. Let’s delve
into these strategies in more detail.

3. Process Termination

• Abort all Deadlocked Processes: This approach breaks the deadlock cycle, but it comes at a
significant cost. The processes that were aborted may have executed for a considerable amount
of time, resulting in the loss of partial computations. These computations may need to be
recomputed later.

• Abort one process at a time: Instead of aborting all deadlocked processes simultaneously, this
strategy involves selectively aborting one process at a time until the deadlock cycle is eliminated.
However, this incurs overhead as a deadlock-detection algorithm must be invoked after each
process termination to determine if any processes are still deadlocked.

• Factors for choosing the termination order:


The process’s priority
Completion time and the progress made so far
Resources consumed by the process
Resources required to complete the process
Number of processes to be terminated
Process type (interactive or batch)

4. Resource Preemption

• Selecting a Victim: Resource preemption involves choosing which resources and processes should
be preempted to break the deadlock. The selection order aims to minimize the overall cost of
recovery. Factors considered for victim selection may include the number of resources held by a
deadlocked process and the amount of time the process has consumed.
• Rollback
If a resource is preempted from a process, the process cannot continue its normal execution as it
lacks the required resource. Rolling back the process to a safe state and restarting it is a common
approach. Determining a safe state can be challenging, leading to the use of total rollback, where
the process is aborted and restarted from scratch.

• Starvation Prevention: To prevent resource starvation, it is essential to ensure that the same
process is not always chosen as a victim. If victim selection is solely based on cost factors, one
process might repeatedly lose its resources and never complete its designated task. To address
this, it is advisable to limit the number of times a process can be chosen as a victim, including the
number of rollbacks in the cost factor.

Deadlock Ignorance

If deadlocks are very rare, we can simply let them happen and reboot the system when they do. This is
the approach that both Windows and UNIX take; it is known as the Ostrich Algorithm.

With deadlock ignorance, performance is better than with the above two methods, but the correctness of
data is not guaranteed.

Safe State

A safe state can be defined as a state in which there is no deadlock: some sequence exists in which every
process can run to completion. It is achievable if:

• Whenever a process needs an unavailable resource, it can wait until that resource has been
released by the process to which it is currently allocated. If no such sequence exists, the state is
unsafe.

• Following that sequence, all of each process's requested resources can eventually be allocated.

Difference between Starvation and Deadlocks

Aspect | Deadlock | Starvation

Definition | A condition where two or more processes are blocked forever, each waiting for a resource held by another. | A condition where a process is perpetually denied necessary resources, despite resources being available.

Resource Availability | Resources are held by processes involved in the deadlock. | Resources are available but are continuously allocated to other processes.

Cause | Circular dependency between processes, where each process is waiting for a resource from another. | Continuous preference or priority given to other processes, causing a process to wait indefinitely.

Resolution | Requires intervention, such as aborting processes or preempting resources to break the cycle. | Can be mitigated by adjusting scheduling policies to ensure fair resource allocation.
Deadlock System model:
A deadlock occurs when a set of processes is stalled because each process holds a resource and is
waiting for a resource acquired by another process. For example, Process 1 holds Resource 1 and waits
for Resource 2, which Process 2 has acquired, while Process 2 waits for Resource 1.

System Model :

• For the purposes of deadlock discussion, a system can be modeled as a collection of limited
resources that can be divided into different categories and allocated to a variety of processes, each
with different requirements.

• Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are examples of
resource categories.

• By definition, all resources within a category are equivalent, and any of the resources within that
category can equally satisfy a request from that category. If this is not the case (i.e. if there is some
difference between the resources within a category), then that category must be subdivided
further. For example, the term “printers” may need to be subdivided into “laser printers” and
“color inkjet printers.”

• Some categories may only have one resource.


• For all kernel-managed resources, the kernel keeps track of which resources are free and which
are allocated, to which process they are allocated, and a queue of processes waiting for each
resource to become available. Application-managed resources (e.g. binary or counting
semaphores) can be controlled with mutexes or wait() and signal() calls.

• When every process in a set is waiting for a resource that is currently assigned to another process
in the set, the set is said to be deadlocked.

Operations:

In normal operation, a process must request a resource before using it and release it when finished, as
shown below.

1. Request – If the request cannot be granted immediately, the process must wait until the required
resource(s) become available. Requests are made, for example, through functions such as
open(), malloc(), new(), and request().

2. Use – The process makes use of the resource, such as printing to a printer or reading from a file.

3. Release – The process relinquishes the resource, allowing it to be used by other processes.

Necessary Conditions:

There are four conditions that must be met in order to achieve deadlock as follows.

• Mutual Exclusion – At least one resource must be kept in a non-shareable state; if another
process requests it, it must wait for it to be released.

• Hold and Wait – A process must hold at least one resource while also waiting for at least one
resource that another process is currently holding.

• No preemption – Once a process holds a resource (i.e. after its request is granted), that
resource cannot be taken away from that process until the process voluntarily releases it.

• Circular Wait – There must be a set of processes P0, P1, P2, ..., PN such that every P[i] is
waiting for P[(i + 1) % (N + 1)]. (It is important to note that this condition implies the
hold-and-wait condition, but dealing with the four conditions is easier if they are considered
separately.)
Methods for Handling Deadlocks:

In general, there are three approaches to dealing with deadlocks, as follows.

1. Preventing or avoiding deadlock, by not allowing the system to enter an unsafe state.

2. Deadlock detection and recovery: when a deadlock is detected, abort a process or preempt
some resources.

3. Ignoring the problem entirely.

Some observations on these approaches:

• To avoid deadlocks, the system requires more information about all processes. In particular, the
system must know what resources a process will or may request in the future. (Depending on
the algorithm, this can range from a simple worst-case maximum to a complete resource request
and release plan for each process.)

• Deadlock detection is relatively simple, but deadlock recovery necessitates either aborting
processes or preempting resources, neither of which is an appealing option.

• If deadlocks are neither avoided nor detected, the system will gradually slow down as more
processes become stuck waiting for resources that the deadlock has blocked and for other
waiting processes. Unfortunately, when the computing requirements of a real-time process are
high, this slowdown can be confused with a general system slowdown.

Deadlock Prevention: Deadlocks can be prevented by eliminating at least one of the four necessary
conditions, as follows.

Condition-1: Mutual Exclusion:

• Read-only files, for example, do not cause deadlocks.

• Unfortunately, some resources, such as printers and tape drives, require a single process to have
exclusive access to them.

Condition-2: Hold and Wait:


To avoid this condition, processes must be prevented from holding one or more resources while also
waiting for one or more others. There are a few possibilities here:

• Make it a requirement that all processes request all resources at the same time. This can be a
waste of system resources if a process requires one resource early in its execution but does not
require another until much later.

• Processes that hold resources must release them prior to requesting new ones, and then re-
acquire the released resources alongside the new ones in a single new request. This can be a
problem if a process uses a resource to partially complete an operation and then fails to re-allocate
it after it is released.

• If a process necessitates the use of one or more popular resources, either of the methods
described above can result in starvation.

Condition-3:
No Preemption: When possible, preemption of process resource allocations can help to avoid deadlocks.

• One approach is that if a process is forced to wait when requesting a new resource, all other
resources previously held by this process are implicitly released (preempted), forcing this process
to re-acquire the old resources alongside the new resources in a single request, as discussed
previously.
• Another approach is that when a resource is requested, and it is not available, the system looks to
see what other processes are currently using those resources and are themselves blocked while
waiting for another resource. If such a process is discovered, some of their resources may be
preempted and added to the list of resources that the process is looking for.

• Either of these approaches may be appropriate for resources whose states can be easily saved and
restored, such as registers and memory, but they are generally inapplicable to other devices, such
as printers and tape drives.

Condition-4:
Circular Wait :

• To avoid circular waits, number all resources and insist that processes request resources
in strictly increasing (or decreasing) order.
• To put it another way, before requesting resource Rj, a process must first release all Ri
such that i >= j.
• The relative ordering of the various resources is a significant challenge in this scheme.
Deadlock Avoidance :

• The general idea behind deadlock avoidance is to avoid deadlocks by avoiding at least one
of the aforementioned conditions.
• This necessitates more information about each process AND results in low device
utilization. (This is a conservative approach.)
• The scheduler only needs to know the maximum number of each resource that a process
could potentially use in some algorithms. In more complex algorithms, the scheduler can
also use the schedule to determine which resources are required and in what order.
• When a scheduler determines that starting a process or granting resource requests will
result in future deadlocks, the process is simply not started or the request is denied.
• The number of available and allocated resources, as well as the maximum requirements
of all processes in the system, define a resource allocation state.
Deadlock Detection :

• If deadlocks cannot be avoided, another approach is to detect them and recover in some way.

• Aside from the performance hit of constantly checking for deadlocks, a policy/algorithm for
recovering from deadlocks must be in place, and when processes must be aborted or have their
resources preempted, there is the possibility of lost work.

Recovery From Deadlock: There are three basic approaches to getting out of a bind:

1. Inform the system operator and give him/her permission to intervene manually.
2. Stop one or more of the processes involved in the deadlock.
3. Preempt some of the resources involved.
Approach of Recovery from Deadlock: Here, we will discuss the approach of Recovery from Deadlock as
follows.
Approach-1: Process Termination: There are two basic approaches to recovering resources by
terminating the processes holding them, as follows.

1. Stop all processes that are involved in the deadlock. This does break the deadlock, but at
the expense of terminating more processes than are absolutely necessary.
2. Processes should be terminated one at a time until the deadlock is broken. This method
is more conservative, but it necessitates performing deadlock detection after each step.
In the latter case, many factors can influence which processes are terminated next as follows.

1. Priorities in the process


2. How long has the process been running and how close it is to completion.
3. How many and what kind of resources does the process hold? (Are they simple to
preempt and restore?)
4. How many more resources are required for the process to be completed?
5. How many processes will have to be killed?
6. Whether the process is batch or interactive.
Approach-2 : Resource Preemption:
When allocating resources to break the deadlock, three critical issues must be addressed:

1. Selecting a victim – Many of the decision criteria outlined above apply to determine which
resources to preempt from which processes.

2. Rollback – A preempted process should ideally be rolled back to a safe state before the point at
which that resource was originally assigned to the process. Unfortunately, determining such a safe
state can be difficult or impossible, so the only safe rollback is to start from the beginning. (In
other words, halt and restart the process.)

3. Starvation – How do you ensure that a process does not starve because its resources are
constantly being preempted? One option is to use a priority system and raise the priority of a
process whenever its resources are preempted. It should eventually gain a high enough priority
that it will no longer be preempted.
Deadlock Prevention and Avoidance
Deadlock prevention and avoidance are strategies used in computer systems to ensure that different
processes can run smoothly without getting stuck waiting for each other forever. Think of it like a traffic
system where cars (processes) must move through intersections (resources) without getting into a
gridlock.

Necessary Conditions for Deadlock

• Mutual Exclusion
• Hold and Wait
• No Preemption
• Circular Wait
Please refer Conditions for Deadlock in OS for details.

Deadlock Prevention

We can prevent a Deadlock by eliminating any of the above four conditions.

Eliminate Mutual Exclusion

It is not possible to violate mutual exclusion because some resources, such as the tape drive, are inherently
non-shareable. For other resources, like printers, we can use a technique called Spooling (Simultaneous
Peripheral Operations Online).

In spooling, when multiple processes request the printer, their jobs (instructions of the processes that
require printer access) are added to the queue in the spooler directory. The printer is allocated to jobs on
a First-Come, First-Served (FCFS) basis. In this way, a process does not have to wait for the printer and can
continue its work after adding its job to the queue.

Eliminate Hold and Wait

Hold and wait is a condition in which a process holds one resource while simultaneously waiting for
another resource that is being held by a different process. The process cannot continue until it gets all the
required resources.
Hold & Wait

There are two ways to eliminate hold and wait:

• By eliminating wait: The process specifies the resources it requires in advance so that it does not
have to wait for allocation after execution starts.
For Example, Process1 declares in advance that it requires both Resource1 and Resource2.

• By eliminating hold: The process has to release all resources it is currently holding before making
new request.

For Example: Process1 must release Resource2 and Resource3 before requesting Resource1.

Eliminate No Preemption

Preemption is temporarily interrupting an executing task and later resuming it. Two ways to eliminate No
Preemption:

• Processes must release resources voluntarily: A process should only give up resources it holds
when it completes its task or no longer needs them.

• Avoid partial allocation: Allocate all required resources to a process at once, before it begins
execution. If not all resources are available, the process must wait.

Eliminate Circular Wait

To eliminate circular wait for deadlock prevention, we can impose an ordering on resource acquisition.

• Assign a unique number to each resource.


• Processes can only request resources in an increasing order of their numbers.

This prevents circular chains of processes waiting for resources, as no process can request a resource lower
than what it already holds.

Detection and Recovery

Another approach to dealing with deadlocks is to detect and recover from them when they occur. This can
involve killing one or more of the processes involved in the deadlock or releasing some of the resources
they hold.

Deadlock Avoidance

Deadlock avoidance ensures that a resource request is only granted if it won’t lead to deadlock, either
immediately or in the future. Since the kernel can’t predict future process behavior, it uses a conservative
approach. Each process declares the maximum number of resources it may need. The kernel allows
requests in stages, checking for potential deadlocks before granting them. A request is granted only if no
deadlock is possible; otherwise, it stays pending. This approach is conservative, as a process may finish
without using the maximum resources it declared.

Banker’s Algorithm is the technique used for Deadlock Avoidance.

Banker’s Algorithm

Banker’s Algorithm is a resource allocation and deadlock avoidance algorithm that tests every resource
request made by a process. It checks for a safe state: if granting a request keeps the system in a safe
state, the request is allowed; otherwise, the request is denied.

Inputs to Banker’s Algorithm

• Max needs of resources by each process.

• Currently, allocated resources by each process.

• Max free available resources in the system.

A request will only be granted under the conditions below.

• The request made by the process is less than or equal to the declared maximum need of that process.

• The request made by the process is less than or equal to the resources freely available in the system.

Timeouts

To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the amount of
time a process can wait for a resource. If the resource is not obtained within the timeout period, the
process can be forced to release the resources it currently holds and try again later, as in the sketch below.
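
A C sketch of such a timeout using the POSIX call pthread_mutex_timedlock (the one-second limit and the function name acquire_both_or_back_off are arbitrary choices for this example):

#include <pthread.h>
#include <time.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

/* Try to take r2 while holding r1, but give up after one second
   instead of waiting indefinitely. */
int acquire_both_or_back_off(void) {
    struct timespec deadline;

    pthread_mutex_lock(&r1);
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1;                       /* arbitrary 1-second timeout */

    if (pthread_mutex_timedlock(&r2, &deadline) != 0) {
        pthread_mutex_unlock(&r1);              /* release what we hold ...   */
        return -1;                              /* ... and let the caller retry later */
    }
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return 0;
}

Because the waiting process eventually releases r1, a partner process stuck waiting for r1 can make progress, so an indefinite circular wait is broken.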

Example

Below is a worked example of the Banker’s Algorithm.


Total resources in system:

A B C D
6 5 7 6
Available system resources are:

A B C D
3 1 1 2
Processes (currently allocated resources):

A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Maximum resources we have for a process:

A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0
Need = Maximum Resources Requirement – Currently Allocated Resources

A B C D
P1 2 1 0 1
P2 0 2 0 1
P3 0 1 4 0
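
To tie these tables together, below is a small C sketch of the Banker’s safety check run on exactly this data. It derives the Need matrix as above and searches for a safe sequence; for these numbers it finds P1, P2, P3:

#include <stdio.h>

#define P 3   /* processes P1..P3 */
#define R 4   /* resource types A..D */

int main(void) {
    /* Data from the example above. */
    int avail[R]    = {3, 1, 1, 2};
    int alloc[P][R] = {{1, 2, 2, 1}, {1, 0, 3, 3}, {1, 2, 1, 0}};
    int max[P][R]   = {{3, 3, 2, 2}, {1, 2, 3, 4}, {1, 3, 5, 0}};
    int need[P][R], finish[P] = {0}, seq[P], count = 0;

    /* Need = Max - Allocation */
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    /* Safety algorithm: repeatedly find an unfinished process whose Need
       fits in Work; pretend it runs to completion and returns its allocation. */
    int work[R];
    for (int j = 0; j < R; j++) work[j] = avail[j];

    while (count < P) {
        int progress = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int fits = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = 0; break; }
            if (fits) {
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = 1;
                seq[count++] = i;
                progress = 1;
            }
        }
        if (!progress) { printf("System is in an unsafe state\n"); return 1; }
    }

    printf("Safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", seq[i] + 1);
    printf("\n");   /* prints: Safe sequence: P1 P2 P3 */
    return 0;
}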
Recovery from Deadlock in Operating System
In today’s world of computer systems and multitasking environments, deadlock is an undesirable situation
that can bring operations to a halt. When multiple processes compete for exclusive access to resources
and end up in a circular waiting pattern, a deadlock occurs. To maintain the smooth functioning of an
operating system, it is crucial to implement recovery mechanisms that can break these deadlocks and
restore the system’s productivity.

What is Recovery from Deadlock in OS?

“Recovery from Deadlock in Operating Systems” refers to the set of techniques and algorithms designed
to detect, resolve, or mitigate deadlock situations. These methods ensure that the system can continue
processing tasks efficiently without being trapped in an eternal standstill. Let’s take a closer look at some
of the key strategies employed.

Under this approach, the OS implements no mechanism to avoid or prevent deadlocks; the system
assumes that a deadlock will undoubtedly occur at some point. The OS periodically checks the system
for deadlocks in an effort to break them, and if it encounters any, it uses various recovery techniques to
restore the system. When a deadlock detection algorithm determines that a deadlock has occurred in the
system, the system must recover from that deadlock.

What is Deadlock?

In an operating system, a deadlock is a situation where a group of processes is stuck and unable to proceed
because each one is waiting for a resource that another process in the group is holding.

Imagine four people at a round table, each with one fork, and they need two forks to eat. If everyone picks
up the fork to their left and waits for the fork on their right, no one can eat. They are all stuck, waiting
forever. This is a deadlock.

In technical terms, deadlock involves four conditions:

• Mutual Exclusion : Resources cannot be shared; they can only be used by one process at a time.

• Hold and Wait : Processes holding resources can request additional resources.

• No Preemption : Resources cannot be forcibly taken away from processes holding them.

• Circular Wait : A closed chain of processes exists, where each process holds at least one resource
needed by the next process in the chain.

Having a deep understanding of deadlock recovery techniques is vital for anyone working with or studying operating systems.

Ways of Handling a Deadlock

There are several ways of handling a deadlock, some of which are mentioned below:

1. Process Termination
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:

• Abort all the Deadlocked Processes: Aborting all the processes will certainly break the deadlock,
but at a great expense. The deadlocked processes may have been computing for a long time, and
the results of those partial computations must be discarded and will probably have to be
recomputed later.

• Abort one process at a time until the deadlock is eliminated: Abort one deadlocked process at a
time, until the deadlock cycle is eliminated from the system. Due to this method, there may be
considerable overhead, because, after aborting each process, we have to run a deadlock detection
algorithm to check whether any processes are still deadlocked.

Advantages of Process Termination

• It is a simple method for breaking a deadlock.

• It ensures that the deadlock will be resolved quickly, as all processes involved in the deadlock are
terminated simultaneously.

• It frees up resources that were being used by the deadlocked processes, making those resources
available for other processes.

Disadvantages of Process Termination

• It can result in the loss of data and other resources that were being used by the terminated
processes.

• It may cause further problems in the system if the terminated processes were critical to the
system’s operation.

• It may result in a waste of resources, as the terminated processes may have already completed a
significant amount of work before being terminated.

2. Resource Preemption

To eliminate deadlocks using resource preemption, we preempt some resources from processes and give
those resources to other processes. This method will raise three issues:

• Selecting a Victim: We must determine which resources and which processes are to be
preempted, in a way that minimizes the cost of recovery.

• Rollback: We must determine what should be done with the process from which resources are
preempted. One simple idea is total rollback: abort the process and restart it.

• Starvation: In a system, it may happen that the same process is always picked as a victim. As a
result, that process will never complete its designated task. This situation is called starvation and
must be avoided. One solution is to allow a process to be picked as a victim only a finite number
of times.
Advantages of Resource Preemption

1. It can help in breaking a deadlock without terminating any processes, thus preserving data and
resources.

2. It is more efficient than process termination as it targets only the resources that are causing
the deadlock.

3. It can potentially avoid the need for restarting the system.

Disadvantages of Resource Preemption

1. It may lead to increased overhead due to the need for determining which resources and processes
should be preempted.

2. It may cause further problems if the preempted resources were critical to the system’s operation.

3. It may cause delays in the completion of processes if resources are frequently preempted.

3. Priority Inversion

A technique for breaking deadlocks in real-time systems is called priority inversion. This approach alters
the priorities of the processes to prevent stalemates: a higher priority is given to the process that already
has the needed resources, and a lower priority is given to the process that is still awaiting them. The
resulting inversion of priorities can impair system performance, and because higher-priority processes
may continue to take precedence over lower-priority ones, this approach may starve lower-priority
processes of resources.

4. RollBack

In database systems, rolling back is a common technique for breaking deadlocks. When using this
technique, the system reverses the transactions of the involved processes to a time before the deadlock.
The system must keep a log of all transactions and the system’s condition at various points in time in order
to use this method. The transactions can then be rolled back to the initial state and executed again by the
system. This approach may result in significant delays in the transactions’ execution and data loss.

Resource Allocation Graph (RAG) for Deadlock Detection

The resource allocation graph (RAG) is a popular technique for deadlock detection in computer systems.
The RAG is a visual representation of the processes, the resources, and the current state of allocation.
The resources and processes are represented by the graph's nodes, while their request and allocation
relationships are shown by the graph's edges. A cycle in the RAG denotes the presence of a deadlock
(for single-instance resources, a cycle guarantees deadlock): when a cycle is discovered, each process in
the cycle is holding at least one resource needed by another process in the cycle. The RAG method is a
crucial tool in contemporary operating systems due to its efficiency and its ability to spot deadlocks
quickly.
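
A compact C sketch of this detection is shown below. Both processes and resources are nodes of one directed graph (request edges P -> R, assignment edges R -> P), and a depth-first search with a recursion stack looks for a back edge; the sample edges are purely illustrative and encode the classic two-process, two-resource deadlock:

#include <stdio.h>

#define N 4   /* nodes 0,1 = processes P1,P2; nodes 2,3 = resources R1,R2 */

int adj[N][N];               /* adj[u][v] = 1 means an edge u -> v */
int visited[N], on_stack[N];

/* DFS: reaching a node that is already on the recursion stack means a
   back edge, i.e. a cycle -- for single-instance resources, a deadlock. */
int has_cycle(int u) {
    visited[u] = on_stack[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!adj[u][v]) continue;
        if (on_stack[v]) return 1;
        if (!visited[v] && has_cycle(v)) return 1;
    }
    on_stack[u] = 0;
    return 0;
}

int main(void) {
    /* Illustrative edges: R1 assigned to P1, P1 requests R2,
       R2 assigned to P2, P2 requests R1 -> a cycle. */
    adj[2][0] = 1;   /* R1 -> P1 (assignment) */
    adj[0][3] = 1;   /* P1 -> R2 (request)    */
    adj[3][1] = 1;   /* R2 -> P2 (assignment) */
    adj[1][2] = 1;   /* P2 -> R1 (request)    */

    for (int u = 0; u < N; u++)
        if (!visited[u] && has_cycle(u)) {
            printf("Deadlock detected (cycle in RAG)\n");
            return 0;
        }
    printf("No deadlock\n");
    return 0;
}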
Conclusion

In conclusion, recovering from a deadlock in an operating system involves detecting the deadlock and then
resolving it by either terminating some of the involved processes or preempting resources. These actions
help free up the system and restore normal operations, though they may cause temporary disruptions.
