OS Unit 3
CPU scheduling is the process of deciding which process will own the CPU while another process is
suspended. The main function of CPU scheduling is to ensure that whenever the CPU would otherwise
remain idle, the OS has selected at least one of the processes available in the ready queue.
In multiprogramming, if the long-term scheduler selects multiple I/O-bound processes then most of the
time, the CPU remains idle. The function of an effective scheduler is to improve resource utilization.
• Arrival Time: The time at which the process arrives in the ready queue.
• Completion Time: The time at which the process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
Different CPU Scheduling algorithms have different structures and the choice of a particular algorithm
depends on a variety of factors.
• CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as
possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real-time system it
varies from 40 to 90 percent depending on the system load.
• Throughput: A measure of CPU performance is the number of processes executed and completed
per unit of time. This is called throughput. Throughput may vary depending on the length or
duration of the processes.
• Turnaround Time: For a particular process, an important criterion is how long it takes to execute
that process. The time elapsed from the submission of a process to its completion is known as the
turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting
in the ready queue, executing on the CPU, and waiting for I/O.
• Waiting Time: The scheduling algorithm does not affect the time required to complete a process
once it has started executing. It only affects the waiting time of the process, i.e. the time it spends
waiting in the ready queue.
• Response Time: In an interactive system, turnaround time is not the best criterion. A process may
produce some output early and continue computing new results while previous results are being
output to the user. Therefore, another criterion is the time taken from the submission of a request
until the first response is produced. This measure is called response time.
• Preemptive Scheduling: Preemptive scheduling is used when a process switches from running
state to ready state or from the waiting state to the ready state.
CPU Scheduling
Let us now work through some questions on these CPU scheduling algorithms one by one:
Question: Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
The pre-emptive shortest job first (shortest remaining time first) scheduling algorithm is used, and
scheduling is carried out only at arrival or completion of processes. What is the average waiting time
for the three processes?
PROCESS ARRIVAL TIME BURST TIME
P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
(A) 5.0 ms
(B) 4.33 ms
(C) 6.33 ms
(D) 7.33 ms
Solution: (A)
Process P0 is allocated the processor at 0 ms as there is no other process in the ready queue. P0 is
preempted after 1 ms as P1 arrives at 1 ms and the burst time of P1 is less than the remaining time of P0.
P1 runs for 4 ms. P2 arrived at 2 ms, but P1 continued as the burst time of P2 is longer than the remaining
time of P1. After P1 completes, P0 is scheduled again as its remaining time is less than the burst time of
P2. P0 waits for 4 ms, P1 waits for 0 ms and P2 waits for 11 ms. So the average waiting time is
(4 + 0 + 11)/3 = 5 ms.
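To make the SRTF arithmetic above concrete, here is a minimal Python sketch (not part of the original question) that simulates preemptive shortest-job-first one time unit at a time and reproduces the waiting times from the solution:

```python
# Minimal SRTF (preemptive SJF) simulator; process data taken from the
# question above: name -> (arrival time, burst time).

def srtf(processes):
    """Simulate SRTF and return each process's waiting time."""
    remaining = {p: burst for p, (_, burst) in processes.items()}
    completion = {}
    t = 0
    while remaining:
        # Among arrived, unfinished processes, pick the shortest remaining time.
        ready = [p for p in remaining if processes[p][0] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: remaining[q])
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            completion[p] = t
            del remaining[p]
    # Waiting time = completion - arrival - burst.
    return {p: completion[p] - a - b for p, (a, b) in processes.items()}

waits = srtf({"P0": (0, 9), "P1": (1, 4), "P2": (2, 9)})
print(waits)                             # {'P0': 4, 'P1': 0, 'P2': 11}
print(sum(waits.values()) / len(waits))  # 5.0 ms, matching option (A)
```

The same function can be used to check the two SRTF questions that follow.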
Question: Consider the following set of processes, with the arrival times and the CPU-burst times given
in milliseconds.
PROCESS ARRIVAL TIME BURST TIME
P1 0 ms 5 ms
P2 1 ms 3 ms
P3 2 ms 3 ms
P4 4 ms 1 ms
What is the average turnaround time for these processes with the preemptive Shortest Remaining
Processing Time First algorithm?
(A) 5.50
(B) 5.75
(C) 6.00
(D) 6.25
Answer: (A)
Solution: The following is the Gantt chart of execution:
| P1 | P2 | P4 | P3 | P1 |
0    1    4    5    8    12
Turn Around Time = Completion Time – Arrival Time
Average Turn Around Time = (12 + 3 + 6 + 1)/4 = 5.50
Question: An operating system uses the Shortest Remaining Time First (SRTF) process scheduling
algorithm. Consider the arrival times and execution times for the following processes. What is the total
waiting time for process P2?
PROCESS ARRIVAL TIME EXECUTION TIME
P1 0 ms 20 ms
P2 15 ms 25 ms
P3 30 ms 10 ms
P4 45 ms 15 ms
(A) 5
(B) 15
(C) 40
(D) 55
Answer (B)
Solution: At time 0, P1 is the only process, so P1 runs for 15 time units. At time 15, P2 arrives, but P1 has
the shortest remaining time, so P1 continues for 5 more time units. At time 20, P2 is the only process, so
it runs for 10 time units. At time 30, P3 arrives and is the shortest remaining time process, so it runs for
10 time units. At time 40, P2 runs as it is the only process; P2 runs for 5 time units. At time 45, P4 arrives,
but P2 has the shortest remaining time, so P2 continues for 10 more time units. P2 completes its execution
at time 55, so its total waiting time is 55 − 15 − 25 = 15.
• Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
• Each process is assigned a priority. Process with highest priority is to be executed first and so on.
• Processes with same priority are executed on first come first served basis.
• Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
Given: a table of processes and their arrival time, execution time, and priority. Here we are considering 1
to be the lowest priority. A sketch with hypothetical values is shown below.
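Since the process table itself is not reproduced in these notes, the following Python sketch uses hypothetical arrival times, burst times, and priorities (with larger numbers meaning higher priority, keeping 1 as the lowest) purely to illustrate how non-preemptive priority scheduling picks the next process:

```python
# Non-preemptive priority scheduling sketch with made-up process data:
# (name, arrival time, burst time, priority); priority 1 is the lowest.

procs = [
    ("P1", 0, 5, 1),
    ("P2", 1, 3, 2),
    ("P3", 2, 8, 1),
    ("P4", 3, 2, 3),
]

t, results = 0, []
pending = procs[:]
while pending:
    ready = [p for p in pending if p[1] <= t]
    if not ready:
        t += 1                      # CPU idles until the next arrival
        continue
    # Highest priority first; earlier arrival (FCFS) breaks ties.
    p = max(ready, key=lambda q: (q[3], -q[1]))
    name, arrival, burst, _ = p
    t += burst                      # run to completion (non-preemptive)
    results.append((name, t - arrival - burst))   # (process, waiting time)
    pending.remove(p)

print(results)   # [('P1', 0), ('P4', 2), ('P2', 6), ('P3', 8)]
```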
• Shortest Remaining Time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion, but it can be preempted by a
newer ready job with a shorter time to completion.
• It is impossible to implement in interactive systems where the required CPU time is not known.
• It is often used in batch environments where short jobs need to be given preference.
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based
on the algorithm assigned to the queue, as sketched below.
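A minimal sketch of this idea (the job names, bursts, and queue policies are made up for illustration): CPU-bound jobs sit in a round-robin queue with a time quantum, I/O-bound jobs in a FCFS queue, and the scheduler alternates between the two queues:

```python
from collections import deque

# Two-level queue sketch: round robin for CPU-bound jobs, FCFS for
# I/O-bound jobs, with the scheduler alternating between the queues.

cpu_bound = deque([("A", 6), ("B", 4)])   # (job, remaining burst)
io_bound = deque([("X", 2), ("Y", 3)])
QUANTUM = 2

turn = 0
while cpu_bound or io_bound:
    if turn % 2 == 0 and cpu_bound:       # round robin with a quantum
        job, rem = cpu_bound.popleft()
        run = min(QUANTUM, rem)
        print(f"run {job} for {run}")
        if rem > run:
            cpu_bound.append((job, rem - run))   # back of the queue
    elif io_bound:                        # FCFS: run to completion
        job, rem = io_bound.popleft()
        print(f"run {job} for {rem}")
    turn += 1
```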
3. Operating System - Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
Categories of Scheduling
1. Non-preemptive: Here the resource cannot be taken from a process until the process completes
execution. The switching of resources occurs when the running process terminates and moves to a
waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During
resource allocation, a process may switch from the running state to the ready state or from the waiting
state to the ready state. This switching occurs because the CPU may give priority to other processes
and replace the running process with a higher-priority one.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to
execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O device constitute this
queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready and run queues; the run queue can have
only one entry per processor core on the system.
The two-state process model refers to the running and non-running states.
Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the queue and loads them into memory for execution,
where they become eligible for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems
have no long-term scheduler. The long-term scheduler comes into play when a process changes state
from new to ready.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance
with the chosen set of criteria. It carries out the change of a process from the ready state to the running
state: the CPU scheduler selects a process from among the processes that are ready to execute and
allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which
process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces
the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In this condition, to remove the process from memory and make space
for other processes, the suspended process is moved to the secondary storage. This process is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.
Context switching is the mechanism to store and restore the state or context of a CPU in the Process
Control Block so that a process execution can be resumed from the same point at a later time. Using this
technique, a context switcher enables multiple processes to share a single CPU. Context switching is an
essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of the currently
running process is stored in its process control block. After this, the state for the process to run next is
loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can
start executing.
Context switches are computationally intensive since register and memory state must be saved and
restored. To reduce context-switching time, some hardware systems employ two or more sets of
processor registers. When a process is switched, the following information is stored for later use.
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
4. CPU Scheduling Criteria
CPU scheduling is essential for the system's performance; it ensures that processes are executed
correctly and on time. Different CPU scheduling algorithms have different properties, and the choice of a
particular algorithm depends on various factors. Many criteria have been suggested for comparing CPU
scheduling algorithms.
CPU Scheduling is a process that allows one process to use the CPU while another process is delayed due
to the unavailability of some resource such as I/O, thus making full use of the CPU. In short, CPU scheduling
decides the order and priority of the processes to run and allocates the CPU time based on various
parameters such as CPU usage, throughput, turnaround, waiting time, and response time. The purpose of
CPU Scheduling is to make the system more efficient, faster, and fairer.
CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are essential metrics used
to evaluate the efficiency of scheduling algorithms.
CPU scheduling has several criteria; some of them are discussed below.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically,
CPU utilization can range from 0 to 100 but in a real-time system, it varies from 40 to 90 percent depending
on the load upon the system.
2. Throughput
A measure of the work done by the CPU is the number of processes being executed and completed per
unit of time. This is called throughput. The throughput may vary depending on the length or duration of
the processes.
3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. The time
elapsed from the time of submission of a process to the time of completion is known as the turnaround
time. Turn-around time is the sum of times spent waiting to get into memory, waiting in the ready queue,
executing in CPU, and waiting for I/O.
4. Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it starts execution.
It only affects the waiting time of a process i.e. time spent by a process waiting in the ready queue.
5. Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce some output
fairly early and continue computing new results while previous results are being output to the user. Thus,
another criterion is the time taken from the submission of a request until the first response is produced.
This measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time
6. Completion Time
The completion time is the time when the process stops executing, which means that the process has
completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor the higher-
priority processes.
8. Predictability
A given process always should run in about the same amount of time under a similar system load.
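The formulas above can be checked with a few lines of Python; the per-process numbers here are hypothetical and chosen only to exercise the definitions:

```python
# Worked example of the criteria formulas above (all times in ms):
# name -> (arrival, burst, first CPU allocation, completion).

times = {
    "P1": (0, 5, 0, 9),
    "P2": (1, 3, 5, 8),
}

for p, (at, bt, first, ct) in times.items():
    tat = ct - at      # Turnaround Time = Completion Time - Arrival Time
    wt = tat - bt      # Waiting Time = Turnaround Time - Burst Time
    rt = first - at    # Response Time = first allocation - Arrival Time
    print(p, "TAT =", tat, "WT =", wt, "RT =", rt)
# P1 TAT = 9 WT = 4 RT = 0
# P2 TAT = 7 WT = 4 RT = 4
```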
Importance of Selecting the Right CPU Scheduling Algorithm for Specific Situations
It is important to choose the correct CPU scheduling algorithm because different algorithms have different
priorities for different CPU scheduling criteria. Different algorithms have different strengths and
weaknesses. Choosing the wrong CPU scheduling algorithm in a given situation can result in suboptimal
performance of the system.
Example: Here are some examples of CPU scheduling algorithms that work well in different situations.
Round Robin scheduling algorithm works well in a time-sharing system where tasks have to be completed
in a short period of time. SJF scheduling algorithm works best in a batch processing system where shorter
jobs have to be completed first in order to increase throughput. Priority scheduling algorithm works better
in a real-time system where certain tasks have to be prioritized so that they can be completed in a timely
manner.
There are many factors that influence the choice of CPU scheduling algorithm, including the criteria
discussed above.
Question: Which of the following process scheduling algorithms may lead to starvation?
(A) FIFO
(B) Round Robin
(C) Shortest Job Next
(D) None of the above
Correct option is (C)
Explanation: Shortest job next may lead to process starvation for processes which will require
a long time to complete if short processes are continually added.
5. States of a Process in Operating Systems
In an operating system, a process is a program that is being executed. During its execution, a process goes
through different states. Understanding these states helps us see how the operating system manages
processes, ensuring that the computer runs efficiently.
Each process goes through several stages throughout its life cycle. In this section, we discuss the different
states of a process in detail.
Process Lifecycle
When you run a program (which becomes a process), it goes through different phases before its
completion. These phases, or states, can vary depending on the operating system, but the most common
process lifecycle includes two, five, or seven states. Here's a simple explanation of these states:
The simplest way to think about a process’s lifecycle is with just two states:
1. Running: This means the process is actively using the CPU to do its work.
2. Not Running: This means the process is not currently using the CPU. It could be waiting for
something, like user input or data, or it might just be paused.
When a new process is created, it starts in the not running state. Initially, this process is kept in a program
called the dispatcher.
1. Not Running State: When the process is first created, it is not using the CPU.
2. Dispatcher Role: The dispatcher checks if the CPU is free (available for use).
3. Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU,
and it moves into the running state.
4. CPU Scheduler Role: When the CPU is available, the CPU scheduler decides which process
gets to run next. It picks the process based on a set of rules called the scheduling scheme,
which varies from one operating system to another.
The Five-State Model
The five-state process lifecycle is an expanded version of the two-state model. The two-state model works
well when all processes in the not running state are ready to run. However, in some operating systems, a
process may not be able to run because it is waiting for something, like input or data from an external
device. To handle this situation better, the not running state is divided into two separate states:
• New: This state represents a newly created process that hasn’t started running yet. It has not been
loaded into the main memory, but its process control block (PCB) has been created, which holds
important information about the process.
• Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting
for the operating system to give it a chance to execute.
• Running: This state means the process is currently being executed by the CPU. Since we’re
assuming there is only one CPU, at any time, only one process can be in this state.
• Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting
for some event to happen, like the completion of an input/output operation (for example, reading
data from a disk).
• Exit/Terminate: A process in this state has finished its execution or has been stopped by the user
for some reason. At this point, it is released by the operating system and removed from memory.
• New State: In this step, the process is about to be created but not yet created. It is the program
that is present in secondary memory that will be picked up by the OS to create the process.
• Ready State: New -> Ready to run. After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory. The process here is ready to run and is
waiting to get the CPU time for its execution. Processes that are ready for execution by the CPU
are maintained in a queue called a ready queue for ready processes.
• Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available processors.
• Blocked or Wait State: Whenever a process requests I/O, needs input from the user, or needs
access to a critical region (the lock for which is already acquired), it enters the blocked or wait
state. The process continues to wait in main memory and does not require the CPU. Once the
I/O operation is completed, the process goes to the ready state.
• Terminated or Completed State: Process is killed as well as PCB is deleted. The resources allocated
to the process will be released or deallocated.
• Suspend Ready: Process that was initially in the ready state but was swapped out of main
memory(refer to Virtual Memory topic) and placed onto external storage by the scheduler is said
to be in suspend ready state. The process will transition back to a ready state whenever the
process is again brought onto the main memory.
• Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that was
performing an I/O operation and was moved to secondary memory due to a lack of main memory.
When its work is finished, it may go to the suspend ready state.
• CPU and I/O Bound Processes: If the process is intensive in terms of CPU operations, then it is
called CPU bound process. Similarly, If the process is intensive in terms of I/O operations then it is
called I/O bound process.
A process can move between different states in an operating system based on its execution status and
resource availability. Here are some examples of how a process can move between different states:
• New to Ready: When a process is created, it is in a new state. It moves to the ready state
when the operating system has allocated resources to it and it is ready to be executed.
• Ready to Running: When the CPU becomes available, the operating system selects a process
from the ready queue depending on various scheduling algorithms and moves it to the
running state.
• Running to Blocked: When a process needs to wait for an event to occur (I/O operation
or system call), it moves to the blocked state. For example, if a process needs to wait for user
input, it moves to the blocked state until the user provides the input.
• Running to Ready: When a running process is preempted by the operating system, it moves
to the ready state. For example, if a higher-priority process becomes ready, the operating
system may preempt the running process and move it to the ready state.
• Blocked to Ready: When the event a blocked process was waiting for occurs, the process
moves to the ready state. For example, if a process was waiting for user input and the input
is provided, it moves to the ready state.
• Running to Terminated: When a process completes its execution or is terminated by the
operating system, it moves to the terminated state.
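These transitions can be summarized as a small state machine. The sketch below is illustrative only: it encodes the five states and the legal moves described above and rejects anything else.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Legal transitions from the list above (blocked -> terminated covers a
# process that is aborted while waiting).
ALLOWED = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.BLOCKED, State.TERMINATED},
    State.BLOCKED: {State.READY, State.TERMINATED},
    State.TERMINATED: set(),
}

def transition(current, new):
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

s = State.NEW
for nxt in (State.READY, State.RUNNING, State.BLOCKED, State.READY):
    s = transition(s, nxt)
    print(s)
```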
Types of Schedulers
• Long-Term Scheduler: Decides how many processes should be made to stay in the ready
state. This decides the degree of multiprogramming. Once a decision is taken it lasts for a long
time which also indicates that it runs infrequently. Hence it is called a long-term scheduler.
• Short-Term Scheduler: The short-term scheduler decides which process is to be executed next
and then calls the dispatcher. The dispatcher is the software that moves a process from ready
to running and vice versa; in other words, it performs context switching. It runs frequently.
The short-term scheduler is also called the CPU scheduler.
• Medium-Term Scheduler: The suspension decision is taken by the medium-term scheduler. The
medium-term scheduler is used for swapping, which is moving a process from main memory to
secondary memory and vice versa. Swapping is done to reduce the degree of multiprogramming.
Multiprogramming
We have many processes ready to run. There are two types of multiprogramming:
• Preemption: Process is forcefully removed from CPU. Pre-emption is also called time
sharing or multitasking.
• Non-Preemption: Processes are not removed until they complete their execution. Once
control is given to the CPU for a process's execution, control cannot be taken back
forcibly until the process releases the CPU by itself.
Degree of Multiprogramming
The maximum number of processes that can reside in the ready state decides the degree of
multiprogramming, e.g., if the degree of multiprogramming = 100, this means at most 100 processes can
reside in the ready state.
Operations on a Process
• Creation: The process will be ready once it has been created, enter the ready queue (main memory),
and be prepared for execution.
• Scheduling: The operating system picks one process to begin executing from among the numerous
processes that are currently in the ready queue. Scheduling is the act of choosing the next process
to run.
• Execution: The processor begins running the process as soon as it is scheduled to run. During
execution, a process may become blocked or wait, at which point the processor switches to executing
other processes.
• Killing or Deletion: The OS terminates the process once its purpose has been fulfilled; the process's
context (PCB) is then deleted.
• Blocking: When a process is waiting for an event or resource, it is blocked. The operating system will
place it in a blocked state, and it will not be able to execute until the event or resource becomes
available.
• Resumption: When the event or resource that caused a process to block becomes available, the
process is removed from the blocked state and added back to the ready queue.
• Context Switching: When the operating system switches from executing one process to another, it
must save the current process’s context and load the context of the next process to execute. This is
known as context switching.
• Inter-Process Communication: Processes may need to communicate with each other to share data or
coordinate actions. The operating system provides mechanisms for inter-process communication, such
as shared memory, message passing, and synchronization primitives.
• Process Synchronization: Multiple processes may need to access a shared resource or critical section
of code simultaneously. The operating system provides synchronization mechanisms to ensure that only
one process can access the resource or critical section at a time.
• Process States: Processes may be in one of several states, including ready, running, waiting, and
terminated. The operating system manages the process states and transitions between them.
Features of The Process State
• A process can move from the running state to the waiting state if it needs to wait for a
resource to become available.
• A process can move from the waiting state to the ready state when the resource it was
waiting for becomes available.
• A process can move from the ready state to the running state when it is selected by the
operating system for execution.
• The scheduling algorithm used by the operating system determines which process is
selected to execute from the ready state.
• The operating system may also move a process from the running state to the ready state
to allow other processes to execute.
• A process can move from the running state to the terminated state when it completes its
execution.
• A process can move from the waiting state directly to the terminated state if it is aborted
or killed by the operating system or another process.
• A process can go through ready, running and waiting state any number of times in its
lifecycle but new and terminated happens only once.
• The process state includes information about the program counter, CPU registers, memory
allocation, and other resources used by the process.
• The operating system maintains a process control block (PCB) for each process, which
contains information about the process state, priority, scheduling information, and other
process-related data.
• The process state diagram is used to represent the transitions between different states of
a process and is an essential concept in process management in operating systems.
Conclusion
In conclusion, understanding the states of a process in an operating system is essential for comprehending
how the system efficiently manages multiple processes. These states—new, ready, running, waiting, and
terminated—represent different stages in a process’s life cycle. By transitioning through these states, the
operating system ensures that processes are executed smoothly, resources are allocated effectively, and
the overall performance of the computer is optimized. This knowledge helps us appreciate the complexity
and efficiency behind the scenes of modern computing.
6. Process Transition Diagram
7. Process Schedulers in Operating System
A process is the instance of a computer program in execution.
Process scheduling is the activity of the process manager that handles the removal of the running process
from the CPU and the selection of another process based on a particular strategy. Throughout its lifetime,
a process moves between various scheduling queues, such as the ready queue, waiting queue, or devices
queue.
Types of Process Schedulers
1. Long-Term or Job Scheduler
The long-term scheduler loads a process from disk into main memory for execution and moves the new
process to the 'Ready State'.
2. Short-Term or CPU Scheduler
The CPU scheduler is responsible for selecting one process from the ready state and running it (i.e.,
assigning the CPU to it).
• STS (Short Term Scheduler) must select a new process for the CPU frequently to avoid
starvation.
• The CPU scheduler uses different scheduling algorithms to balance the allocation of CPU
time.
• It picks a process from ready queue.
• Its main objective is to make the best use of CPU.
• It mainly calls dispatcher.
• Fastest among the three (that is why called Short Term).
The dispatcher is responsible for loading the process selected by the Short-term scheduler on the CPU
(Ready to Running State). Context switching is done by the dispatcher only. A dispatcher does the following
work:
• Saving context (process control block) of previously running process if not finished.
• Switching system mode to user mode.
• Jumping to the proper location in the newly loaded program.
Time taken by dispatcher is called dispatch latency or process context switch time.
3. Medium-Term Scheduler
Medium Term Scheduler (MTS) is responsible for moving a process from memory to disk (or swapping).
Some Other Schedulers
• I/O Schedulers: I/O schedulers are in charge of managing the execution of I/O operations such as
reading and writing to discs or networks. They can use various algorithms to determine the order
in which I/O operations are executed, such as FCFS (First-Come, First-Served) or RR (Round Robin).
• Real-Time Schedulers: In real-time systems, real-time schedulers ensure that critical tasks are
completed within a specified time frame. They can prioritize and schedule tasks using various
algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).
Context Switching
In order for a process execution to be continued from the same point at a later time, context switching
is a mechanism to store and restore the state or context of a CPU in the Process Control Block. A context
switcher makes it possible for multiple processes to share a single CPU using this method. A multitasking
operating system must include context switching among its features.
The state of the currently running process is saved into the process control block when the scheduler
switches the CPU from executing one process to another. The state used to set the program counter,
registers, etc. for the process that will run next is then loaded from its own PCB. After that, the second
process can start executing. When a process is switched, the following information is stored for later use:
• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
Conclusion
Process schedulers are essential parts of an operating system that manage how the CPU handles multiple
tasks or processes. They ensure that processes are executed efficiently, making the best use of CPU
resources and maintaining system responsiveness. By choosing the right process to run at the right time,
schedulers help optimize overall system performance, improve user experience, and ensure fair access to
CPU resources among competing processes.
8. Process Control Block (PCB)
A Process Control Block (PCB) is a data structure used by the Operating System to store
essential information about a process. Each process has a unique PCB identified by a Process ID
(PID). It helps the OS manage and track processes.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
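As an illustration, a PCB can be pictured as a record like the Python dataclass below. The fields mirror the context-switching information listed earlier in these notes; a real OS keeps this in kernel data structures (for example, Linux's task_struct) with many more fields.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # unique Process ID
    state: str = "NEW"             # current process state
    program_counter: int = 0       # where to resume execution
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0              # scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    memory_limits: tuple = (0, 0)  # base and limit register values

pcb = PCB(pid=42, priority=3)
print(pcb)
```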
9. Process Address Space in Operating Systems
What is a Process Address Space?
A process address space refers to the set of memory addresses that a process can use to execute
and store data. It is the logical view of memory assigned to a process by the operating system.
Each process operates in its own address space, isolated from other processes, ensuring memory
protection and security.
Conclusion
The process address space enables:
• Efficient multitasking,
• Safe isolation of processes,
• Virtual memory support.
It is essential for process management and memory protection in modern operating systems.
10. Operating System - multi-threading
What is Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps track
of which instruction to execute next, system registers which hold its current working variables, and a stack
which contains the execution history.
A thread shares information such as the code segment, data segment and open files with its peer threads.
When one thread alters a code segment memory item, all other threads see that.
A thread is also called a lightweight process. Threads provide a way to improve application performance
through parallelism. Threads represent a software approach to improving the performance of an operating
system by reducing the overhead; a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing network
servers and web servers. They also provide a suitable foundation for parallel execution of applications on
shared-memory multiprocessors.
Process vs Thread
• A process is heavyweight or resource-intensive; a thread is lightweight, taking fewer resources
than a process.
• Process switching needs interaction with the operating system; thread switching does not.
• In multiple processing environments, each process executes the same code but has its own memory
and file resources; all threads of a process can share the same set of open files and child processes.
• If one process is blocked, then no other process can execute until the first process is unblocked;
while one thread is blocked and waiting, a second thread in the same task can run.
• Multiple processes without using threads use more resources; multithreaded processes use fewer
resources.
• In multiple processes, each process operates independently of the others; one thread can read,
write or change another thread's data.
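The sharing in this comparison is easy to demonstrate. In the sketch below, two Python threads update the same global counter, so the update must be guarded by a lock; this is exactly the "one thread can change another thread's data" property from the list above.

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:        # threads share data, so guard the update
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)            # 200000: both threads updated shared state
```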
Advantages of Thread
• Threads minimize the context-switching time.
• Use of threads provides concurrency within a process.
• Threads allow efficient communication.
• It is more economical to create and context-switch threads than processes.
• Threads allow utilization of multiprocessor architectures on a greater scale and with more efficiency.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread library
contains code for creating and destroying threads, for passing message and data between threads, for
scheduling thread execution and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application-specific.
• User-level threads are fast to create and manage.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application can be
programmed to be multithreaded. All of the threads within an application are supported within a single
process.
The Kernel maintains context information for the process as a whole and for individual threads within the
process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation,
scheduling and management in Kernel space. Kernel threads are generally slower to create and manage
than the user threads.
Advantages
• Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is
a good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the entire
process. There are three types of multithreading models:
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of
kernel threads.
The following diagram shows the many-to-many threading model where 6 user level threads are
multiplexing with 6 kernel level threads. In this model, developers can create as many user threads as
necessary and the corresponding Kernel threads can run in parallel on a multiprocessor machine. This
model provides the best accuracy on concurrency and when a thread performs a blocking system call, the
kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread management
is done in user space by the thread library. When a thread makes a blocking system call, the entire process
is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.
If the user-level thread libraries are implemented in the operating system in such a way that the system
does not support them, then the Kernel threads use the many-to-one relationship modes.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to run when a
thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel
thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
• Deadlock is a situation in computing where two or more processes are unable to proceed because
each is waiting for the other to release resources.
• Key concepts include mutual exclusion, resource holding, circular wait, and no preemption.
Consider an example when two trains are coming toward each other on the same track and there is only
one track, none of the trains can move once they are in front of each other. This is a practical example of
deadlock.
Before going into detail about how deadlock occurs in the Operating System, let’s first discuss how the
Operating System uses the resources present. A process in an operating system uses resources in the
following way.
• Requests a resource
• Use the resource
• Releases the resource
A deadlock situation occurs in operating systems when there are two or more processes that hold some
resources and wait for resources held by other(s). For example, Process 1 is holding Resource 1 and
waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.
Examples of Deadlock
There are several examples of deadlock. Some of them are mentioned below.
1. The system has 2 tape drives. P0 and P1 each hold one tape drive and each needs another one.
2. Two semaphores A and B, each initialized to 1. P0 executes wait(A) and P1 executes wait(B); each
then blocks forever on its second wait:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
3. Assume 200KB of space is available for allocation, and the following sequence of events occurs:
P0: Request 80KB; then Request 60KB;
P1: Request 70KB; then Request 80KB;
Deadlock occurs if both processes progress to their second request: 150KB is already allocated, leaving
only 50KB, which satisfies neither the pending 60KB request nor the pending 80KB request.
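Example 2 can be rendered in Python as the sketch below. It uses locks in place of semaphores and acquire timeouts so the demonstration terminates instead of hanging; with real blocking waits, both threads would be stuck forever.

```python
import threading
import time

A, B = threading.Lock(), threading.Lock()

def p0():
    with A:                            # wait(A)
        time.sleep(0.1)                # give p1 time to grab B
        if not B.acquire(timeout=1):   # wait(B): would block forever
            print("P0 stuck waiting for B")
        else:
            B.release()

def p1():
    with B:                            # wait(B)
        time.sleep(0.1)                # give p0 time to grab A
        if not A.acquire(timeout=1):   # wait(A): would block forever
            print("P1 stuck waiting for A")
        else:
            A.release()

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start(); t0.join(); t1.join()
```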
Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)
• Mutual Exclusion: Only one process can use a resource at any given time i.e. the resources are
non-sharable.
• Hold and Wait: A process is holding at least one resource at a time and is waiting to acquire other
resources held by some other process.
• No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
• Circular Wait: A set of processes wait for each other in a circular fashion. For example, let's
say there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2,
P2 depends on P3, and P3 depends on P0. This creates a circular relation between all these
processes and they have to wait forever to be executed.
There are three methods for handling deadlock: deadlock prevention and avoidance, deadlock detection
and recovery, and deadlock ignorance. First, we will discuss Deadlock Prevention, then Deadlock
Avoidance.
Deadlock Prevention
In deadlock prevention, the aim is to ensure that at least one of the four necessary conditions for deadlock
can never be fulfilled. This can be done as follows:
(i) Mutual Exclusion
We only use a lock for non-shareable resources; if a resource is shareable (like a read-only file), we do
not use locks. This ensures that in the case of a shareable resource, multiple processes can access it at
the same time. The problem is that we can only do this for shareable resources; for non-shareable
resources like a printer, we still have to use mutual exclusion.
(ii) Hold and Wait
To ensure that hold and wait never occurs in the system, we must guarantee that whenever a process
requests a resource, it does not hold any other resources.
• We can provide all the resources a process requires for its execution before it starts executing.
The problem: if a process needs three resources and we allocate all of them before execution starts,
it might initially need only two and require the third an hour later. During that time the third
resource sits idle with this process, starving another process that wants it, even though it could
have been allocated to that process so it could complete its execution.
• We can ensure that when a process requests any resource, it holds no other resources at that
time. For example, let there be three resources: a DVD, a file, and a printer. First the process
requests the DVD and the file to copy data into the file (suppose this takes one hour); after that,
the process frees both resources and then requests the file and the printer to print that file.
(iii) No Preemption
If a process is holding some resources and requests other resources that are held by other processes and
are not available immediately, then the resources currently held by the process are preempted. After
some time, the process requests the old resources again, together with the other required resources, in
order to restart.
For example, process P1 holds resource R1 and requests R2, which is held by process P2. R1 is then
preempted from P1, and after some time P1 tries to restart by requesting both R1 and R2.
Live Lock: Livelock is the situation where two or more processes continuously change their state in
response to each other without making any real progress.
Example:
• Suppose there are two processes P1 and P2 and two resources R1 and R2.
• According to the above method, both P1 and P2 detect that they cannot acquire the second
resource, so they release the resource they are holding and then try again.
• Continuous cycle: P1 again acquires R1 and requests R2, while P2 again acquires R2 and requests
R1. There is no overall progress, yet the processes keep changing their state as they preempt
resources and then hold them again. This is the situation of livelock.
(iv) Circular Wait
To remove circular wait from the system, we can impose an ordering on the resources in which a process
must acquire them.
For example, if there are processes P1 and P2 and resources R1 and R2, we can fix the resource acquisition
order: a process first needs to acquire R1 and then R2. The process that acquired R1 will be allowed to
acquire R2; the other process must wait until R1 is free. A sketch of this rule is shown below.
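The sketch below shows the ordering rule in Python: both threads always acquire r1 before r2, so the circular wait from the earlier deadlock example can no longer form.

```python
import threading
import time

r1, r2 = threading.Lock(), threading.Lock()

def worker(name):
    with r1:               # everyone takes r1 first...
        time.sleep(0.1)
        with r2:           # ...then r2, so no cycle can form
            print(name, "has both resources")

a = threading.Thread(target=worker, args=("p1",))
b = threading.Thread(target=worker, args=("p2",))
a.start(); b.start(); a.join(); b.join()
```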
These are the deadlock prevention methods, but in practice only the fourth method is used, as the removal
of the other three conditions has significant disadvantages.
Deadlock Avoidance
Avoidance is forward-looking. To use the strategy of avoidance, we have to make an assumption: we must
ensure that all information about the resources a process will need is known to us before the process
executes. We use the Banker's Algorithm to avoid deadlock.
In prevention and avoidance, we preserve the correctness of data, but performance decreases.
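Below is a minimal sketch of the safety test at the heart of the Banker's Algorithm, for a single resource type and with made-up numbers: a state is safe if there is some order in which every process can obtain its maximum need and finish.

```python
# Banker's safety check sketch (one resource type, hypothetical data).

def is_safe(available, allocation, max_need):
    need = [m - a for m, a in zip(max_need, allocation)]
    finished = [False] * len(allocation)
    work = available
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and need[i] <= work:
                work += allocation[i]   # process i finishes, releases all
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)        # safe iff every process can finish

# 3 processes, 3 free units, current allocations and maximum demands:
print(is_safe(3, [1, 2, 2], [4, 6, 3]))   # True: all three can finish
print(is_safe(1, [2, 3, 2], [4, 6, 5]))   # False: no process can finish
```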
If deadlock prevention or avoidance is not applied in the software, then we can handle deadlock by
deadlock detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the process and check whether there is a deadlock or
not in the system.
2. If found deadlock in the first phase then we apply the algorithm for recovery of the deadlock.
In Deadlock detection and recovery, we get the correctness of data but performance decreases.
Deadlock Detection
Deadlock detection is a process in computing where the system checks if there are any sets of processes
that are stuck waiting for each other indefinitely, preventing them from moving forward. In simple words,
deadlock detection is the process of finding out whether any processes are stuck in a waiting loop or not.
There are several algorithms, such as:
• Banker’s Algorithm
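A standard way to implement detection (sketched below with a hypothetical graph; this illustration is ours, not an algorithm named in the notes) is to build a wait-for graph, where an edge P -> Q means P is waiting for a resource held by Q, and then look for a cycle: a cycle means deadlock.

```python
# Cycle detection on a wait-for graph via depth-first search.
# Every process must appear as a key in the graph dictionary.

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GREY
        for v in graph[u]:
            if color[v] == GREY:                 # back edge: cycle found
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # circular wait
print(has_cycle(wait_for))                              # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": [], "P3": []}))    # False
```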
Deadlock Recovery
There are several Deadlock Recovery Techniques:
• Manual Intervention
• Automatic Recovery
• Process Termination
• Resource Preemption
1. Manual Intervention
When a deadlock is detected, one option is to inform the operator and let them handle the situation
manually. While this approach allows for human judgment and decision-making, it can be time-consuming
and may not be feasible in large-scale systems.
2. Automatic Recovery
An alternative approach is to enable the system to recover from deadlock automatically. This method
involves breaking the deadlock cycle by either aborting processes or preempting resources. Let’s delve
into these strategies in more detail.
3. Process Termination
• Abort all Deadlocked Processes: This approach breaks the deadlock cycle, but it comes at a
significant cost. The processes that were aborted may have executed for a considerable amount
of time, resulting in the loss of partial computations. These computations may need to be
recomputed later.
• Abort one process at a time: Instead of aborting all deadlocked processes simultaneously, this
strategy involves selectively aborting one process at a time until the deadlock cycle is eliminated.
However, this incurs overhead as a deadlock-detection algorithm must be invoked after each
process termination to determine if any processes are still deadlocked.
4. Resource Preemption
• Selecting a Victim: Resource preemption involves choosing which resources and processes should
be preempted to break the deadlock. The selection order aims to minimize the overall cost of
recovery. Factors considered for victim selection may include the number of resources held by a
deadlocked process and the amount of time the process has consumed.
• Rollback
If a resource is preempted from a process, the process cannot continue its normal execution as it
lacks the required resource. Rolling back the process to a safe state and restarting it is a common
approach. Determining a safe state can be challenging, leading to the use of total rollback, where
the process is aborted and restarted from scratch.
• Starvation Prevention: To prevent resource starvation, it is essential to ensure that the same
process is not always chosen as a victim. If victim selection is solely based on cost factors, one
process might repeatedly lose its resources and never complete its designated task. To address
this, it is advisable to limit the number of times a process can be chosen as a victim, including the
number of rollbacks in the cost factor.
Deadlock Ignorance
If a deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows
and UNIX take. We use the ostrich algorithm for deadlock ignorance.
“In Deadlock, ignorance performance is better than the above two methods but the correctness of data is
not there.”
Safe State
A safe state can be defined as a state in which there is no deadlock. A system is in a safe state if there
exists a sequence in which every process can obtain its needed resources:
• If a process needs an unavailable resource, it may wait until the same resource has been released
by a process to which it has already been allocated. If such a sequence does not exist, the system
is in an unsafe state.
System Model :
• For the purposes of deadlock discussion, a system can be modeled as a collection of limited
resources that can be divided into different categories and allocated to a variety of processes, each
with different requirements.
• Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are examples of
resource categories.
• By definition, all resources within a category are equivalent, and any of the resources within that
category can equally satisfy a request from that category. If this is not the case (i.e. if there is some
difference between the resources within a category), then that category must be subdivided
further. For example, the term “printers” may need to be subdivided into “laser printers” and
“color inkjet printers.”
• When every process in a set is waiting for a resource that is currently assigned to another process
in the set, the set is said to be deadlocked.
Operations:
In normal operation, a process must request a resource before using it and release it when finished, as
shown below.
1. Request – If the request cannot be granted immediately, the process must wait until the required
resource(s) become available. The system, for example, uses the functions open(), malloc(),
new(), and request().
2. Use – The process makes use of the resource, such as printing to a printer or reading from a file.
3. Release – The process relinquishes the resource, allowing it to be used by other processes.
Necessary Conditions:
There are four conditions that must be met in order to achieve deadlock as follows.
• Mutual Exclusion – At least one resource must be kept in a non-shareable state; if another
process requests it, it must wait for it to be released.
• Hold and Wait – A process must hold at least one resource while also waiting for at least one
resource that another process is currently holding.
• No preemption – Once a process holds a resource (i.e. after its request is granted), that
resource cannot be taken away from that process until the process voluntarily releases it.
• Circular Wait – There must be a set of processes P0, P1, P2, …, PN such that every P[i] is
waiting for P[(i + 1) % (N + 1)]. (It is important to note that this condition implies the
hold-and-wait condition, but dealing with the four conditions is easier if they are considered
separately.)
Methods for Handling Deadlocks:
1. Preventing or avoiding deadlock by not allowing the system to get stuck in a waiting loop.
2. Detection and recovery of deadlocks, When deadlocks are detected, abort the process or preempt
some resources.
3. Ignore the problem entirely.
4. To avoid deadlocks, the system requires more information about all processes. The system, in
particular, must understand what resources a process will or may request in the future. (
Depending on the algorithm, this can range from a simple worst-case maximum to a complete
resource request and release plan for each process. )
5. Deadlock detection is relatively simple, but deadlock recovery necessitates either aborting
processes or preempting resources, neither of which is an appealing option.
6. If deadlocks are not avoided or detected, the system will gradually slow down as more processes
become stuck waiting for resources that the deadlock has blocked and other waiting processes.
Unfortunately, when the computing requirements of a real-time process are high, this slowdown
can be confused with a general system slowdown.
Deadlock Prevention: Deadlocks can be prevented by denying at least one of the four necessary
conditions, as follows.
Condition-1:
Mutual Exclusion:
• Unfortunately, some resources, such as printers and tape drives, require a single process to have
exclusive access to them.
Condition-2:
Hold and Wait:
• Make it a requirement that all processes request all resources at the same time. This can be a
waste of system resources if a process requires one resource early in its execution but does not
require another until much later.
• Processes that hold resources must release them prior to requesting new ones, and then re-
acquire the released resources alongside the new ones in a single new request. This can be a
problem if a process uses a resource to partially complete an operation and then fails to re-allocate
it after it is released.
• If a process necessitates the use of one or more popular resources, either of the methods
described above can result in starvation.
Condition-3:
No Preemption: When possible, preemption of process resource allocations can help to avoid deadlocks.
• One approach is that if a process is forced to wait when requesting a new resource, all other
resources previously held by this process are implicitly released (preempted), forcing this process
to re-acquire the old resources alongside the new resources in a single request, as discussed
previously.
• Another approach is that when a resource is requested, and it is not available, the system looks to
see what other processes are currently using those resources and are themselves blocked while
waiting for another resource. If such a process is discovered, some of their resources may be
preempted and added to the list of resources that the process is looking for.
• Either of these approaches may be appropriate for resources whose states can be easily saved and
restored, such as registers and memory, but they are generally inapplicable to other devices, such
as printers and tape drives.
Condition-4:
Circular Wait :
• To avoid circular waits, number all resources and insist that processes request resources
in strictly increasing (or decreasing) order.
• To put it another way, before requesting resource Rj, a process must first release all Ri
such that i >= j.
• The relative ordering of the various resources is a significant challenge in this scheme.
Deadlock Avoidance :
• The general idea behind deadlock avoidance is to avoid deadlocks by avoiding at least one
of the aforementioned conditions.
• This necessitates more information about each process AND results in low device
utilization. (This is a conservative approach.)
• The scheduler only needs to know the maximum number of each resource that a process
could potentially use in some algorithms. In more complex algorithms, the scheduler can
also use the schedule to determine which resources are required and in what order.
• When a scheduler determines that starting a process or granting resource requests will
result in future deadlocks, the process is simply not started or the request is denied.
• The number of available and allocated resources, as well as the maximum requirements
of all processes in the system, define a resource allocation state.
Deadlock Detection :
• If deadlocks cannot be avoided, another approach is to detect them and recover in some way.
• Aside from the performance hit of constantly checking for deadlocks, a policy/algorithm for
recovering from deadlocks must be in place, and when processes must be aborted or have their
resources preempted, there is the possibility of lost work.
Recovery From Deadlock: There are three basic approaches to recovering from deadlock:
1. Inform the system operator and give him/her permission to intervene manually.
2. Stop one or more of the processes involved in the deadlock.
3. Preempt resources from one or more of the deadlocked processes.
Approach of Recovery from Deadlock: Here, we will discuss the approach of Recovery from Deadlock as
follows.
Approach-1: Process Termination: There are two basic approaches, both of which recover the resources allocated to the terminated processes, as follows.
1. Stop all processes that are involved in the deadlock. This does break the deadlock, but at
the expense of terminating more processes than are absolutely necessary.
2. Processes should be terminated one at a time until the deadlock is broken. This method
is more conservative, but it necessitates performing deadlock detection after each step.
In the latter case, several factors can influence which process is terminated next, as follows.
1. Selecting a victim – Many of the decision criteria outlined above apply to determine which
resources to preempt from which processes.
2. Rollback – A preempted process should ideally be rolled back to a safe state before the point at
which that resource was originally assigned to the process. Unfortunately, determining such a safe
state can be difficult or impossible, so the only safe rollback is to start from the beginning. (In
other words, halt and restart the process.)
3. Starvation – How do you ensure that a process does not starve because its resources are constantly being preempted? One option is to use a priority system and raise the priority of a process whenever its resources are preempted. It should eventually gain a high enough priority that it will no longer be preempted (a minimal sketch follows this list).
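A minimal, hypothetical sketch of such priority raising (the process records and the pick_victim policy are invented for illustration):

# Each time a process is chosen as a preemption victim, its priority is
# raised, so it cannot be picked as the victim indefinitely.
def pick_victim(processes):
    victim = min(processes, key=lambda p: p["priority"])  # cheapest victim
    victim["priority"] += 1   # aging: repeated victims climb in priority
    return victim

procs = [{"pid": 1, "priority": 5}, {"pid": 2, "priority": 1}]
print(pick_victim(procs)["pid"])   # 2: the low-priority process is picked
print(pick_victim(procs)["pid"])   # 2 again, but its priority is now rising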
Deadlock Prevention and Avoidance
Deadlock prevention and avoidance are strategies used in computer systems to ensure that different
processes can run smoothly without getting stuck waiting for each other forever. Think of it like a traffic
system where cars (processes) must move through intersections (resources) without getting into a
gridlock.
A deadlock can arise only when the following four conditions hold simultaneously:
• Mutual Exclusion
• Hold and Wait
• No Preemption
• Circular Wait
The prevention strategies below attack these conditions one at a time.
Deadlock Prevention
Eliminate Mutual Exclusion
It is not possible to violate mutual exclusion for some resources, such as a tape drive, because they are inherently non-shareable. For other resources, like printers, we can use a technique called Spooling (Simultaneous Peripheral Operations Online).
In spooling, when multiple processes request the printer, their jobs (instructions of the processes that
require printer access) are added to the queue in the spooler directory. The printer is allocated to jobs on
a First-Come, First-Served (FCFS) basis. In this way, a process does not have to wait for the printer and can
continue its work after adding its job to the queue.
Eliminate Hold and Wait
Hold and wait is a condition in which a process holds one resource while simultaneously waiting for another resource that is held by a different process. The process cannot continue until it gets all the required resources. There are two ways to break this condition:
• By eliminating wait: The process specifies all the resources it requires in advance, so that it does not have to wait for allocation after execution starts (see the sketch after this list).
For Example, Process1 declares in advance that it requires both Resource1 and Resource2.
• By eliminating hold: The process has to release all resources it is currently holding before making a new request.
For Example: Process1 must release Resource2 and Resource3 before requesting Resource1.
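As a sketch of the "request everything up front" idea, the following Python fragment grants a multi-resource request atomically or not at all, so a waiting process never holds anything (the resource names and counts are invented for illustration):

import threading

_alloc_lock = threading.Lock()           # guards the allocation state
available = {"R1": 1, "R2": 2, "R3": 1}  # hypothetical resource counts

def request_all_or_nothing(request):
    # Grant the whole request atomically, or nothing at all. A process
    # that cannot get everything it needs holds nothing while it waits,
    # so the hold-and-wait condition never arises.
    with _alloc_lock:
        if all(available[r] >= n for r, n in request.items()):
            for r, n in request.items():
                available[r] -= n
            return True
        return False                     # caller retries later, holding nothing

print(request_all_or_nothing({"R1": 1, "R2": 1}))  # True: both granted
print(request_all_or_nothing({"R1": 1}))           # False: R1 is exhausted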
Eliminate No Preemption
Preemption is temporarily interrupting an executing task and later resuming it. Two ways to eliminate No
Preemption:
• Processes must release resources voluntarily: A process should only give up resources it holds
when it completes its task or no longer needs them.
• Avoid partial allocation: Allocate all required resources to a process at once, before it begins execution. If not all of the resources are available, the process must wait.
To eliminate circular wait for deadlock prevention, we can impose a total ordering on resource types and require that processes request resources in increasing order of that numbering. This prevents circular chains of waiting processes, as no process can request a resource numbered lower than one it already holds.
Another approach to dealing with deadlocks is to detect and recover from them when they occur. This can
involve killing one or more of the processes involved in the deadlock or releasing some of the resources
they hold.
Deadlock Avoidance
Deadlock avoidance ensures that a resource request is only granted if it won’t lead to deadlock, either
immediately or in the future. Since the kernel can’t predict future process behavior, it uses a conservative
approach. Each process declares the maximum number of resources it may need. The kernel allows
requests in stages, checking for potential deadlocks before granting them. A request is granted only if no
deadlock is possible; otherwise, it stays pending. This approach is conservative, as a process may finish
without using the maximum resources it declared.
Banker’s Algorithm
Bankers’ Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every resource request made by a process. It checks whether granting the request keeps the system in a safe state; if so, the request is allowed, and if no safe state would remain, the request is denied. A request can be granted only if both of the following hold:
• The request made by the process is less than or equal to the maximum remaining need declared for that process.
• The request made by the process is less than or equal to the resources currently available in the system.
Timeouts
To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the amount of
time a process can wait for a resource. If the resource does not become available within the timeout period, the process can be forced to release its current resources and try again later (see the sketch below).
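For instance, Python's threading.Lock.acquire() accepts a timeout; a minimal sketch of timeout-based acquisition might look like this (the printer lock and job handling are illustrative):

import threading

printer = threading.Lock()   # stands in for a single shared resource

def use_printer(job, timeout=2.0):
    # acquire() returns False if the lock was not obtained within `timeout`
    # seconds; the caller can then back off, release what it holds, and retry.
    if printer.acquire(timeout=timeout):
        try:
            print(f"printing {job}")
        finally:
            printer.release()
        return True
    return False   # timed out: give up rather than wait indefinitely

use_printer("report.pdf")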
Example
Total resources in the system:
     A  B  C  D
     6  5  7  6
Available system resources:
     A  B  C  D
     3  1  1  2
Processes (currently allocated resources):
        A  B  C  D
    P1  1  2  2  1
    P2  1  0  3  3
    P3  1  2  1  0
Maximum resources required by each process:
        A  B  C  D
    P1  3  3  2  2
    P2  1  2  3  4
    P3  1  3  5  0
Need = Maximum Resources Requirement – Currently Allocated Resources:
        A  B  C  D
    P1  2  1  0  1
    P2  0  2  0  1
    P3  0  1  4  0
Here Need(P1) = (2, 1, 0, 1) fits within the available resources (3, 1, 1, 2), so P1 can finish first and release its allocation; P2 and P3 can then finish in turn. Hence <P1, P2, P3> is a safe sequence and the state is safe.
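To make the safety test concrete, here is a minimal Python sketch of the Banker's safety algorithm applied to the example above (the function and variable names are illustrative, not a standard API):

def is_safe(available, allocation, need):
    work = available[:]                # resources currently free
    finished = [False] * len(need)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            # A process can finish if its remaining need fits in `work`.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Once finished, it releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(f"P{i + 1}")
                progress = True
    return all(finished), sequence

available  = [3, 1, 1, 2]                    # A B C D
allocation = [[1, 2, 2, 1], [1, 0, 3, 3], [1, 2, 1, 0]]
need       = [[2, 1, 0, 1], [0, 2, 0, 1], [0, 1, 4, 0]]

print(is_safe(available, allocation, need))  # (True, ['P1', 'P2', 'P3'])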
Recovery from Deadlock in Operating System
In today’s world of computer systems and multitasking environments, deadlock is an undesirable situation
that can bring operations to a halt. When multiple processes compete for exclusive access to resources
and end up in a circular waiting pattern, a deadlock occurs. To maintain the smooth functioning of an
operating system, it is crucial to implement recovery mechanisms that can break these deadlocks and
restore the system’s productivity.
“Recovery from Deadlock in Operating Systems” refers to the set of techniques and algorithms designed
to detect, resolve, or mitigate deadlock situations. These methods ensure that the system can continue
processing tasks efficiently without being trapped in an eternal standstill. Let’s take a closer look at some
of the key strategies employed.
In this approach, the OS implements no mechanism to avoid or prevent deadlocks; the system assumes that a deadlock will undoubtedly occur at some point. The OS periodically checks the system for deadlocks, and if it finds any, it uses various recovery techniques to restore the system. In other words, once a Deadlock Detection Algorithm determines that a deadlock has occurred in the system, the system must recover from that deadlock.
What is Deadlock?
In an operating system, a deadlock is a situation where a group of processes is stuck and unable to proceed
because each one is waiting for a resource that another process in the group is holding.
Imagine four people at a round table, each with one fork, and they need two forks to eat. If everyone picks
up the fork to their left and waits for the fork on their right, no one can eat. They are all stuck, waiting
forever. This is a deadlock.
A deadlock can occur only when all four of the following conditions hold simultaneously:
• Mutual Exclusion: Resources cannot be shared; they can only be used by one process at a time.
• Hold and Wait: Processes holding resources can request additional resources.
• No Preemption: Resources cannot be forcibly taken away from the processes holding them.
• Circular Wait: A closed chain of processes exists, where each process holds at least one resource needed by the next process in the chain.
There are several ways of handling a deadlock, some of which are mentioned below:
1. Process Termination
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:
• Abort all the Deadlocked Processes: Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have been computing for a long time; the results of those partial computations must be discarded and will probably have to be recomputed later.
• Abort one process at a time until the deadlock is eliminated: Abort one deadlocked process at a time until the deadlock cycle is eliminated from the system. This method incurs considerable overhead because, after aborting each process, a deadlock detection algorithm must be run to check whether any processes are still deadlocked (a schematic loop is sketched below).
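A schematic version of this one-at-a-time loop might look as follows; detect_deadlock, choose_victim, and terminate are hypothetical stand-ins for the system's detection algorithm and termination policy:

def recover_by_termination(detect_deadlock, choose_victim, terminate):
    while True:
        deadlocked = detect_deadlock()      # set of deadlocked processes
        if not deadlocked:
            break                           # the deadlock cycle is broken
        victim = choose_victim(deadlocked)  # e.g. least partial work lost
        terminate(victim)                   # then re-run detection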
Advantages of Process Termination
• It ensures that the deadlock is resolved quickly, as all processes involved in the deadlock are terminated simultaneously.
• It frees up the resources that were being used by the deadlocked processes, making those resources available to other processes.
Disadvantages of Process Termination
• It can result in the loss of data and other resources that were being used by the terminated processes.
• It may cause further problems if the terminated processes were critical to the system’s operation.
• It may result in wasted work, as the terminated processes may have already completed a significant amount of computation before being terminated.
2. Resource Preemption
To eliminate deadlocks using resource preemption, we preempt some resources from processes and give
those resources to other processes. This method will raise three issues:
• Selecting a Victim: We must determine which resources are to be preempted from which processes, and in what order, so as to minimize the cost.
• Rollback : We must determine what should be done with the process from which resources are
preempted. One simple idea is total rollback. That means aborting the process and restarting it.
• Starvation : In a system, it may happen that the same process is always picked as a victim. As a
result, that process will never complete its designated task. This situation is called Starvation and
must be avoided. One solution is that a process must be picked as a victim only a finite number of
times.
Advantages of Resource Preemption
1. It can help in breaking a deadlock without terminating any processes, thus preserving data and
resources.
2. It is more efficient than process termination, as it targets only the resources that are causing the deadlock.
Disadvantages of Resource Preemption
1. It may lead to increased overhead due to the need to determine which resources and processes should be preempted.
2. It may cause further problems if the preempted resources were critical to the system’s operation.
3. It may cause delays in the completion of processes if their resources are frequently preempted.
3. Priority Inversion
A technique for breaking deadlocks in real-time systems is called priority inversion. This approach alters the priorities of the processes to prevent stalemates: a higher priority is given to the process that already holds the needed resources, and a lower priority is given to the process that is still waiting for them. The resulting inversion of priorities can impair system performance, and because higher-priority processes may continue to take precedence, lower-priority processes may be starved of resources.
4. Rollback
In database systems, rolling back is a common technique for breaking deadlocks. With this technique, the system reverses the transactions of the involved processes to a point in time before the deadlock. To use this method, the system must keep a log of all transactions and of the system’s state at various points in time. The transactions can then be rolled back to an earlier state and executed again (a toy sketch follows). This approach may result in significant delays in the transactions’ execution and in lost work.
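A toy checkpoint-and-rollback sketch (the Checkpointer class and the state being saved are invented for illustration):

import copy

# Snapshot state before each risky step, and restore the last snapshot
# if a deadlock forces the partial work to be undone.
class Checkpointer:
    def __init__(self, state):
        self.state = state
        self._log = []                 # snapshots, oldest first

    def checkpoint(self):
        self._log.append(copy.deepcopy(self.state))

    def rollback(self):
        self.state = self._log.pop()   # restore the most recent snapshot
        return self.state

cp = Checkpointer({"balance": 100})
cp.checkpoint()
cp.state["balance"] -= 40              # partial work, later undone
print(cp.rollback())                   # {'balance': 100}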
The resource allocation graph (RAG) is a popular technique for computer system deadlock detection. The
RAG is a visual representation of the processes holding the resources and their current state of allocation.
The resources and processes are represented by the graph’s nodes, while their allocation relationships are
shown by the graph’s edges. A cycle in the RAG denotes the presence of a deadlock: when every resource has a single instance, a cycle means that each process in the cycle is holding at least one resource needed by another process in the cycle, so all of them are blocked. The RAG method is a crucial tool in contemporary operating systems due to its efficiency and its ability to spot deadlocks quickly; a minimal cycle-detection sketch follows.
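A minimal sketch of cycle detection over such a graph, using depth-first search (the graph encoding is illustrative: request edges point from a process to a resource, assignment edges from a resource to a process):

# Detect a cycle in a resource-allocation graph with DFS.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:          # back edge: cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: a deadlock.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))   # True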
Conclusion
In conclusion, recovering from a deadlock in an operating system involves detecting the deadlock and then
resolving it by either terminating some of the involved processes or preempting resources. These actions
help free up the system and restore normal operations, though they may cause temporary disruptions.