Unit I and II Operating System - Dr. Ashish

Syllabus


Unit I

Introduction: Operating system and function, Evolution of operating system, Batch, Interactive, Time Sharing and Real Time System, System protection. Operating System Structure: System Components, System structure, Operating System Services.

CPU Scheduling: Scheduling Concept, process scheduling strategies- First-Come, First-Served (FCFS) Scheduling, Shortest-Job-Next (SJN) Scheduling, Priority Scheduling, Shortest Remaining Time, Round Robin (RR) Scheduling, Multiple-Level Queues Scheduling, Performance Criteria of Scheduling Algorithm, Evaluation, Multiprocessor Scheduling.

Unit II

Concurrent Processes: Process concept, Principle of Concurrency, Producer Consumer Problem, Critical Section problem, Semaphores, Binary and counting semaphores, P() and V() operations, Classical problems in Concurrency, Inter Process Communication, Process Generation, Process Scheduling.

Deadlocks: examples of deadlock, resource concepts, necessary conditions for deadlock, deadlock solution, deadlock prevention, deadlock avoidance with Banker's algorithm, deadlock detection, deadlock recovery.

Unit III

Memory Organization & Management: Memory Organization, Memory Hierarchy, Memory Management Strategies, Contiguous versus non-Contiguous memory allocation, Partition Management Techniques, Logical versus Physical Address space, swapping, Paging, Segmentation, Segmentation with Paging. Virtual Memory: Demand Paging, Page Replacement, Page-replacement Algorithms, Performance of Demand Paging, Thrashing, Demand Segmentation, and Overlay Concepts.

Unit IV

I/O Device and the organization: I/O Device and the organization of the I/O function, I/O
Buffering, Disk I/O, Disk Scheduling Algorithms, File system: File Concepts, attributes,
operations, File organization and Access mechanism, disk space allocation methods,
Directory structure, free disk space management, File sharing, Implementation issues. Case
studies: Unix system, Windows XP.


1 INTRODUCTION
• An Operating System acts as a communication bridge (interface) between the user
and computer hardware.

• The purpose of an operating system is to provide a platform on which a user can execute
programs conveniently and efficiently.

• An operating system is a piece of software that manages the allocation of computer hardware. The coordination of the hardware must be appropriate to ensure the correct working of the computer system and to prevent user programs from interfering with the proper working of the system.

• The main goal of the Operating System is to make the computer environment more
convenient to use and the Secondary goal is to use the resources most efficiently.

1.1 Why Use an Operating System?


The operating system helps in improving the use of computer software as well as hardware. Without an OS, it would be very difficult for any application to be user-friendly. The operating system provides the user with an interface that makes any application attractive and user-friendly. It comes with a large number of device drivers that make OS services reachable to the hardware environment. Each and every application present in the system requires the operating system. The operating system works as a communication channel between system hardware and system software, and it lets an application interact with the hardware without the application knowing the actual hardware configuration. It is one of the most important parts of the system and hence is present in every device, whether large or small.


1.2 Goals of Operating System

There are mainly 2 goals of the operating system:

• Convenience

• Efficiency

The other 3 goals are:

• Portability and Reliability

• Hardware Abstraction

• Security

1. Convenience

An Operating System's primary and first goal is to provide a friendly and convenient environment to the user. Using an operating system is optional, but without one things become much harder: the user would have to perform all the process scheduling and convert commands to machine language so that the system could perform tasks. So, we use an Operating System to act as a bridge between us and the computer hardware. We only have to give commands to the system, and the OS will take the instructions and do the rest of the work. For this reason, an operating system should be convenient to use and operate.

2. Efficiency

The second and equally important goal of an Operating System is efficiency. An operating system should utilize all the resources efficiently. Resources and programs should be managed so that no resource is kept idle and no memory is wasted.

3. Portability and Reliability

The operating system can run on different machines with different processors and memory configurations, which makes it portable.

Also, the operating system can protect itself and the user from accidental damage from the user
program.

4. Hardware Abstraction


The operating system conceals, and can be said to control, all functions and resources of the computer. The user can give commands and access any function or resource of the computer without facing any difficulty. In this way, the operating system mediates between the user and the computer hardware.

5. Security

An operating system provides safety and security of data between the user and the hardware. The OS enables multiple users to securely share a computer, keeping their files, processes, memory, and devices separate.

2 FUNCTIONS OF AN OPERATING SYSTEM


Following are some of the important functions of an operating system:

1. Memory Management

2. Processor Management

3. Device Management

4. File Management

5. Network Management

Some of the important activities that an Operating System performs are:

• Security

• Control over system performance

• Job accounting

• Error detecting aids

• Coordination between other software and users

2.1 Memory Management


The operating system manages the Primary Memory or Main Memory. Main memory is made up
of a large array of bytes or words where each byte or word is assigned a certain address. Main
memory is fast storage and it can be accessed directly by the CPU. For a program to be executed,
it should be first loaded in the main memory. An operating system manages the allocation and


deallocation of memory to various processes and ensures that the other process does not consume
the memory allocated to one process. An Operating System performs the following activities for
Memory Management:

• It keeps track of primary memory, i.e., which bytes of memory are used by which user program: both the memory addresses that have already been allocated and those that have not yet been used.
• In multiprogramming, the OS decides the order in which processes are granted memory
access, and for how long.
• It Allocates the memory to a process when the process requests it and deallocates the
memory when the process has terminated or is performing an I/O operation.

2.2 Processor Management


• In a multi-programming environment, the OS decides the order in which processes have
access to the processor, and how much processing time each process has. This function of
OS is called Process Scheduling. An Operating System performs the following activities
for Processor Management.
• An operating system manages the processor’s work by allocating various jobs to it and
ensuring that each process receives enough time from the processor to function properly.
• Keeps track of the status of processes. The program which performs this task is known as a traffic controller. It allocates the CPU (i.e., the processor) to a process, and de-allocates the processor when the process no longer requires it.


• When more than one process runs on the system, the OS decides how and when a process will use the CPU; hence this is also called CPU Scheduling. The OS:
• Allocates and deallocates the processor to the processes.
• Keeps a record of CPU status.
• Certain algorithms used for CPU scheduling are as follows:
o First Come First Serve (FCFS)
o Shortest Job First (SJF)
o Round-Robin Scheduling
o Priority-based scheduling etc.

2.3 Device Management


An OS manages device communication via the devices' respective drivers. It performs the following activities for device management: it keeps track of all devices connected to the system; it designates a program responsible for every device, known as the Input/Output controller; it decides which process gets access to a certain device and for how long; and it allocates devices effectively and efficiently, deallocating them when they are no longer required. There are various input and output devices, and the OS controls the working of these input-output devices. It receives the requests from these devices, performs the specific task, and communicates back to the requesting process.

2.4 File Management


A file system is organized into directories for efficient or easy navigation and usage. These
directories may contain other directories and other files. An Operating System carries out the
following file management activities. It keeps track of where information is stored, user access
settings, the status of every file, and more. These facilities are collectively known as the file
system. An OS keeps track of information regarding the creation, deletion, transfer, copy, and
storage of files in an organized way. It also maintains the integrity of the data stored in these files,
including the file directory structure, by protecting against unauthorized access.


2.5 Storage Management


• Storage management is a procedure that allows users to maximize the utilization of storage devices while also protecting data integrity on whatever media the data lives on. Network virtualization, replication, mirroring, security, compression, deduplication, traffic analysis, process automation, storage provisioning, and memory management are some of the features that may be included.

• The operating system is in charge of storing and accessing files. The creation of files, the
creation of directories, the reading and writing of data from files and directories, as well
as the copying of the contents of files and directories from one location to another are all
included in storage management.

• The OS uses storage management for:

• Improving the performance of data storage resources.

• Optimizing the use of various storage devices.

• Assisting businesses in storing more data on existing hardware, speeding up the data retrieval process, preventing data loss, meeting data retention regulations, and lowering IT costs.

2.6 Other important activities that an Operating System performs
• Security – For security, modern operating systems employ a firewall. A firewall is a type
of security system that monitors all computer activity and blocks it if it detects a threat.

• Job Accounting – The operating system keeps track of all the functions of a computer system, and hence makes a record of all the activities taking place on it. It keeps an account of all the information about memory, resources, errors, etc., so this information can be used as and when required.

• Control over system performance – The operating system will collect consumption
statistics for various resources and monitor performance indicators such as reaction time,
which is the time between requesting a service and receiving a response from the system.

• Error detecting aids – While a computer system is running, a variety of errors might occur.
Error detection guarantees that data is delivered reliably across susceptible networks. The
operating system continuously monitors the system to locate or recognize problems and
protects the system from them.


• Coordination between other software and users – The operating system (OS) allows
hardware components to be coordinated and directs and allocates assemblers, interpreters,
compilers, and other software to different users of the computer system.

• Booting process – The process of starting or restarting a computer is referred to as booting. Cold booting occurs when a computer is totally turned off and then turned back on. Warm booting occurs when the computer is restarted. The operating system (OS) is in charge of booting the computer.

3 Evolution of Operating System - OS Generations


Operating systems have not always been as we see them today. Let us discuss the evolution of the operating system in detail. Today, a user interacts with the computer directly, but in past years users were not able to interact directly with the computer system.

From a broader perspective, the evolution of operating system can be divided into four
generations. Let us briefly discuss this generation-based evolution of operating system with
their timeline.

1. No Generation (up to the 1940s): In the earliest days of computing, there were no distinct operating systems as we know them today. Computers were operated manually, often requiring extensive knowledge of the machine's hardware. Programs were directly fed into the computer, typically through the use of punch cards or other rudimentary input methods.

2. First Generation (1945-1955): The first generation of the operating system spans 1945 to 1955, during the development of electronic computing systems. It was the era of mechanical computing systems, where the users or the programmers provided the instructions (through punch cards, paper tape, magnetic tape, etc.) and the computer had to follow them. Due to the human intervention, the process was very slow and there were chances of human mistakes. We can say that there was no operating system at that time and users gave their programs to the computer system itself. So, low speed and more errors were the first-generation drawbacks.

3. Second Generation (1955-1965): The second generation of the operating system spans 1955 to 1965, during the development of the batch operating system. During this phase, the users prepared their instructions (tasks or jobs) on an off-line device like punch cards and submitted them to the computer operator. Out of these punch cards (which were tabulated into instructions for computers), similar jobs were grouped and run as a batch to speed up the entire process. The jobs consisted of the program and input data along with the control instructions. The main task of the programmer or developer was to create jobs or programs and then hand them over to the operator in the form of punch cards. It was then the duty of the operator to sort the programs with similar requirements into batches.

Some major drawbacks of the second-generation operating system were:

o We could not set the priority of jobs, as jobs were scheduled only on the basis of similarities among them.

o The CPU was not utilized to its maximum potential, as it became idle while the operator was loading jobs.

4. Third Generation (1965-1980): The third generation of the operating system spans 1965 to 1980, during the development of the multiprogramming operating system. The third-generation operating system was developed to serve more than one user at a time (multi-user). During this period, users were able to communicate with the operating system with the help of software called a command line interface. So, the computers became multi-user and multiprogramming.

5. Fourth Generation (1980-Now): The fourth generation of the operating system has been in use from 1980 till now. Before this generation, users were able to communicate with the operating system only with the help of command line interfaces, punch cards, magnetic tapes, etc. So, the user had to provide commands (that needed to be remembered), which became hectic.

So, the fourth generation of operating systems came into existence with the development of
GUI (Graphical User Interface). The GUI made the user experience more convenient.

This 4th generation can be subdivided further into three categories:

• Networked Systems (1980s to 1990s): This era witnessed the proliferation of networked computing environments, enabling multiple computers to communicate and share resources over a network.

• Mobile Operating Systems (Late 1990s to Early 2000s): The late 20th century and early 21st century saw the emergence of mobile operating systems, specifically designed for handheld devices. This development fundamentally changed how we interacted with technology.

• AI Integration (2010s to ongoing): In recent years, operating systems have integrated Artificial Intelligence (AI) technologies. This has led to more intelligent and adaptive computing experiences.

4 Types of Operating Systems


An Operating System performs all the basic tasks like managing files, processes, and memory. Thus, the operating system acts as the manager of all the resources, i.e., the resource manager, and becomes an interface between the user and the machine. It is one of the most essential pieces of software present in a device.
An Operating System is a type of software that works as an interface between the system programs and the hardware. There are several types of Operating Systems, many of which are mentioned below. Let's have a look at them.

4.1 Batch Operating System


This type of operating system does not interact with the computer directly. There is an operator who takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System

• The processors of a batch system know how long a job will take while it is in the queue.
• Multiple users can share the batch systems.
• The idle time for the batch system is very low.
• It is easy to manage large work repeatedly in batch systems.

Disadvantages of Batch Operating System

• It is very difficult to guess or know the time required for any job to complete.
• The computer operators should be well versed with batch systems.
• Batch systems are hard to debug.
• They are sometimes costly.
• The other jobs will have to wait for an unknown time if any job fails.

Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

4.2 Multi-Programming Operating System

A Multiprogramming Operating System can be described simply as one in which more than one program is present in the main memory and any one of them can be kept in execution. This is basically used for better utilization of resources.


Advantages of Multi-Programming Operating System

• Multi Programming increases the Throughput of the System.

• It helps in reducing the response time.

Disadvantages of Multi-Programming Operating System

• There is no facility for user interaction with the system while programs are running.


4.3 Real-Time Operating System


These types of OSs serve real-time systems, in which the time interval required to process and respond to inputs is very small. This time interval is called the response time. Real-time systems are used when there are very strict time requirements, as in missile systems, air traffic control systems, robots, etc.

Types of Real-Time Operating Systems

Hard Real-Time Systems: Hard Real-Time OSs are meant for applications where time constraints are very strict and even the shortest possible delay is not acceptable. These systems are built for life-saving uses, like automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these systems.

Soft Real-Time Systems: These OSs are for applications where time-constraint is less strict.

4.4 Distributed OS
This system is based on autonomous but interconnected computers communicating with each
other via communication lines or a shared network. Each autonomous system has its own
processor that may differ in size and function. These operating systems are often used for tasks
such as telecommunication networks, airline reservation controls and peer-to-peer networks.

A distributed operating system serves multiple applications and multiple users in real time. The
data processing function is then distributed across the processors. Potential advantages and
disadvantages of distributed operating systems are:

Advantages:

• They allow remote working.
• They allow a faster exchange of data among users.
• Failure in one site may not cause much disruption to the system.
• They reduce delays in data processing.
• They minimize the load on the host computer.
• They enhance scalability, since more systems can be added to the network.

Disadvantages:

• If the primary network fails, the entire system shuts down.
• They're expensive to install.
• They require a high level of expertise to maintain.

4.5 Network OS
Network operating systems are installed on a server providing users with the capability to manage
data, user groups and applications. This operating system enables users to access and share files
and devices such as printers, security software and other applications, mostly in a local area
network.

Examples of network operating systems include Microsoft Windows, Linux and macOS. Potential advantages and disadvantages of these systems are:

Advantages:

• Centralized servers provide high stability.
• Security issues are easier to handle through the servers.
• It's easy to upgrade and integrate new technologies.
• Remote access to the servers is possible.

Disadvantages:

• They require regular updates and maintenance.
• Servers are expensive to buy and maintain.
• Users' reliance on a central server might be detrimental to workflows.

5 What is an Interactive Operating System?


• An interactive operating system is an operating system that enables the execution of interactive programs. Almost all PC operating systems are interactive operating systems.

• An interactive operating system allows the user to interact directly with the computer. In this type of operating system, the user enters a command into the system, and the system executes it.

• Programs that allow users to enter data or commands are known as interactive
computer systems. The majority of commonly used software, such as word processors
and spreadsheet applications, are interactive.


• A non-interactive program is one that, once started, continues without the need for human
interaction. A compiler, like all batch processing applications, is a non-interactive
program.

• Properties of Interactive Operating System

1) Batch Processing

2) Multitasking

3) Multiprogramming

4) Distributive Environment

5) Interactivity

The ability of a user to interact with a system is referred to as interactivity. The operating system
(OS) provides an interface for interacting with the system, manages I/O devices, and ensures a
quick response time.

6) Real-Time System

Dedicated embedded systems are real-time systems. To ensure good performance, the OS reads
and reacts to sensor data and provides a response in a fixed time period.

7) Spooling

Spooling is the process of pushing data from various I/O jobs into a buffer, disk, or somewhere in memory so that a device can access the data when it is ready.

The OS handles I/O device data spooling in order to maintain the spooling buffer, because the devices have varying data access rates. The buffer acts as a waiting station where data rests while slower devices catch up. Print spooling is one spooling application.

Example of an Interactive Operating System


UNIX operating system

DOS (Disk Operating System)

6 System Protection in Operating System


System protection in an operating system refers to the mechanisms implemented by the
operating system to ensure the security and integrity of the system. System protection involves
various techniques to prevent unauthorized access, misuse, or modification of the operating
system and its resources.

There are several ways in which an operating system can provide system protection:

User authentication: The operating system requires users to authenticate themselves before
accessing the system. Usernames and passwords are commonly used for this purpose.

Access control: The operating system uses access control lists (ACLs) to determine which users
or processes have permission to access specific resources or perform specific actions.

Encryption: The operating system can use encryption to protect sensitive data and prevent
unauthorized access.

Firewall: A firewall is a software program that monitors and controls incoming and outgoing
network traffic based on predefined security rules.

Antivirus software: Antivirus software is used to protect the system from viruses, malware, and
other malicious software.

System updates and patches: The operating system must be kept up-to-date with the latest
security patches and updates to prevent known vulnerabilities from being exploited.

By implementing these protection mechanisms, the operating system can prevent unauthorized
access to the system, protect sensitive data, and ensure the overall security and integrity of the
system.

Advantages of system protection in an operating system:

1. Ensures the security and integrity of the system

2. Prevents unauthorized access, misuse, or modification of the operating system and its
resources

3. Protects sensitive data


4. Provides a secure environment for users and applications

5. Prevents malware and other security threats from infecting the system

6. Allows for safe sharing of resources and data among users and applications

7. Helps maintain compliance with security regulations and standards

Disadvantages of system protection in an operating system:

1. Can be complex and difficult to implement and manage

2. May slow down system performance due to increased security measures

3. Can cause compatibility issues with some applications or hardware

4. Can create a false sense of security if users are not properly educated on safe computing
practices

5. Can create additional costs for implementing and maintaining security measures.

7 Structures of Operating System


7.1 Simple Structure
In a simple structure, the interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system. In MS-DOS, application programs are able to access the basic I/O routines. This type of operating system causes the entire system to crash if one of the user programs fails.

• There are four layers that make up the MS-DOS operating system, and each has its own set of features.
• These layers include ROM BIOS device drivers, MS-DOS device drivers, application programs, and system programs.
• The MS-DOS operating system benefits from layering because each level can be defined independently and, when necessary, can interact with one another.
• If the system is built in layers, it will be simpler to design, manage, and update. Because of this, simple structures can be used to build constrained systems that are less complex.
• When a user program fails, the operating system as a whole crashes.
• Because MS-DOS systems have a low level of abstraction, programs and I/O procedures are visible to end users, giving them the potential for unwanted access.

Advantages of Simple Structure:


• Because there are only a few interfaces and levels, it is simple to develop.
• Because there are fewer layers between the hardware and the applications, it offers superior
performance.
Disadvantages of Simple Structure:
• The entire operating system breaks if just one user program malfunctions.
• Since the layers are interconnected, and in communication with one another, there is no abstraction
or data hiding.
• The operating system's operations are accessible to layers, which can result in data tampering and
system failure.

7.2 Monolithic Structure:


• Monolithic Kernel is another classification of Kernel.

• Like microkernel, this one also manages system resources between application and hardware,
but user services and kernel services are implemented under the same address space.

• It increases the size of the kernel, thus increasing the size of the operating system as well.
• This kernel provides CPU scheduling, memory management, file management, and other
operating system functions through system calls.

• As both services are implemented under the same address space, this makes operating system
execution faster.

• If any service fails the entire system crashes, and it is one of the drawbacks of this kernel. The
entire operating system needs modification if the user adds a new service.

Advantages


• One of the major advantages of having a monolithic kernel is that it provides CPU
scheduling, memory management, file management, and other operating system functions
through system calls.

• The other one is that it is a single large process running entirely in a single address space.

• It is a single static binary file. Examples of some Monolithic Kernel-based OSs are Unix,
Linux, Open VMS, XTS-400, z/TPF.
Disadvantages

• One of the major disadvantages of a monolithic kernel is that if any one service fails, it leads to an entire system failure.

• If the user has to add any new service, the user needs to modify the entire operating system.
7.3 Layered Structure:
• To eliminate the disadvantages of the simple structure of MS-DOS and the monolithic structure of UNIX, the layered structure comes into the picture.

• An OS can be broken into pieces while retaining much more control over the system.
• In this structure, the OS is broken into a number of layers (levels).

• The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the user interface.

• These layers are so designed that each layer uses the functions of the lower-level layers only.

• This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, the error must be on the current layer only, as the lower-level layers have already been verified.
• The main disadvantage of this structure is that at each layer, the data needs to be modified and passed on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.

Advantages:

• Layering makes it easier to enhance the operating system as implementation of a layer


can be changed easily without affecting the other layers.

• It is very easy to perform debugging and system verification.


Disadvantages:

• In this structure the application performance is degraded as compared to simple structure.

• It requires careful planning for designing the layers as higher layers use the functionalities
of only the lower layers.

7.4 Micro Kernel Structure:


• This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system and user programs.

• This results in a smaller kernel called the micro-kernel.

• One advantage of this structure is that all new services are added to user space and do not require the kernel to be modified.

• Thus it is more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.

Advantages:-

The advantages of the microkernel are as follows:

• It is small and isolated, so it functions better.

• It is more secure due to the division of user and kernel space.


• Can add new features without recompiling

• This architecture is more flexible and can coexist in the system

• Fewer system crashes as compared to monolithic system

Disadvantages:

The disadvantages of the microkernel are as follows:

• It is expensive compared to the monolithic system architecture.

• Function calls are needed when drivers are implemented as processes.

• Performance of a microkernel system can be indifferent and may sometimes cause problems.

7.5 Modular Operating systems


A modular OS is based on the idea of dividing the system into smaller and independent units, called modules, that can be loaded and unloaded as needed. Modules can be either user-level or kernel-level, depending on whether they run in the same or a separate address space from the kernel. User-level modules are also called servers, and they communicate with the kernel and other servers through message passing or remote procedure calls. Kernel-level modules are also called extensions, and they communicate with the kernel and other extensions through direct function calls or shared data structures. A modular OS can have different types of modules, such as file systems, device drivers, network protocols, security mechanisms, or graphical user interfaces.


8 CPU Scheduling
• Scheduling is the process of allotting the CPU to the processes present in the ready queue.
We also refer to this procedure as process scheduling.

• The operating system schedules the process so that the CPU always has one process to
execute. This reduces the CPU’s idle time and increases its utilization.

• The part of OS that allots the computer resources to the processes is termed as
a scheduler. It uses scheduling algorithms to decide which process it must allot to the
CPU.

8.1 What is a process?


In computing, a process is the instance of a computer program that is being executed by
one or many threads. It contains the program code and its activity. Depending on the operating
system (OS), a process may be made up of multiple threads of execution that execute
instructions concurrently. A process is more than just a set of instructions. It contains information
such as the process stack, the program counter, and the contents of the process register, among
other things. When a process runs, it modifies the state of the system. The current activity of a
given process determines the state of the process in general.

8.2 What are the Process States in Operating System?


From start to finish, the process goes through a number of stages. A minimum of five states is
required. Even though the process could be in one of these states during execution, the names of
the states are not standardised. Throughout its life cycle, each process goes through various
stages. They are:


8.2.1 New State


When a program in secondary memory is started for execution, the process is said to be in a new
state.

8.2.2 Ready State


After being loaded into the main memory and ready for execution, a process transitions from a
new to a ready state. The process will now be in the ready state, waiting for the processor to
execute it. Many processes may be in the ready stage in a multiprogramming environment.

8.2.3 Run State


After being allotted the CPU for execution, a process passes from the ready state to the run state.

8.2.4 Terminate State


When a process’s execution is finished, it goes from the run state to the terminate state. The operating system deletes the process control block (PCB) after the process enters the terminate state.

8.2.5 Block or Wait State


• If a process requires an Input/Output operation or a blocked resource during execution, it
changes from run to block or the wait state.

• The process advances to the ready state after the I/O operation is completed or the resource
becomes available.

8.2.6 Suspend Ready State


If a process with a higher priority needs to be executed while the main memory is full, the process
goes from ready to suspend ready state. Moving a lower-priority process from the ready state to
the suspend ready state frees up space in the ready state for a higher-priority process.

Until the main memory becomes available, the process stays in the suspend-ready state. The
process is brought to its ready state when the main memory becomes accessible.

8.2.7 Suspend Wait State


• If a process with a higher priority needs to be executed while the main memory is full, the
process goes from the wait state to the suspend wait state. Moving a lower-priority process
from the wait state to the suspend wait state frees up space in the ready state for a higher-
priority process.

• The process gets moved to the suspend-ready state once the resource becomes accessible.
The process is shifted to the ready state once the main memory is available.
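
These transitions form a small state machine, which can be made concrete in code. Below is a minimal illustrative sketch in C; the enum values and transition functions are hypothetical names chosen to mirror the states described above, not part of any real OS API.

```c
#include <stdio.h>

/* Hypothetical labels mirroring the seven states described above. */
typedef enum {
    NEW, READY, RUN, BLOCK_WAIT,
    SUSPEND_READY, SUSPEND_WAIT, TERMINATE
} ProcState;

/* A few representative transitions from the text. */
ProcState on_dispatch(ProcState s)       { return (s == READY)      ? RUN           : s; }
ProcState on_io_request(ProcState s)     { return (s == RUN)        ? BLOCK_WAIT    : s; }
ProcState on_io_complete(ProcState s)    { return (s == BLOCK_WAIT) ? READY         : s; }
ProcState on_swap_out_ready(ProcState s) { return (s == READY)      ? SUSPEND_READY : s; }

int main(void) {
    ProcState s = NEW;
    s = READY;               /* loaded into main memory            */
    s = on_dispatch(s);      /* CPU allocated: READY -> RUN        */
    s = on_io_request(s);    /* I/O needed:    RUN -> BLOCK_WAIT   */
    s = on_io_complete(s);   /* I/O done:      BLOCK_WAIT -> READY */
    printf("final state = %d (READY = %d)\n", s, READY);
    return 0;
}
```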


8.2.8 Important Notes


Note – 01:
A process must pass through at least four states.

• A process must go through a minimum of four states to be considered complete.


• The new state, run state, ready state, and terminate state are the four states.
• However, in case a process also requires I/O, the minimum number of states required is 5

Note – 02:
Only one process can run at a time on a single CPU.

• Any processor can only handle one process at a time.


• When there are n processors in a system, only n processes can run at the same time.
Note – 03:
State | Present in

New state | Secondary memory
Ready state | Main memory
Run state | Main memory
Wait state | Main memory
Suspend wait state | Secondary memory
Suspend ready state | Secondary memory

Note – 04:
It is much more preferable to move a given process from its wait state to its suspend wait
state.

• Consider the situation where a high-priority process comes, and the main memory is full.
• Then there are two options for making space for it. They are:

1. Suspending a lower-priority process from the ready state.
2. Transferring a lower-priority process from the wait state to the suspend wait state.
Out of these, moving a process from the wait state to the suspend wait state is the superior option,

• because that process is already waiting for a resource that is currently unavailable anyway.


8.3 Type of Process Schedulers


A scheduler is a type of system software that allows you to handle process scheduling.

There are mainly three types of Process Schedulers:

1. Long Term Scheduler


2. Short Term Scheduler
3. Medium Term Scheduler

8.3.1 Long Term Scheduler


The long-term scheduler is also known as a job scheduler. This scheduler selects processes from the job queue and loads them into memory for execution. It also regulates the degree of multiprogramming. The main goal of this type of scheduler is to offer a balanced mix of jobs, such as processor-bound and I/O-bound jobs, which helps manage multiprogramming.

8.3.2 Medium Term Scheduler


Medium-term scheduling is an important part of swapping. It handles the swapped-out processes. A running process can become suspended if it makes an I/O request; a suspended process cannot make any progress towards completion. In order to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.

8.3.3 Short Term Scheduler


Short-term scheduling is also known as CPU scheduling. The main goal of this scheduler is to boost system performance according to set criteria. It selects from the group of processes that are ready to execute and allocates the CPU to one of them. The dispatcher then gives control of the CPU to the process selected by the short-term scheduler.

9 Scheduling Objectives

• Be fair while allocating resources to the processes
• Maximize throughput of the system
• Maximize the number of users receiving acceptable response times
• Be predictable
• Balance resource use
• Avoid indefinite postponement
• Enforce priorities
• Give preference to processes holding key resources
• Give better service to processes that have desirable behaviour patterns

CPU and I/O Burst Cycle:

• Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states.
• Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on.
• The last CPU burst ends with a system request to terminate execution rather than with another I/O burst.
• The durations of these CPU bursts have been measured: an I/O-bound program typically has many short CPU bursts, while a CPU-bound program might have a few very long CPU bursts.
• This can help in selecting an appropriate CPU-scheduling algorithm.

Preemptive Scheduling:
• Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
• The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining.
• The process stays in the ready queue till it gets its next chance to execute.

Non-Preemptive Scheduling:
• Non-preemptive scheduling is used when a process terminates, or when a process switches from the running to the waiting state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.
• Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it waits till the process completes its CPU burst time, and then the CPU can be allocated to another process.


Basis for Comparison | Preemptive Scheduling | Non-Preemptive Scheduling
Basic | The resources are allocated to a process for a limited time. | Once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in between. | A process cannot be interrupted till it terminates or switches to the waiting state.
Starvation | If a high-priority process frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, another process with a smaller CPU burst time may starve.
Overhead | Preemptive scheduling has the overhead of scheduling the processes. | Non-preemptive scheduling does not have this overhead.
Flexibility | Preemptive scheduling is flexible. | Non-preemptive scheduling is rigid.
Cost | Preemptive scheduling has a cost associated with it. | Non-preemptive scheduling has no such cost.
9.1 Scheduling Criteria
There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
o Throughput - The number of processes completed per unit time. May range from 10/second to 1/hour depending on the specific processes.

o Turnaround time - Time required for a particular process to complete, from submission
time to completion.
o Waiting time - How much time processes spend in the ready queue waiting their turn to get
on the CPU.


o Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.

In brief:
1. Arrival Time: Time at which the process arrives in the ready queue.
2. Completion Time: Time at which process completes its execution.
3. Burst Time: Time required by a process for CPU execution.
4. Turn Around Time: Time Difference between completion time and arrival time. Turn
Around Time = Completion Time – Arrival Time
5. Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
6. Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus another criterion is the time taken from submission of the request until the first response is produced. This measure is called response time.
Response Time = Time at which the CPU was first allocated – Arrival Time
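
To make the formulas concrete, here is a tiny C sketch (the values are illustrative, not taken from any particular example) that computes turnaround, waiting, and response time for one process:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative values for a single process. */
    int arrival = 1, burst = 4, first_cpu = 3, completion = 12;

    int turnaround = completion - arrival;   /* TAT = Completion - Arrival       */
    int waiting    = turnaround - burst;     /* WT  = TAT - Burst                */
    int response   = first_cpu - arrival;    /* RT  = first allocation - Arrival */

    printf("TAT=%d WT=%d RT=%d\n", turnaround, waiting, response);
    return 0;
}
```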

10 Types of Scheduling Algorithm


10.1 First Come First Serve (FCFS)
In FCFS Scheduling:
• The process which arrives first in the ready queue is assigned the CPU first.
• In case of a tie, the process with the smaller process id is executed first.
• It is always non-preemptive in nature.
• Jobs are executed on a first come, first served basis.
• It is easy to understand and implement.
• Its implementation is based on a FIFO queue.
• It is poor in performance, as the average wait time is high.

Advantages-
It is simple and easy to understand.


It can be easily implemented using queue data structure.


It does not lead to starvation.
Disadvantages-
It does not consider the priority or burst time of the processes.
It suffers from the convoy effect, i.e., if processes with higher burst times arrive before processes with smaller burst times, the shorter processes must wait behind the long ones.

Example 2:
Consider the processes P1, P2, P3 given in the below table, which arrive for execution in the same order, with Arrival Time 0 and the given Burst Time.


PROCESS ARRIVAL TIME BURST TIME
P1 0 24
P2 0 3
P3 0 3
Gantt chart

P1 P2 P3
0 24 27 30

PROCESS | WAIT TIME | TURN AROUND TIME
P1 | 0 | 24
P2 | 24 | 27
P3 | 27 | 30

Total Wait Time = 0 + 24 + 27 = 51 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms
Total Turn Around Time: 24 + 27 + 30 = 81 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= 81 / 3 = 27 ms
Throughput = 3 jobs/30 sec = 0.1 jobs/sec
Example 3:
Consider the processes P1, P2, P3, P4 given in the below table, which arrive for execution in the same order, with the given Arrival Time and Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Gantt chart


P1 P2 P3 P4
0 8 12 21 26

PROCESS WAIT TIME TURN AROUND TIME


P1 0 8–0=8
P2 8–1=7 12 – 1 = 11
P3 12 – 2 = 10 21 – 2 = 19
P4 21 – 3 = 18 26 – 3 = 23

Total Wait Time:= 0 + 7 + 10 + 18 = 35 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes)= 35/4 = 8.75 ms
Total Turn Around Time: 8 + 11 + 19 + 23 = 61 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
61/4 = 15.25 ms

Throughput: 4 jobs/26 sec = 0.15385 jobs/sec
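
The FCFS computation above is mechanical enough to automate. Below is a minimal C sketch, assuming the processes are already sorted by arrival time (Example 3's data is used for the arrays):

```c
#include <stdio.h>

#define N 4

int main(void) {
    /* Example 3's data: processes sorted by arrival time. */
    int at[N] = {0, 1, 2, 3};          /* arrival times */
    int bt[N] = {8, 4, 9, 5};          /* burst times   */
    int time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < N; i++) {
        if (time < at[i]) time = at[i]; /* CPU idles until the job arrives */
        time += bt[i];                  /* run the job to completion       */
        int ct  = time;
        int tat = ct - at[i];           /* turnaround = completion - arrival */
        int wt  = tat - bt[i];          /* waiting = turnaround - burst      */
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", i + 1, ct, tat, wt);
    }
    printf("Avg TAT = %.2f, Avg WT = %.2f\n", total_tat / N, total_wt / N);
    return 0;
}
```

Running it reproduces the averages above: Avg TAT = 15.25 ms and Avg WT = 8.75 ms.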


10.2 Shortest Job First (SJF) or (Shortest Job Next)


The process which has the shortest burst time is scheduled first.
If two processes have the same burst time, then FCFS is used to break the tie.
It can be used in both non-preemptive and preemptive mode.
It is the best approach to minimize waiting time.
It is easy to implement in batch systems where the required CPU time is known in advance.
It is impossible to implement in interactive systems where the required CPU time is not known.
The processor should know in advance how much time the process will take.
The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).

Advantages-
SJF is optimal and guarantees the minimum average waiting time.
It provides a standard for other algorithms, since no other algorithm performs better than it.
Disadvantages-
It cannot be implemented practically, since the burst time of the processes cannot be known in advance.
It leads to starvation for processes with larger burst times.
Priorities cannot be set for the processes.
Processes with larger burst times have poor response times.

Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turnaround time.

Solution-


Gantt Chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 7 7–3=4 4–1=3
P2 16 16 – 1 = 15 15 – 4 = 11
P3 9 9–4=5 5–2=3
P4 6 6–0=6 6–6=0
P5 12 12 – 2 = 10 10 – 3 = 7
Now,
Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
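
Here is a small non-preemptive SJF sketch in C using the same five processes as Example-01: at every scheduling decision it picks the arrived, unfinished process with the smallest burst time (no burst-time ties occur in this data set, so the scan order never matters):

```c
#include <stdio.h>

#define N 5

int main(void) {
    /* Example-01's data. */
    int at[N] = {3, 1, 4, 0, 2};
    int bt[N] = {1, 4, 2, 6, 3};
    int done[N] = {0};
    int time = 0, finished = 0;
    double total_tat = 0, total_wt = 0;

    while (finished < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)           /* smallest burst among arrived jobs */
            if (!done[i] && at[i] <= time &&
                (pick == -1 || bt[i] < bt[pick]))
                pick = i;
        if (pick == -1) { time++; continue; } /* nothing has arrived: CPU idles */
        time += bt[pick];                     /* run to completion (non-preemptive) */
        done[pick] = 1;
        finished++;
        int tat = time - at[pick];
        int wt  = tat - bt[pick];
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: exit=%2d TAT=%2d WT=%2d\n", pick + 1, time, tat, wt);
    }
    printf("Avg TAT = %.1f, Avg WT = %.1f\n", total_tat / N, total_wt / N);
    return 0;
}
```

It prints the same exit times as the Gantt chart above and the same averages (8 and 4.8 units).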

Example-02:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and average
turnaround time.
Solution-


Gantt Chart-

Process Id Exit time Turn Around time Waiting time


P1 4 4–3=1 1–1=0
P2 6 6–1=5 5–4=1
P3 8 8–4=4 4–2=2
P4 16 16 – 0 = 16 16 – 6 = 10
P5 11 11 – 2 = 9 9–3=6

Now,

Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit


Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit


10.3 SRTF


Example-03:

Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 7
P2 1 5
P3 2 3
P4 3 1
P5 4 2
P6 5 1

If the CPU scheduling policy is shortest remaining time first, calculate the average
waiting time and average turnaround time.
Solution-
Gantt Chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 19 19 – 0 = 19 19 – 7 = 12
P2 13 13 – 1 = 12 12 – 5 = 7
P3 6 6–2=4 4–3=1
P4 4 4–3=1 1–1=0
P5 9 9–4=5 5–2=3
P6 7 7–5=2 2–1=1
Now,
Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit
Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit


Example-04:

Consider the set of 3 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 9
P2 1 4
P3 2 9

If the CPU scheduling policy is SRTF, calculate the average waiting time and average turn
around time.

Solution- Gantt
Chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 13 13 – 0 = 13 13 – 9 = 4
P2 5 5–1=4 4–4=0
P3 22 22 – 2 = 20 20 – 9 = 11

Now,
Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit


Example-05:

Consider the set of 4 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 20
P2 15 25
P3 30 10
P4 45 15

If the CPU scheduling policy is SRTF, calculate the waiting time of process P2.

Solution-

Gantt Chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Thus,
Turn Around Time of process P2 = 55 – 15 = 40 unit
Waiting time of process P2 = 40 – 25 = 15 unit
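
SRTF can be simulated one time unit at a time: at every tick, run the arrived process with the least remaining time. Below is a minimal C sketch using Example-04's data:

```c
#include <stdio.h>

#define N 3

int main(void) {
    /* Example-04's data. */
    int at[N] = {0, 1, 2};
    int bt[N] = {9, 4, 9};
    int rem[N], finished = 0, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (finished < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)           /* least remaining time among arrived */
            if (rem[i] > 0 && at[i] <= time &&
                (pick == -1 || rem[i] < rem[pick]))
                pick = i;
        if (pick == -1) { time++; continue; } /* idle tick: nothing has arrived */
        rem[pick]--;                          /* run for exactly one time unit  */
        time++;
        if (rem[pick] == 0) {                 /* the process just completed */
            finished++;
            int tat = time - at[pick];        /* turnaround = exit - arrival */
            int wt  = tat - bt[pick];         /* waiting = turnaround - burst */
            total_tat += tat;
            total_wt  += wt;
            printf("P%d: exit=%2d TAT=%2d WT=%2d\n", pick + 1, time, tat, wt);
        }
    }
    printf("Avg TAT = %.2f, Avg WT = %.2f\n", total_tat / N, total_wt / N);
    return 0;
}
```

It reports P2 exiting at 5, P1 at 13, and P3 at 22, matching the averages above (12.33 and 5 units).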


10.4 Round Robin Scheduling


• The CPU is assigned to processes on the basis of FCFS for a fixed amount of time.
• This fixed amount of time is called the time quantum or time slice.
• After the time quantum expires, the running process is preempted and sent to the ready queue.
• Then, the processor is assigned to the next arrived process.
• It is always preemptive in nature.

Advantages-

• It gives the best performance in terms of average response time.

• It is best suited for time-sharing systems, client-server architectures, and interactive systems.

Disadvantages-

• It leads to starvation for processes with larger burst times, as they have to repeat the cycle many times.
• Its performance heavily depends on the time quantum.
• Priorities cannot be set for the processes.

With a decreasing value of time quantum:
• The number of context switches increases
• Response time decreases
• Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.

With an increasing value of time quantum:
• The number of context switches decreases
• Response time increases
• Chances of starvation increase
Thus, a higher value of time quantum is better in terms of the number of context switches.

• With an increasing value of time quantum, Round Robin Scheduling tends to become FCFS Scheduling.
• When the time quantum tends to infinity, Round Robin Scheduling becomes FCFS Scheduling.
• The performance of Round Robin scheduling heavily depends on the value of the time quantum.
• The value of the time quantum should be such that it is neither too big nor too small.

Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate the
average waiting time and average turnaround time.
Solution-
Ready Queue- P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time


Process Id Exit time Turn Around time Waiting time


P1 13 13 – 0 = 13 13 – 5 = 8
P2 12 12 – 1 = 11 11 – 3 = 8
P3 5 5–2=3 3–1=2
P4 9 9–3=6 6–2=4
P5 14 14 – 4 = 10 10 – 3 = 7
Now,
Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
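
Round Robin is naturally expressed with a FIFO ready queue. The compact C sketch below reproduces Example-01 (time quantum = 2); note that a newly arriving process is enqueued before a preempted process re-joins the queue, matching the ready-queue trace above:

```c
#include <stdio.h>

#define N  5
#define TQ 2

int main(void) {
    /* Example-01's data, sorted by arrival time. */
    int at[N] = {0, 1, 2, 3, 4};
    int bt[N] = {5, 3, 1, 2, 3};
    int rem[N];
    int queue[64], head = 0, tail = 0;    /* simple array-backed FIFO */
    int time = 0, next = 0, finished = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (finished < N) {
        if (head == tail) {                /* ready queue empty: jump to next arrival */
            time = at[next];
            queue[tail++] = next++;
        }
        int p = queue[head++];             /* dequeue the front process   */
        int run = rem[p] < TQ ? rem[p] : TQ;
        time += run;                       /* run one quantum (or less)   */
        rem[p] -= run;
        while (next < N && at[next] <= time)
            queue[tail++] = next++;        /* arrivals join before the preempted one */
        if (rem[p] > 0) {
            queue[tail++] = p;             /* unfinished: back to the tail */
        } else {
            finished++;
            int tat = time - at[p];        /* turnaround = exit - arrival  */
            int wt  = tat - bt[p];         /* waiting = turnaround - burst */
            total_tat += tat;
            total_wt  += wt;
            printf("P%d: exit=%2d TAT=%2d WT=%2d\n", p + 1, time, tat, wt);
        }
    }
    printf("Avg TAT = %.1f, Avg WT = %.1f\n", total_tat / N, total_wt / N);
    return 0;
}
```

It produces the same exit times (P3 at 5, P4 at 9, P2 at 12, P1 at 13, P5 at 14) and the averages 8.6 and 5.8 units.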

Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time

P1 0 4

P2 1 5

P3 2 2

P4 3 1

P5 4 6

P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average
waiting time and average turnaround time.
Solution-
Ready Queue- P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time


Process Id Exit time Turn Around time Waiting time


P1 8 8–0=8 8–4=4
P2 18 18 – 1 = 17 17 – 5 = 12
P3 6 6–2=4 4–2=2
P4 9 9–3=6 6–1=5
P5 21 21 – 4 = 17 17 – 6 = 11
P6 19 19 – 6 = 13 13 – 3 = 10
Now,
Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.84 unit
Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit


Problem-03: Consider the set of 6 processes whose arrival time and burst time are given
below-

Process Id Arrival time Burst time


P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the average waiting time and average turnaround time.


Solution-
Ready Queue- P3, P1, P4, P2, P3, P6, P1, P4, P2, P3, P5, P4
Gantt chart-

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 32 32 – 5 = 27 27 – 5 = 22
P2 27 27 – 4 = 23 23 – 6 = 17
P3 33 33 – 3 = 30 30 – 7 = 23
P4 30 30 – 1 = 29 29 – 9 = 20
P5 6 6–2=4 4–2=2
P6 21 21 – 6 = 15 15 – 3 = 12

Now,

Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit


Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit


10.5 Priority Scheduling


• Out of all the available processes, CPU is assigned to the process having the

highest priority.
• In case of a tie, it is broken by FCFS Scheduling.
• Priority Scheduling can be used in both preemptive and non-preemptive mode.

• The waiting time for the process having the highest priority will always be zero
in preemptive mode.
• The waiting time for the process having the highest priority may not be zero in
non- preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions:
• The arrival time of all the processes is the same
• All the processes become available at the same time
Advantages-
• It considers the priority of the processes and allows the important processes to run
first.
• Priority scheduling in pre-emptive mode is best suited for real time operating
system.
Disadvantages-
• Processes with lower priority may starve for the CPU.
• There is no guarantee on response time and waiting time.
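A minimal non-preemptive priority scheduler sketch in C, with the data of Problem-01 below hard-coded (higher number = higher priority; ties broken by earlier arrival, i.e. FCFS):

#include <stdio.h>

#define N 5

/* Pick the highest-priority arrived process; ties broken by FCFS
   (earlier arrival wins). Returns -1 if nothing has arrived yet. */
int pick(int time, const int arrival[], const int priority[], const int done[]) {
    int best = -1;
    for (int i = 0; i < N; i++) {
        if (done[i] || arrival[i] > time) continue;
        if (best == -1 ||
            priority[i] > priority[best] ||
            (priority[i] == priority[best] && arrival[i] < arrival[best]))
            best = i;
    }
    return best;
}

int main(void) {
    /* Data from Problem-01 below */
    int arrival[N]  = {0, 1, 2, 3, 4};
    int burst[N]    = {4, 3, 1, 5, 2};
    int priority[N] = {2, 3, 4, 5, 5};
    int done[N] = {0}, time = 0;
    double tat = 0, wt = 0;

    for (int finished = 0; finished < N; ) {
        int p = pick(time, arrival, priority, done);
        if (p == -1) { time++; continue; }   /* CPU idle */
        time += burst[p];                    /* run to completion */
        done[p] = 1; finished++;
        tat += time - arrival[p];
        wt  += time - arrival[p] - burst[p];
    }
    printf("Average TAT = %.1f unit, Average WT = %.1f unit\n", tat / N, wt / N);
    return 0;
}

Running it prints Average TAT = 8.2 unit and Average WT = 5.2 unit, matching the worked solution below.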


Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority

P1 0 4 2

P2 1 3 3

P3 2 1 4

P4 3 5 5

P5 4 2 5

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)

Solution-

Gantt Chart-

| P1 | P4 | P5 | P3 | P2 |
0    4    9    11   12   15
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 4 4–0=4 4–4=0
P2 15 15 – 1 = 14 14 – 3 = 11
P3 12 12 – 2 = 10 10 – 1 = 9
P4 9 9–3=6 6–5=1
P5 11 11 – 4 = 7 7–2=5
Now,
• Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
• Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit

Problem-02: Consider the set of 5 processes whose arrival time and burst time are given
below-


Process Id Arrival time Burst time Priority


P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)
Solution-
Gantt Chart-

| P1 | P2 | P3 | P4 | P5 | P2 | P1 |
0    1    2    3    8    10   12   15

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3–2=1 1–1=0
P4 8 8–3=5 5–5=0
P5 10 10 – 4 = 6 6–2=4

Now,
• Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
• Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit

10.6 Multilevel Queue Scheduling


A multi-level queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type. Each queue has
its own scheduling algorithm.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes

54
Dr. Ashish
Unit-1
UNi

3. Interactive Editing Processes


4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue,
for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the ready
queue while a batch process was running, the batch process would be preempted. A minimal
sketch of this dispatch rule follows.
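As a toy illustration of the dispatch rule (always serve the highest-priority non-empty queue), here is a minimal C sketch; the Queue type, PIDs, and sizes are illustrative assumptions, not any real scheduler's API:

#include <stdio.h>

#define NUM_QUEUES 5
#define MAX 16

/* One FIFO per level: 0 = system, 1 = interactive, 2 = interactive
   editing, 3 = batch, 4 = student. Lower index = higher priority. */
typedef struct { int pid[MAX]; int head, tail; } Queue;

static Queue queues[NUM_QUEUES];          /* zero-initialised */

static int  is_empty(const Queue *q)    { return q->head == q->tail; }
static void enqueue(Queue *q, int pid)  { q->pid[q->tail++] = pid; }
static int  dequeue(Queue *q)           { return q->pid[q->head++]; }

int main(void) {
    enqueue(&queues[3], 301);   /* a batch process        */
    enqueue(&queues[1], 101);   /* an interactive process */

    /* Dispatch rule: always serve the highest non-empty queue. */
    for (int q = 0; q < NUM_QUEUES; q++) {
        if (!is_empty(&queues[q])) {
            printf("Dispatching PID %d from queue %d\n",
                   dequeue(&queues[q]), q);
            break;  /* interactive PID 101 runs before batch PID 301 */
        }
    }
    return 0;
}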


Unit- II

Concurrent Processes: Process concept, Principle of Concurrency, Producer Consumer


Problem, Critical Section problem, Semaphores, Binary and counting semaphores, P() and V()
operations, Classical problems in Concurrency, Inter Process Communication, Process
Generation, Process Scheduling.
Deadlocks: examples of deadlock, resource concepts, necessary conditions for deadlock,
deadlock solution, deadlock prevention, deadlock avoidance with Banker’s algorithms,
deadlock detection, deadlock recovery.

1 Process concept
1.1 Process

A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented
in the system.

To put it in simple terms, we write our computer programs in a text file, and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into
four sections ─ stack, heap, text and data, as described below.

S.N. Component & Description


1 Stack
The process Stack contains the temporary data such as method/function parameters,
return address and local variables.
2 Heap
This is dynamically allocated memory to a process during its run time.
3 Text
This contains the compiled program code; the current activity is represented by the
value of the Program Counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.


1.2 Program

A program is a piece of code which may be a single line or millions of lines. A computer
program is usually written by a computer programmer in a programming language. For
example, here is a simple program written in C programming language −

#include <stdio.h>

int main() {
printf("Hello, World! \n");
return 0;
}

A computer program is a collection of instructions that performs a specific task when
executed by a computer. When we compare a program with a process, we can conclude that a
process is a dynamic instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries and related data is referred to as software.

1.3 Process Life Cycle (Process State Diagram) Refer chapter-1

When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.

In general, a process can be in one of the following five states at a time: New, Ready,
Running, Waiting (blocked), and Terminated.

1.4 Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the
information needed to keep track of a process as listed below in the table −

S.N. Information & Description


1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or
whatever.
2 Process privileges
This is required to allow/disallow access to system resources.
3 Process ID
Unique identification for each of the process in the operating system.
4 Pointer
A pointer to parent process.
5 Program Counter


Program Counter is a pointer to the address of the next instruction to be executed for
this process.
6 CPU registers
Various CPU registers where process need to be stored for execution for running
state.
7 CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the
process.
8 Memory management information
This includes the information of page table, memory limits, Segment table
depending on memory used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution
ID etc.
10 IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
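Concretely, a PCB can be pictured as a C struct mirroring the table above. This is an illustrative sketch only; the field names and layout are assumptions, since every OS defines its own PCB (Linux, for example, uses task_struct):

#include <sys/types.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    pid_t         pid;              /* 3: unique process ID           */
    proc_state_t  state;            /* 1: current process state       */
    int           privileges;      /* 2: process privileges          */
    struct pcb   *parent;           /* 4: pointer to parent process   */
    unsigned long program_counter;  /* 5: next instruction address    */
    unsigned long registers[16];    /* 6: saved CPU registers         */
    int           priority;         /* 7: CPU scheduling information  */
    void         *page_table;       /* 8: memory management info      */
    unsigned long cpu_time_used;    /* 9: accounting information      */
    int           open_files[16];   /* 10: I/O status information     */
} pcb_t;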


1.5 Context switching


An operating system uses this technique to switch a process between states to execute its
functions through CPUs. It is the act of saving the context (state) of the old process
(suspend) and loading the context of the new process (resume). It occurs whenever the CPU
switches between one process and another. The state of the CPU's registers and program
counter at any time represents a context. Saving the state of the currently executing
process means copying all live registers to its PCB (Process Control Block); restoring the
state of the process to run next means copying the saved register values from its PCB back
into the CPU registers.
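Conceptually, a context switch is just two copies of CPU state, as in the sketch below. This is illustrative only: real kernels do this in assembly, and cpu_context_t, save_cpu_state, and restore_cpu_state are made-up stand-ins, not a real kernel API.

#include <string.h>

typedef struct {
    unsigned long registers[16];
    unsigned long program_counter;
    unsigned long stack_pointer;
} cpu_context_t;

typedef struct { cpu_context_t context; /* ...other PCB fields... */ } pcb_t;

static void save_cpu_state(cpu_context_t *dst, const cpu_context_t *live) {
    memcpy(dst, live, sizeof *dst);       /* live registers -> PCB */
}
static void restore_cpu_state(cpu_context_t *live, const cpu_context_t *src) {
    memcpy(live, src, sizeof *live);      /* PCB -> live registers */
}

void context_switch(cpu_context_t *live, pcb_t *old_proc, pcb_t *new_proc) {
    save_cpu_state(&old_proc->context, live);     /* suspend old_proc */
    restore_cpu_state(live, &new_proc->context);  /* resume new_proc  */
}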


2 Producer-Consumer Problem
• The producer-consumer problem arises when multiple
threads or processes attempt to share a common buffer
or data structure. Producers produce items and place
them in the buffer, while consumers retrieve items from
the buffer and process them.
• The challenge lies in coordinating the producers and
consumers efficiently to avoid problems like data
inconsistency. This buffer is known as the critical
section. So, we need a synchronization method so that when the producer is producing
the data, the consumer shouldn't interfere in between or vice versa.
• The other challenge that can arise is the producer should
not insert data when the buffer is full i.e., buffer overflow
condition. Similarly, the consumer must not remove data
when the buffer is empty i.e., buffer underflow. As we have
been given limited slots buffer where we need to
synchronize, it is also known as the bounded buffer problem.
• Paradigm for cooperating processes;
▪ producer process produces information that is
consumed by a consumer process.
• We need a buffer of items that can be filled by producer
and emptied by consumer.
• Shared memory solution to the bounded buffer:


Solution-1
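A minimal C sketch of this solution (a 10-slot integer buffer is assumed; produce_item() and consume_item() are placeholder helpers, not any real API):

#include <stdio.h>

#define BUF_SIZE 10

int buffer[BUF_SIZE];
int in = 0, out = 0;            /* shared indices */

/* Placeholder helpers standing in for application work. */
int  produce_item(void)     { static int next = 0; return next++; }
void consume_item(int item) { printf("consumed %d\n", item); }

void producer(void) {
    while (1) {
        int item = produce_item();
        while ((in + 1) % BUF_SIZE == out)
            ;                   /* buffer full: busy-wait (one cell stays unused) */
        buffer[in] = item;
        in = (in + 1) % BUF_SIZE;
    }
}

void consumer(void) {
    while (1) {
        while (in == out)
            ;                   /* buffer empty: busy-wait */
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        consume_item(item);
    }
}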

Only 9 values can be stored at a time: the producer always leaves one cell of the buffer
unused, so that in == out unambiguously means the buffer is empty.

Solution-2:
Producer Consumer Problem (for 10 Values)
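A sketch of the 10-value variant, which adds a shared count of filled slots (same placeholder helpers as in Solution-1). Note that count++ and count-- are not atomic; this is exactly the race that the synchronization tools introduced below are meant to eliminate:

#define BUF_SIZE 10

int buffer[BUF_SIZE];
int in = 0, out = 0;
int count = 0;                  /* number of filled slots */

void producer(void) {
    while (1) {
        int item = produce_item();
        while (count == BUF_SIZE)
            ;                   /* full: wait until a slot frees up */
        buffer[in] = item;
        in = (in + 1) % BUF_SIZE;
        count++;                /* NOT atomic: races with count-- below */
    }
}

void consumer(void) {
    while (1) {
        while (count == 0)
            ;                   /* empty: wait for an item */
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        count--;                /* NOT atomic: races with count++ above */
        consume_item(item);
    }
}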


3 Critical Section Problem in OS (Operating System)

• Critical Section is the part of a program which tries to


access shared resources. That resource may be any
resource in a computer like a memory location, Data
structure, CPU or any IO device.
• The critical section cannot be executed by more than one
process at the same time; the operating system faces
difficulty in deciding which processes to allow into the
critical section and when.
• The critical section problem is used to design a set of
protocols which can ensure that a race condition among
the processes will never arise.
• In order to synchronize the cooperative processes, our
main task is to solve the critical section problem. We need
to provide a solution in such a way that the following
conditions are satisfied by any code in the critical section.

3.1 Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one
process is executing inside the critical section, then no other process may enter the
critical section.

3.2 Progress

Progress means that if one process doesn't need to execute into critical section then it should
not stop other processes to get into the critical section. If no process is executing in its critical
section and some processes wish to enter their critical sections, then only those processes that
are not executing in their remainder sections can participate in deciding which will enter its
critical section next, and this selection cannot be postponed indefinitely.

3.3 Bounded Waiting


There exists a bound, or limit, on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
We should be able to predict the waiting time for every process to get into the critical section.
The process must not be endlessly waiting for getting into the critical section.


Software based solution


Critical Section Problem: Algorithm 1 (Strict Alternation)
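A sketch of the algorithm in the pseudocode style used below, where i is this process's index (0 or 1) and j is the other's:

/* Shared variable, initially turn = 0 (it is P0's turn). */
int turn = 0;

/* Process Pi (the other process is Pj): */
while (1)
{
    while (turn != i);   /* busy-wait until it is Pi's turn */

    /* critical section */

    turn = j;            /* strictly alternate: hand the turn to Pj */

    /* remainder section */
}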

Algorithm 1
• Satisfies mutual exclusion
The turn is equal to either i or j and hence one of Pi
and Pj can enter the critical section
• Does not satisfy progress
Example: Pi finishes the critical section and then
gets stuck indefinitely in its remainder section.
Then Pj enters the critical section, finishes, and
then finishes its remainder section. Pj then tries to
enter the critical section again, but it cannot since
turn was set to i by Pj in the previous iteration.
Since Pi is stuck in the remainder section, turn will
be equal to i indefinitely and Pj can’t enter although
it wants to. Hence no process is in the critical
section and hence no progress.

• We don’t need to discuss/consider bounded wait when progress is not satisfied


Critical Section Problem: Algorithm 2
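The usual second attempt drops the shared turn variable and relies only on intention flags; a sketch in the same style:

/* Shared: flag[i] == true means Pi wants to enter its critical section. */
int flag[2] = { false, false };

/* Process Pi (the other process is Pj): */
while (1)
{
    flag[i] = true;    /* announce intent to enter     */
    while (flag[j]);   /* wait while Pj also wants in  */

    /* critical section */

    flag[i] = false;

    /* remainder section */
}

This satisfies mutual exclusion, but fails progress: if both processes set their flags at the same moment, each loops forever waiting for the other, so neither enters the critical section. Peterson's solution below combines the flag and turn ideas to fix this.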


3. Peterson’s Solution
/* Process P0 */
while (1)
{
    flag[0] = true;
    turn = 1;                               /* give P1 the first chance */
    while (flag[1] == true && turn == 1);   /* busy-wait */

    /* critical section */

    flag[0] = false;
}

/* Process P1 */
while (1)
{
    flag[1] = true;
    turn = 0;                               /* give P0 the first chance */
    while (flag[0] == true && turn == 0);   /* busy-wait */

    /* critical section */

    flag[1] = false;
}
Explanation of Peterson's Algorithm

The algorithm utilizes two main variables, which are a turn and a flag. turn is an integer variable that indicates whose
turn it is to enter the critical section. flag is an array of Boolean variables. Each element in the array represents the
intention of a process to enter the critical section. Let us look at the explanation of how Peterson's algorithm in OS works:

• Firstly, both processes set their flag variables to indicate that they don't currently want to enter the critical section. By
default, the turn variable is set to the ID of one of the processes, it can be 0 or 1. This will indicate that it is initially
the turn of that process to enter the critical section.
• Both processes set their flag variable to indicate their intention to enter the critical section.
• When a process wants to enter, it sets the turn variable to the other process's ID, politely offering the other process
the first chance. For example, when Process 0 wants to enter, it sets turn = 1; it may then proceed only if Process 1 is
not interested, or once the turn comes back to it.
• Both processes enter a loop where they will check the flag of the other process and wait if necessary. The loop
continues as long as the following two conditions are met:
i. The other process has expressed its intention to enter the critical section (flag[1 - processID] == true for Process
processID).
ii. It is currently the turn of the other process (turn == 1 - processID).
• If both conditions are satisfied, the process waits and yields the CPU to the other process. This ensures that the other
process has an opportunity to enter the critical section.
• Once a process successfully exits the waiting loop, then it can enter the critical section. It can also access the shared
resource without interference from the other process. It can perform any necessary operations or modifications within
this section.
• After completing its work in the critical section, the process resets its flag variable. Resetting is required to indicate
that this process no longer wants to enter the critical section (flag[processID] = false). This step ensures that the
process can enter the waiting loop again correctly if needed.


4 Semaphore

Semaphores are integer variables that are used to solve the critical section problem by using
two atomic operations, wait and signal that are used for process synchronization.

The definitions of wait and signal are as follows −

• Wait – P()
The wait operation decrements the value of its argument S if it is positive. If S is zero
(or negative), the caller busy-waits until S becomes positive, and only then decrements it.
wait(S)
{
    while (S <= 0);   // busy-wait

    S--;
}

• Signal – V()
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
4.1 Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −

• Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate the resource access, where the semaphore count is
the number of available resources. If the resources are added, semaphore count
automatically incremented and if the resources are removed, the count is decremented.
• Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0
and 1. The wait operation only works when the semaphore is 1 and the signal operation
succeeds when semaphore is 0. It is sometimes easier to implement binary semaphores
than counting semaphores.

Some points regarding P and V operation:

1. P operation is also called wait, sleep, or down operation, and V operation is also called
signal, wake-up, or up operation.

2. Both operations are atomic, and for mutual exclusion the semaphore S is initialized to one.
Here atomic means that the read, modify and update of the semaphore happen as a single
indivisible step, with no preemption in between that could change the variable.


A critical section is surrounded by both operations to implement process synchronization:
the critical section of a process P sits between its P (wait) and V (signal) operations.

• Now, let us see how it implements mutual exclusion. Let


there be two processes P1 and P2 and a semaphore s is
initialized as 1.
• Now if suppose P1 enters in its critical section then the value
of semaphore s becomes 0.
• Now if P2 wants to enter its critical section then it will wait
until s > 0, this can only happen when P1 finishes its critical
section and calls V operation on semaphore s.
• This way mutual exclusion is achieved. A semaphore used in this manner, taking only the
values 0 and 1, is a binary semaphore.
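As a concrete illustration, here is a minimal POSIX version of the same idea: sem_wait() and sem_post() play the roles of P() and V(), and a binary semaphore initialised to 1 protects a shared counter (compile with -pthread):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t s;                   /* binary semaphore, initialised to 1 */
int shared_counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);      /* P(): enter critical section */
        shared_counter++;  /* protected shared update     */
        sem_post(&s);      /* V(): leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                   /* s = 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* always 200000 */
    sem_destroy(&s);
    return 0;
}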


5 Classical problems of synchronization


Classical problems of synchronization are examples of a large class of concurrency-control
problems. In our solutions to the problems, we use semaphores for synchronization, since
that is the traditional way to present such solutions. However, actual implementations of these
solutions could use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed synchronization scheme.
The following problems of synchronization are considered classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem.
These are summarized below and treated in detail in the following sections.
5.1 Bounded-buffer (or Producer-Consumer) Problem:
Bounded Buffer problem is also called producer consumer problem. This problem is
generalized in terms of the Producer-Consumer problem. Solution to this problem is, creating
two counting semaphores “full” and “empty” to keep track of the current number of full and
empty buffers respectively. Producers produce items and consumers consume them, but each
operation uses one buffer slot at a time.
5.2 Dining-Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are
seated around a circular table with one chopstick between each
pair of philosophers. A philosopher may eat only if he can pick
up the two chopsticks adjacent to him. Each chopstick may be
picked up by either of its two adjacent philosophers, but not
both at once. This problem involves the allocation of limited
resources to a group of processes in a deadlock-free and
starvation-free manner.
5.3 Readers and Writers Problem:
Suppose that a database is to be shared among several
concurrent processes. Some of these processes may want only
to read the database, whereas others may want to update (that is, to read and write) the database.
We distinguish between these two types of processes by referring to the former as readers and
to the latter as writers. Precisely in OS we call this situation as the readers-writers problem.
Problem parameters:

• One set of data is shared among a number of processes.


• Once a writer is ready, it performs its write. Only one writer may write at a time.
• If a process is writing, no other process can read it.
• If at least one reader is reading, no other process can write.
• Readers may not write and only read.


6 Readers-Writers Problem: Semaphore Solution

The readers-writers problem is another example of a classic synchronization problem.

Solution

Variables used –

1. mutex – semaphore for mutual exclusion while readers_count is changed; initialised to 1
2. wrt – semaphore used by both readers and writers; initialised to 1
3. readers_count – counter of the number of processes currently reading; initialised to 0


Readers Problem
while(TRUE)
{
    // a reader wishes to enter the critical section;
    // mutex gives mutual exclusion while readers_count is changed
    wait(mutex);

    // the reader has entered: increase readers_count by 1
    readers_count++;

    // the first reader locks wrt so that no writer can enter
    // while readers are reading (note: == 1, not >= 1, so this
    // is done only once per group of readers; readers therefore
    // have priority over writers)
    if(readers_count == 1)
        wait(wrt);

    // other readers can now enter the critical section
    signal(mutex);

    /* perform the reading operation */

    // a reader wants to leave after reading
    wait(mutex);
    readers_count--;

    // the last reader restores wrt to 1 so writers can write again
    if(readers_count == 0)
        signal(wrt);

    // the reader has now left
    signal(mutex);
}
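The corresponding writer code is much simpler, since a writer needs exclusive access; a sketch in the same wait/signal style:

Writer Process
while(TRUE)
{
    // a writer waits on wrt; this blocks while a group of readers
    // or another writer is inside
    wait(wrt);

    /* perform the write operation */

    // writing is done, allow readers or the next writer in
    signal(wrt);
}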


7 Dining philosopher's problem


• The dining philosophers problem is a classical synchronization problem which says that five
philosophers are sitting around a circular table, and their job is to think and eat alternately.
• A bowl of noodles is placed at the center of the table along with five chopsticks, one between
each pair of philosophers. To eat, a philosopher needs both the right and the left chopstick.

• A philosopher can only eat if both immediate left and right chopsticks of the philosopher is
available. In case if both immediate left and right chopsticks of the philosopher are not
available then the philosopher puts down their (either left or right) chopstick and starts
thinking again.
• The dining philosopher demonstrates a large class of concurrency control problems hence
it's a classic synchronization problem.
• When a philosopher thinks, he does not interact with others. When he gets hungry, he tries
to pick up the two chopsticks that are near to him.
• For example, philosopher 1 will try to pick chopsticks 1 and 2. But a philosopher can
pick up only one chopstick at a time, and cannot take a chopstick that is already in the hands
of a neighbour.
• The philosopher starts to eat when he has both chopsticks in his hands. After eating, the
philosopher puts down both chopsticks and starts to think again.
7.1 Approach to Solution
Here is the simple approach to solving it:

1. We model each chopstick (fork) as a binary semaphore. That implies there exists an
array fork[] of semaphores of size = 5, initially containing 1 at all positions.
2. When a philosopher picks up a fork, he calls wait() on fork[i], which means that
the i'th fork has been acquired.
3. When a philosopher is done eating, he calls signal(). That implies fork[i] is released,
and any other philosopher can pick up this fork and start eating.


Here is the code for this approach:

do{
wait(fork[i]);
wait(fork[(i + 1) % 5]);
//eat noodles

signal(fork[i]);
signal(fork[(i + 1) % 5]);
//think

}while(1);

7.2 The drawback of the above solution of the dining philosopher problem

From the above solution of the dining philosopher problem, we have proved that no two
neighboring philosophers can eat at the same point in time. The drawback of the above solution
is that this solution can lead to a deadlock condition. This situation happens if all the
philosophers pick their left chopstick at the same time, which leads to the condition of deadlock
and none of the philosophers can eat.

To avoid deadlock, some of the solutions are as follows -

o Maximum number of philosophers on the table should not be more than four, in this case,
chopstick C4 will be available for philosopher P3, so P3 will start eating and after the finish
of his eating procedure, he will put down his both the chopstick C3 and C4, i.e. semaphore
C3 and C4 will now be incremented to 1. Now philosopher P2 which was holding chopstick
C2 will also have chopstick C3 available, hence similarly, he will put down his chopstick
after eating and enable other philosophers to eat.
o A philosopher at an even position should pick the right chopstick first and then the left,
while a philosopher at an odd position should pick the left chopstick first and then the
right (see the sketch after this list).
o A philosopher should be allowed to pick up his chopsticks only if both of them (left and
right) are available at the same time.
o All the four starting philosophers ( P0, P1, P2, and P3) should pick the left chopstick and
then the right chopstick, whereas the last philosopher P4 should pick the right chopstick
and then the left chopstick. This will force P4 to hold his right chopstick first since the right
chopstick of P4 is C0, which is already held by philosopher P0 and its value is set to 0, i.e
C0 is already 0, because of which P4 will get trapped into an infinite loop and chopstick
C4 remains vacant. Hence philosopher P3 has both left C3 and right C4 chopstick available,
therefore it will start eating and will put down its both chopsticks once finishes and let
others eat which removes the problem of deadlock.
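The even/odd rule above can be dropped into the earlier code with one branch; a sketch using the same wait/signal conventions:

do {
    if (i % 2 == 0) {               /* even philosopher: right chopstick first */
        wait(fork[(i + 1) % 5]);
        wait(fork[i]);
    } else {                        /* odd philosopher: left chopstick first   */
        wait(fork[i]);
        wait(fork[(i + 1) % 5]);
    }

    /* eat noodles */

    signal(fork[i]);
    signal(fork[(i + 1) % 5]);

    /* think */
} while (1);

With this asymmetry, the five wait() calls can no longer form a cycle, so deadlock is impossible.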


8 Producer consumer problem solution using semaphore


In the producer-consumer problem, we use three semaphore variables:

1. Semaphore S: This semaphore variable is used to achieve mutual exclusion between


processes. By using this variable, either Producer or Consumer will be allowed to use
or access the shared buffer at a particular time. This variable is set to 1 initially.
2. Semaphore E: This semaphore variable is used to define the empty space in the
buffer. Initially, it is set to the whole space of the buffer i.e. "n" because the buffer is
initially empty.
3. Semaphore F: This semaphore variable is used to define the space that is filled by
the producer. Initially, it is set to "0" because there is no space filled by the producer
initially.

void producer() {
    while(T) {
        produce();
        wait(E);       // wait for an empty slot
        wait(S);       // lock the buffer
        append();
        signal(S);     // unlock the buffer
        signal(F);     // one more filled slot
    }
}

void consumer() {
    while(T) {
        wait(F);       // wait for a filled slot
        wait(S);       // lock the buffer
        take();
        signal(S);     // unlock the buffer
        signal(E);     // one more empty slot
        use();
    }
}


9 What is Inter process Communication?


A process can be of two types:

• Independent process.

• Co-operating process.

An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. Though one can think that those
processes, which are running independently, will execute very efficiently, in reality, there are
many situations when co-operative nature can be utilized for increasing computational speed,
convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of co-operation between them. Inter process
communication (IPC) is a process that allows different processes of a computer system to share
information. IPC lets different programs run in parallel, share data, and communicate with each
other. It’s important for two reasons: First, it speeds up the execution of tasks, and secondly, it
ensures that the tasks run correctly and in the intended order.

Processes can communicate with each other through both:

1. Shared Memory

2. Message passing

i) Shared Memory Method

• Communication between processes using shared memory requires processes to share some
variable, and it completely depends on how the programmer will implement it.
• Suppose process1 and process2 are executing simultaneously, and they share some resources or use
some information from another process. Process1 generates information about certain computations or
resources being used and keeps it as a record in shared memory.
• When process2 needs to use the shared information, it will check in the record stored in shared memory
and take note of the information generated by process1 and act accordingly. Processes can use shared
memory for extracting information as a record from another process as well as for delivering any
specific information to other processes.


ii) Message Passing Method

Now we will discuss communication between processes via message passing. In this method,
processes communicate with each other without using any kind of shared memory. If two
processes p1 and p2 want to communicate with each other, they proceed as follows:

• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be of fixed size or of variable size. If it is of fixed size, it is easy for an
OS designer but complicated for a programmer and if it is of variable size then it is easy for
a programmer but complicated for the OS designer. A standard message can have two
parts: header and body.
The header part is used for storing the message type, destination id, source id, message length,
and control information. The control information covers things like what to do if the receiver
runs out of buffer space, a sequence number, and a priority. Generally, messages are delivered
in FIFO order.
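As a concrete sketch of the message passing idea, the POSIX pipe below gives a one-way FIFO communication link between a parent (sender) and a child (receiver); the message text is arbitrary:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char msg[] = "hello from parent";
    char buf[64];

    if (pipe(fd) == -1) return 1;        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                   /* child: the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                             /* parent: the sender  */
        close(fd[0]);
        write(fd[1], msg, strlen(msg));  /* send(message)       */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}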

Message Passing through a Communication Link: Direct and Indirect Communication

Direct communication links are implemented when the processes use a specific process
identifier for the communication, but it can be hard to know the identifier of the other
process ahead of time (for example, a print server).
Indirect communication is done via a shared mailbox (port), which consists of a queue of
messages. The sender keeps the message in the mailbox and the receiver picks it up.
Read more: https://www.geeksforgeeks.org/inter-process-communication-ipc/


10 Deadlock
Every process needs some resources to complete its execution. However, the resource is
granted in a sequential order.

1. The process requests some resource.

2. The OS grants the resource if it is available; otherwise the process waits.

3. The process uses it and releases it on completion.

A deadlock is a situation where each process in a set waits
for a resource which is assigned to some other process. In
this situation, none of the processes gets executed, since
the resource each one needs is held by some other process
which is itself waiting for yet another resource to be released.

Let us assume that there are three processes P1, P2 and P3. There are three different
resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to
P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since
it can't complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also
stops its execution because it can't continue without R3. P3 then demands R1, which is
being used by P1, therefore P3 also stops its execution.

10.1 Necessary conditions for Deadlocks


1. Mutual Exclusion


A resource can only be shared in a mutually exclusive manner: two processes cannot use the
same resource at the same time.

2. Hold and Wait

• A process waits for some resources while holding another resource at the same time.
• Hold and wait is when a process is holding a resource and waiting to acquire another
resource that it needs but cannot proceed because another process is keeping the first
resource. Each of these processes must have a hold on at least one of the resources it’s
requesting.

3. No preemption

• A resource, once allocated to a process, cannot be forcibly taken away from it. It is
released only voluntarily by the process holding it, after that process is done with it.
• Preemption means temporarily interrupting a task or process to execute another task or
process. Preemption can occur due to an external event or internally within the system.
If we take away the resource from the process that is causing deadlock, we can avoid
deadlock. But is it a good approach? The answer is NO because that will lead to an
inconsistent state.
• For example, if we take away memory from any process(whose data was in the process
of getting stored) and assign it to some other process. Then will lead to an inconsistent
state.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.


10.2 Methods For Handling Deadlocks


Methods that are used in order to handle the problem of deadlocks are as follows:

10.2.1 Deadlock Prevention


As we have discussed in the above section, a deadlock occurs only when all four
conditions (Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait) hold in
a system at the same time. The main aim of the deadlock prevention method is to violate
any one condition among the four, because if any one of them is violated then the
problem of deadlock will never occur. The idea behind this method is simple, but
difficulty can occur during its physical implementation in the system.

10.2.2 Avoiding the Deadlock


This method is used by the operating system in order to check whether the system is in
a safe state or in an unsafe state. This method checks every step performed by the
operating system. Any process continues its execution until the system is in a safe state.
Once the system enters into an unsafe state, the operating system has to take a step back.

Basically, with the help of this method, the operating system keeps an eye on each
allocation, and make sure that allocation does not cause any deadlock in the system.

10.2.3 Deadlock detection and recovery


With this method, the deadlock is detected first by using some algorithms of the
resource-allocation graph. This graph is mainly used to represent the allocations of
various resources to different processes. After the detection of deadlock, a number of
methods can be used in order to recover from that deadlock.

10.2.4 Ignoring the Deadlock


According to this method, it is assumed that deadlock will never occur. This approach
is used by many operating systems, which simply ignore the deadlock. It can be
acceptable for systems that are only used for browsing and other routine tasks. Thus,
ignoring the deadlock can be useful in many cases, but it is not a perfect way to
remove deadlock from the operating system.


11 Deadlock Prevention
If we simulate deadlock with a table standing on its four legs, then we can also
simulate the four legs with the four conditions which, when they occur simultaneously,
cause the deadlock.
If we break one of the legs of the table, the table will certainly fall. The same
happens with deadlock: if we can violate one of the four necessary conditions and
prevent them from occurring together, then we can prevent the deadlock.

Let's see how we can prevent each of the conditions.

11.1 Mutual Exclusion

Mutual exclusion from the resource point of view means that a resource can never be used by
more than one process simultaneously, which is fair enough, but it is also the main reason
behind deadlock. If a resource could be used by more than one process at the same time,
no process would ever be waiting for any resource.

11.1.1 Spooling
For a device like a printer, spooling can work. There is a memory associated with the
printer which stores jobs from each of the processes. Later, the printer
collects all the jobs and prints each one of them in FCFS
order. By using this mechanism, a process doesn't
have to wait for the printer and can continue with whatever it
was doing. Later, it collects the output when it is produced.

Although, Spooling can be an effective approach to violate


mutual exclusion but it suffers from two kinds of problems.

1. This cannot be applied to every resource.


2. After some point of time, there may arise a race condition between the processes to get
space in that spool.

We cannot force a resource to be used by more than one process at the same time since it will
not be fair enough and some serious problems may arise in the performance. Therefore, we
cannot violate mutual exclusion for a process practically.

11.2 Hold and Wait

The hold and wait condition arises when a process holds a resource while waiting for some
other resource to complete its task. Deadlock occurs because more than one process can be
holding one resource and waiting for another in a cyclic order.


• However, we have to find out some mechanism by which a process either doesn't hold any
resource or doesn't wait. That means, a process must be assigned all the necessary resources
before the execution starts. A process must not wait for any resource once the execution
has been started.
• This can be implemented if a process declares all the resources it needs initially.
However, although this sounds simple, it can't be done in a real computer system,
because a process can't determine its necessary resources in advance.
• Process is the set of instructions which are executed by the CPU. Each of the instruction
may demand multiple resources at the multiple times. The need cannot be fixed by the OS.

The problem with the approach is:


1. Practically not possible.
2. The possibility of starvation increases, due to the fact that some process
may hold a resource for a very long time.

11.3 No Preemption

• Deadlock arises due to the fact that a process can't be stopped once it starts. However, if
we take the resource away from the process which is causing deadlock then we can prevent
deadlock.
• This is not a good approach at all since if we take a resource away which is being used by
the process then all the work which it has done till now can become inconsistent.
• Consider a printer is being used by any process. If we take the printer away from that
process and assign it to some other process then all the data which has been printed can
become inconsistent and ineffective and also the fact that the process can't start printing
again from where it has left which causes performance inefficiency.

11.4 Eliminate Circular Wait


• Each resource is assigned a numerical number.
• To eliminate circular wait, a process can only request resources in
increasing order of this numbering; a request for a resource numbered
lower than one the process already holds is invalid.
• In the example above, process P3 is requesting resource R1, which
has a number lower than resource R3 already allocated to P3. So this
request is invalid and cannot be made; besides, R1 is already
allocated to process P1.
• Challenges:

o It is difficult to assign a relative priority to resources, as one resource can be prioritized


differently by different processes. For Example: A media player will give a lesser priority
to a printer while a document processor might give it a higher priority. The priority of
resources is different according to the situation and use case.
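A minimal sketch of resource ordering with two pthread mutexes standing in for numbered resources; because both threads always lock in increasing order, a hold-and-wait cycle can never form:

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

void *thread_a(void *arg) {
    pthread_mutex_lock(&r1);      /* lower number first */
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *thread_b(void *arg) {
    /* Also r1 before r2, never the reverse, so thread_a and
       thread_b can never each hold one lock while waiting
       for the other. */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}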


12 Deadlock Ignorance or Ostrich Algorithm


• The simplest approach is the ostrich algorithm: stick your head in the sand and pretend
there is no problem at all.
• Different people react to this strategy in different ways. Mathematicians find it totally
unacceptable and say that deadlocks must be prevented at all costs.
• Engineers ask how often the problem is expected, how often the system crashes for other
reasons, and how serious a deadlock is.
• If deadlocks occur on average once every five years, but system crashes due to hardware
failures, compiler errors, and operating system bugs occur once a week, most engineers
would not be willing to pay a large penalty in performance or convenience to eliminate
deadlocks.
• Most operating systems, including UNIX and Windows, just ignore the problem on the
assumption that most users would prefer an occasional deadlock to a rule restricting all
users to one process, one open file, and one of everything.
• If deadlocks could be eliminated for free, there would not be much discussion. The problem
is that the price is high, mostly in terms of putting inconvenient restrictions on processes,
as we will see shortly. Thus we are faced with an unpleasant trade-off between convenience
and correctness, and a great deal of discussion about which is more important, and to whom.
Under these conditions, general solutions are hard to find.

13 Deadlock Avoidance in OS
• The operating system avoids Deadlock by knowing the maximum resource requirements
of the processes initially, and also, the Operating System knows the free resources available
at that time.
• The operating system tries to allocate the resources according to the process requirements
and checks if the allocation can lead to a safe state or an unsafe state. If the resource
allocation leads to an unsafe state, then the Operating System does not proceed further with
the allocation sequence.
Safe State and Unsafe State
Safe State – If the Operating System is able to satisfy the needs of all
processes with their resource requirements, then all the processes are
able to complete their execution in a certain order. So, if the Operating
System can satisfy the maximum resource requirements of all the
processes in some order, the system is said to be in a Safe State. A
safe state does not lead to deadlock.
Unsafe State – If the Operating System is not able to prevent processes
from requesting resources in a way that can lead to a deadlock, then the
system is said to be in an Unsafe State.
An unsafe state does not necessarily cause deadlock; it may or may not lead to deadlock.


13.1 Banker’s Algorithm


Example:
Considering a system with five processes P0 through P4 and three resources of type A, B, C. Resource
type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at time t0 following
snapshot of the system has been taken:
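The snapshot is assumed here to be the standard textbook one for exactly these resource totals (Silberschatz et al.); Need is computed as Max - Allocation:

Process    Allocation    Max         Available
           A  B  C       A  B  C     A  B  C
P0         0  1  0       7  5  3     3  3  2
P1         2  0  0       3  2  2
P2         3  0  2       9  0  2
P3         2  1  1       2  2  2
P4         0  0  2       4  3  3

Running the safety algorithm: Work starts at (3, 3, 2). P1's need (1, 2, 2) fits, so P1 runs and returns its allocation, making Work (5, 3, 2); then P3 (need (0, 1, 1)) brings Work to (7, 4, 3); then P4 (need (4, 3, 1)) brings it to (7, 4, 5); after that P0 and P2 can both finish. The system is therefore in a safe state, with safe sequence <P1, P3, P4, P0, P2> (other orders, such as <P1, P3, P4, P2, P0>, are also safe).

The safety check is mechanical; a minimal C sketch with the snapshot above hard-coded:

#include <stdio.h>
#include <stdbool.h>

#define P 5   /* processes      */
#define R 3   /* resource types */

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[R]    = {3,3,2};

    int need[P][R];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    bool finished[P] = {false};
    int seq[P], count = 0;

    /* Safety algorithm: repeatedly find a process whose Need fits in
       the currently available resources, let it finish, reclaim. */
    while (count < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];   /* reclaim its resources */
                finished[i] = true;
                seq[count++] = i;
                progressed = true;
            }
        }
        if (!progressed) { printf("System is NOT in a safe state\n"); return 1; }
    }

    printf("Safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", seq[i]);
    printf("\n");   /* prints: P1 P3 P4 P0 P2 */
    return 0;
}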


14 Deadlock Detection and Recovery


14.1 Deadlock Detection
• When a deadlock is detected, the system can initiate recovery procedures to break the
deadlock and restore the system to a functional state.
• It is essential to monitor and detect deadlocks as early as possible to prevent any negative
impact on the system's performance and stability. A delay in detecting deadlocks can result
in significant wait times for processes and unresponsive systems, leading to user frustration
and potential business losses. There are two primary methods for detecting deadlocks:
resource allocation graph (RAG) and wait-for graph (WFG).

14.1.1 Resource Allocation Graph (RAG)


• The resource allocation graph (RAG) is a widely used method for deadlock detection
in computer systems. The RAG is a graphical representation of the current allocation
state of resources and the processes that are holding them. The nodes of the graph
represent the resources and the processes, and the edges represent the allocation
relationship between them.
• In the RAG method, a cycle in the graph indicates the presence of a deadlock. The RAG
method is highly efficient and can quickly detect deadlocks, making it an essential
technique in modern operating systems.

Deadlocks in a RAG with multiple instances per resource can be found using a Banker's-style detection algorithm.


14.1.2 Wait-For Graph (WFG)


The wait-for graph (WFG) is a common method used in
deadlock detection in computer systems. The WFG is a
graphical representation of the dependencies between the
processes and the resources that they are waiting for. In the
WFG, the nodes represent processes, and waiting relationships
are represented as edges. Each edge points from the process
that is waiting for a resource to the process that currently
holds that resource. The WFG method can efficiently
detect deadlocks by analysing the graph for cycles. If a
cycle is found, it indicates that a set of processes is waiting
for resources that are being held by other processes in the same set, resulting in a deadlock.
The system can then take appropriate actions to break the deadlock, such as rolling back the
transactions or aborting some of the processes.
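Cycle detection on a wait-for graph is a simple depth-first search; a minimal C sketch with a hypothetical 4-process graph hard-coded as an adjacency matrix:

#include <stdio.h>

#define N 4   /* number of processes */

/* wfg[i][j] = 1 means process i is waiting for process j. */
int wfg[N][N] = {
    {0, 1, 0, 0},   /* P0 waits for P1                        */
    {0, 0, 1, 0},   /* P1 waits for P2                        */
    {1, 0, 0, 0},   /* P2 waits for P0 -> cycle P0->P1->P2->P0 */
    {0, 0, 0, 0},   /* P3 waits for nobody                    */
};

/* DFS colouring: 0 = unvisited, 1 = on current path, 2 = done. */
int state[N];

int has_cycle_from(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wfg[u][v]) continue;
        if (state[v] == 1) return 1;   /* back edge: cycle found */
        if (state[v] == 0 && has_cycle_from(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    for (int i = 0; i < N; i++)
        if (state[i] == 0 && has_cycle_from(i)) {
            printf("Deadlock detected (cycle in wait-for graph)\n");
            return 0;
        }
    printf("No deadlock\n");
    return 0;
}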

14.2 Deadlock Recovery

A traditional operating system such as Windows doesn’t deal with deadlock recovery as it is
a time and space-consuming process. Real-time operating systems use Deadlock recovery.
1. Killing the process – Kill all the processes involved in the deadlock, or kill them one by one:
after killing each process, check for deadlock again and keep repeating until the system recovers.
Killing processes one by one helps the system break the circular wait condition.
2. Resource Preemption – Resources are preempted from the processes involved in the deadlock, and
the preempted resources are allocated to other processes, so that there is a possibility of
recovering the system from the deadlock. In this case, the preempted process may starve.
3. Concurrency Control – Concurrency control mechanisms are used to prevent data inconsistencies
in systems with multiple concurrent processes. These mechanisms ensure that concurrent processes
do not access the same data at the same time, which can lead to inconsistencies and errors.
Deadlocks can occur in concurrent systems when two or more processes are blocked, waiting for
each other to release the resources they need. This can result in a system-wide stall, where no
process can make progress. Concurrency control mechanisms can help prevent deadlocks by
managing access to shared resources and ensuring that concurrent processes do not interfere with
each other.

Example (deadlock detection with two resource types):

Step-01:

• Since process P3 does not need any resource, it executes.
• After execution, process P3 releases its resources.

Then,
Available = [0 0] + [0 1] = [0 1]

Step-02:

• With the instances available currently, only the requirement of process P1 can be satisfied.
• So, process P1 is allocated the requested resources.
• It completes its execution and then frees up the instances of resources held by it.

Then,
Available = [0 1] + [1 0] = [1 1]

Step-03:

• With the instances available currently, the requirement of process P2 can be satisfied.
• So, process P2 is allocated the requested resources.
• It completes its execution and then frees up the instances of resources held by it.

Then,
Available = [1 1] + [0 1] = [1 2]

Thus,

• There exists a safe sequence P3, P1, P2 in which all the processes can be executed.
• So, the system is in a safe state.


