Unit I and II, Operating System, Dr. Ashish
Unit-1
Unit I
Unit II
Unit III
Unit IV
I/O Devices and the organization of the I/O function, I/O Buffering, Disk I/O, Disk Scheduling Algorithms; File system: File Concepts, attributes, operations, File organization and Access mechanism, disk space allocation methods, Directory structure, free disk space management, File sharing, Implementation issues. Case studies: Unix system, Windows XP.
1 INTRODUCTION
• An Operating System acts as a communication bridge (interface) between the user
and computer hardware.
• The purpose of an operating system is to provide a platform on which a user can execute
programs conveniently and efficiently.
• The main goal of the Operating System is to make the computer environment more
convenient to use, and the secondary goal is to use the resources most efficiently.
The goals of an operating system include:
• Convenience
• Efficiency
• Hardware Abstraction
• Security
1. Convenience
An operating system's first and primary goal is to provide a friendly and convenient environment to the user. Using an operating system is optional, but without one the user would have to perform all process scheduling and translate commands into machine language so that the system could perform tasks. The operating system therefore acts as a bridge between the user and the computer hardware: we only give commands to the system, and the OS takes the instructions and does the rest of the work. Because of this, an operating system should be convenient for the user to use and operate.
2. Efficiency
The second important goal of an operating system is efficiency. An operating system should utilize all resources efficiently: resources and programs should be managed so that no resource sits idle and no memory is wasted.
The operating system can also work on different machines with different processors and memory configurations, which makes it more reliable. In addition, the operating system can protect itself and the user from accidental damage caused by user programs.
3. Hardware Abstraction
The operating system conceals, and can be said to control, all functions and resources of the computer. The user can give commands and access any function or resource of the computer without facing the underlying hardware details. In this way, the operating system mediates between the user and the computer hardware.
4. Security
An operating system provides safety and security for data between the user and the hardware. It enables multiple users to share a computer securely, keeping each user's files, processes, memory, and devices separate.
The main functions of an operating system include:
1. Memory Management
2. Processor Management
3. Device Management
4. File Management
5. Network Management
• Security
• Job accounting
Memory management refers to the allocation and deallocation of memory to various processes, and ensures that one process does not consume the memory allocated to another. An Operating System performs the following activities for Memory Management:
• It keeps track of primary memory: which bytes of memory are used by which user program, which memory addresses have already been allocated, and which are still free.
• In multiprogramming, the OS decides the order in which processes are granted memory
access, and for how long.
• It allocates memory to a process when the process requests it, and deallocates the memory when the process has terminated or is performing an I/O operation.
• When more than one process runs on the system, the OS decides how and when each process will use the CPU; hence this function is also called CPU scheduling. The OS:
• Allocates and deallocates the processor to processes.
• Keeps a record of CPU status.
• Common algorithms used for CPU scheduling are:
o First Come First Serve (FCFS)
o Shortest Job First (SJF)
o Round-Robin Scheduling
o Priority-based scheduling, etc.
• The operating system is in charge of storing and accessing files. The creation of files, the
creation of directories, the reading and writing of data from files and directories, as well
as the copying of the contents of files and directories from one location to another are all
included in storage management.
• Storage management also helps businesses store more data on existing hardware, speeds up data retrieval, prevents data loss, meets data retention regulations, and lowers IT costs.
2.6 Following are some of the important activities that an Operating System performs −
• Security – For security, modern operating systems employ a firewall. A firewall is a type
of security system that monitors all computer activity and blocks it if it detects a threat.
• Job Accounting – The operating system keeps track of all the functions of a computer system, so it makes a record of all the activities taking place on the system. It keeps an account of all the information about memory, resources, errors, etc., which can be used as and when required.
• Control over system performance – The operating system will collect consumption
statistics for various resources and monitor performance indicators such as reaction time,
which is the time between requesting a service and receiving a response from the system.
• Error detecting aids – While a computer system is running, a variety of errors might occur.
Error detection guarantees that data is delivered reliably across susceptible networks. The
operating system continuously monitors the system to locate or recognize problems and
protects the system from them.
• Coordination between other software and users – The operating system (OS) allows
hardware components to be coordinated and directs and allocates assemblers, interpreters,
compilers, and other software to different users of the computer system.
From a broader perspective, the evolution of operating systems can be divided into four generations. Let us briefly discuss this generation-based evolution of operating systems with its timeline.
1. First Generation (1945-1955): The first generation of operating systems was used from 1945 to 1955, during the development of electronic computing systems. Users or programmers provided the instructions (through punch cards, paper tape, magnetic tape, etc.) and the computer had to follow them. Due to this human intervention, the process was very slow and there were chances of human mistakes.
Low speed and frequent errors were the drawbacks of the first-generation operating systems.
2. Second Generation (1955-1965): The second generation of operating systems was used from 1955 to 1965, during the development of batch operating systems. During this phase, users prepared their instructions (tasks or jobs) on an off-line device such as punch cards and submitted them to the computer operator. These punch cards were tabulated into instructions for the computer, and cards containing similar jobs were grouped and run as a batch to speed up the entire process. A job consisted of the program and its input data along with control instructions. The main task of the programmer or developer was to create jobs and hand them over to the operator in the form of punch cards; it was then the operator's duty to sort the programs with similar requirements into batches.
Drawbacks:
o The priority of jobs could not be set, as jobs were scheduled only on the basis of their similarity.
o The CPU was not utilized to its full potential, since it sat idle while the operator loaded jobs.
3. Third Generation (1965-1980): The third generation of operating systems was used from 1965 to 1980, during the development of multiprogramming operating systems. The third-generation operating system was developed to serve more than one user at a time (multi-user). During this period, users could communicate with the operating system through software called the command-line interface. Computers thus became multi-user and multiprogramming.
4. Fourth Generation (1980-Now): The fourth generation of operating systems has been in use from 1980 until now. Before this generation, users communicated with the operating system through command-line interfaces, punch cards, magnetic tapes, etc., so the user had to provide commands that needed to be remembered, which became hectic.
So, the fourth generation of operating systems came into existence with the development of
GUI (Graphical User Interface). The GUI made the user experience more convenient.
• Networked Systems (1980s to 1990s): This era witnessed the proliferation of networked
computing environments, enabling multiple computers to communicate and share
resources over a network.
• Mobile Operating Systems (Late 1990s to Early 2000s): The late 20th century and early 21st century saw the emergence of mobile operating systems, specifically designed for handheld devices. This development fundamentally changed how we interact with technology.
• More recent operating systems have integrated Artificial Intelligence (AI) technologies, leading to more intelligent and adaptive computing experiences.
In a multiprogramming operating system, more than one program is present in main memory, and any one of them may be in execution at a given time. This is done for better utilization of resources.
• There is no facility for user interaction with the system while jobs are running.
Soft Real-Time Systems: These OSs are for applications where the time constraints are less strict.
4.4 Distributed OS
This system is based on autonomous but interconnected computers communicating with each
other via communication lines or a shared network. Each autonomous system has its own
processor that may differ in size and function. These operating systems are often used for tasks
such as telecommunication networks, airline reservation controls and peer-to-peer networks.
A distributed operating system serves multiple applications and multiple users in real time. The
data processing function is then distributed across the processors. Potential advantages and
disadvantages of distributed operating systems are:
Advantages:
• They allow a faster exchange of data among users.
• Failure at one site may not cause much disruption to the system.
Disadvantages:
• They're expensive to install.
• They require a high level of expertise to maintain.
4.5 Network OS
Network operating systems are installed on a server providing users with the capability to manage
data, user groups and applications. This operating system enables users to access and share files
and devices such as printers, security software and other applications, mostly in a local area
network.
Examples of network operating systems include Microsoft Windows, Linux and macOS X.
Potential advantages and disadvantages of these systems are:
Advantages:
• Centralized servers provide high stability.
• It's easy to upgrade and integrate new technologies.
Disadvantages:
• They require regular updates and maintenance.
• Users' reliance on a central server might be detrimental to workflows.
• An interactive operating system allows the user to interact directly with the computer.
In this type of operating system, the user enters a command into the system, and the
system executes it.
• Programs that allow users to enter data or commands are known as interactive
computer systems. The majority of commonly used software, such as word processors
and spreadsheet applications, are interactive.
• A non-interactive program is one that, once started, continues without the need for human
interaction. A compiler, like all batch processing applications, is a non-interactive
program.
1) Batch Processing
2) Multitasking
3) Multiprogramming
4) Distributive Environment
5) Interactivity
The ability of a user to interact with a system is referred to as interactivity. The operating system
(OS) provides an interface for interacting with the system, manages I/O devices, and ensures a
quick response time.
6) Real-Time System
Dedicated embedded systems are real-time systems. To ensure good performance, the OS reads
and reacts to sensor data and provides a response in a fixed time period.
7) Spooling
Spooling is the process of pushing data from various I/O jobs into a buffer, disc, or somewhere
in the memory so that a device can access the data when it is ready.
The OS handles I/O device data spooling and maintains the spooling buffer because devices have varying data access rates. The buffer acts as a waiting station where data rests while slower devices catch up. Print spooling is one application of spooling.
There are several ways in which an operating system can provide system protection:
User authentication: The operating system requires users to authenticate themselves before
accessing the system. Usernames and passwords are commonly used for this purpose.
Access control: The operating system uses access control lists (ACLs) to determine which users
or processes have permission to access specific resources or perform specific actions.
Encryption: The operating system can use encryption to protect sensitive data and prevent
unauthorized access.
Firewall: A firewall is a software program that monitors and controls incoming and outgoing
network traffic based on predefined security rules.
Antivirus software: Antivirus software is used to protect the system from viruses, malware, and
other malicious software.
System updates and patches: The operating system must be kept up-to-date with the latest
security patches and updates to prevent known vulnerabilities from being exploited.
By implementing these protection mechanisms, the operating system can prevent unauthorized
access to the system, protect sensitive data, and ensure the overall security and integrity of the
system.
Advantages of system protection:
• Prevents unauthorized access, misuse, or modification of the operating system and its resources
• Prevents malware and other security threats from infecting the system
• Allows for safe sharing of resources and data among users and applications
Disadvantages of system protection:
• Can create a false sense of security if users are not properly educated on safe computing practices
• Can create additional costs for implementing and maintaining security measures
7.2 Monolithic Structure:
• Like the microkernel, this one also manages system resources between application and hardware, but user services and kernel services are implemented under the same address space.
• It increases the size of the kernel, thus increasing the size of the operating system as well.
• This kernel provides CPU scheduling, memory management, file management, and other
operating system functions through system calls.
• As both services are implemented under the same address space, this makes operating system
execution faster.
• If any one service fails, the entire system crashes; this is one of the drawbacks of this kernel. The entire operating system also needs modification if the user adds a new service.
Advantages
• One of the major advantages of having a monolithic kernel is that it provides CPU
scheduling, memory management, file management, and other operating system functions
through system calls.
• The other one is that it is a single large process running entirely in a single address space.
• It is a single static binary file. Examples of some Monolithic Kernel-based OSs are Unix,
Linux, Open VMS, XTS-400, z/TPF.
Disadvantages
• One of the major disadvantages of a monolithic kernel is that if any one service fails, it leads to an entire system failure.
• If the user has to add any new service, the entire operating system needs to be modified.
7.3 Layered Structure:
• To eliminate the disadvantages of the simple structure of MS-DOS and the monolithic structure of UNIX, the layered structure came into the picture.
• The OS is broken into pieces while retaining much more control over the system.
• In this structure the OS is broken into number of layers (levels).
• The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the user
interface.
• These layers are so designed that each layer uses the functions of the lower level layers
only.
• This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, the error must be in the current layer, since the layers below it have already been verified.
• The main disadvantage of this structure is that at each layer the data needs to be modified and passed on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a layer can use only lower-level layers.
• UNIX is an example of this structure.
Advantages:
Disadvantages:
• It requires careful planning to design the layers, as higher layers use the functionalities of only the lower layers.
7.4 Microkernel Structure:
• In this structure, all new services are added to user space, and the kernel does not need to be modified.
• Thus it is more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.
Advantages:
Disadvantages:
Performance of a microkernel system can be poor and may sometimes cause problems.
8 CPU Scheduling
• Scheduling is the process of allotting the CPU to the processes present in the ready queue.
We also refer to this procedure as process scheduling.
• The operating system schedules the process so that the CPU always has one process to
execute. This reduces the CPU’s idle time and increases its utilization.
• The part of OS that allots the computer resources to the processes is termed as
a scheduler. It uses scheduling algorithms to decide which process it must allot to the
CPU.
• The process advances to the ready state after the I/O operation is completed or the resource
becomes available.
Until the main memory becomes available, the process stays in the suspend-ready state. The
process is brought to its ready state when the main memory becomes accessible.
• The process gets moved to the suspend-ready state once the resource becomes accessible.
The process is shifted to the ready state once the main memory is available.
Note – 02:
Only one process can run at a time on a single CPU.
Note – 04:
It is much more preferable to move a given process from its wait state to its suspend wait
state.
• Consider the situation where a high-priority process comes, and the main memory is full.
• Then there are two options for making space for it. They are:
1. Suspending a lower-priority process from the ready state.
2. Transferring a lower-priority process from the wait state to the suspend-wait state.
Now, out of these: Moving a process from a wait state to a suspend wait state is the superior
option.
• It is because this process is waiting already for a resource that is currently unavailable.
degree of multiprogramming. However, the main goal of this type of scheduler is to offer a balanced mix of jobs (processor-bound and I/O-bound) that allows multiprogramming to be managed.
A running process becomes suspended if it makes an I/O request. A suspended process cannot make any progress towards completion, so to remove it from memory and make space for other processes, the suspended process is moved to secondary storage.
The short-term scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them. The dispatcher then gives control of the CPU to the process selected by the short-term scheduler.
9 Scheduling Objectives
Preemptive Scheduling:
Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
The resources (mainly CPU cycles) are allocated to a process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining, and it stays there until it gets its next chance to execute.
Non-Preemptive Scheduling:
Non-preemptive scheduling is used when a process terminates or switches from the running state to the waiting state.
In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state.
Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of execution; instead, it waits until the process completes its CPU burst and then allocates the CPU to another process.
Comparison of Preemptive and Non-Preemptive Scheduling:
• Basic – Preemptive: the resources are allocated to a process for a limited time. Non-preemptive: once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
• Interrupt – Preemptive: a process can be interrupted in between. Non-preemptive: a process cannot be interrupted till it terminates or switches to the waiting state.
• Starvation – Preemptive: if high-priority processes frequently arrive in the ready queue, a low-priority process may starve. Non-preemptive: if a process with a long burst time is running on the CPU, another process with a smaller burst time may starve.
• Overhead – Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling has no such overhead.
• Flexibility – Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
• Cost – Preemptive scheduling has costs associated with it; non-preemptive scheduling does not.
9.1 Scheduling Criteria
There are several different criteria to consider when trying to select the "best" scheduling
algorithm for a particular situation and environment,
including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
o Throughput - Number of processes completed per unit time. May range from 10/second to 1/hour depending on the specific processes.
o Turnaround time - Time required for a particular process to complete, from submission
time to completion.
o Waiting time - How much time processes spend in the ready queue waiting their turn to get
on the CPU.
o Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
In brief:
1. Arrival Time: Time at which the process arrives in the ready queue.
2. Completion Time: Time at which process completes its execution.
3. Burst Time: Time required by a process for CPU execution.
4. Turn Around Time: Time Difference between completion time and arrival time. Turn
Around Time = Completion Time – Arrival Time
5. Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
6. Response Time
In an interactive system, turn-around time is not the best criterion. A process may
produce some output fairly early and continue computing new results while previous
results are being output to the user. Thus another criterion is the time taken from
submission of the process of the request until the first response is produced. This
measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time
10.1 FCFS (First Come First Serve)
Advantages-
It is simple and easy to understand.
Example 1:
Example 2:
Consider the processes P1, P2, P3 given in the Gantt chart below, arriving for execution in the same order at time 0, with burst times of 24 ms, 3 ms, and 3 ms respectively.
Gantt chart:
| P1 | P2 | P3 |
0    24   27   30
Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms
Total Turn Around Time: 24 + 27 + 30 = 81 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= 81 / 3 = 27 ms
Throughput = 3 jobs/30 sec = 0.1 jobs/sec
Example 3:
1. Consider the processes P1, P2, P3, P4 given in the below table, arriving for execution in the same order, with the given Arrival Time and Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt chart
| P1 | P2 | P3 | P4 |
0    8    12   21   26
Average Waiting Time = (Total Wait Time) / (Total number of processes)= 35/4 = 8.75 ms
Total Turn Around Time: 8 + 11 + 19 + 23 = 61 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
61/4 = 15.25 ms
Advantages-
SRTF is optimal and guarantees the minimum average waiting time.
It provides a standard for other algorithms since no other algorithm performs better
than it.
Disadvantages-
It cannot be implemented practically, since the burst times of processes cannot be known in advance.
It leads to starvation for processes with larger burst times.
Priorities cannot be set for the processes.
Processes with larger burst times have poor response times.
Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
Solution-
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turnaround time.
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Example-02:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and average
turnaround time.
Solution-
Gantt Chart-
Now,
10.3 SRTF
Example-03:
Consider the set of 6 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is shortest remaining time first, calculate the average
waiting time and average turnaround time.
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Example -04:
Consider the set of 3 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SRTF, calculate the average waiting time and average turn
around time.
Solution- Gantt
Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Now,
Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit
Example-05:
Consider the set of 4 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SRTF, calculate the waiting time of process P2.
Solution-
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Thus,
Turn Around Time of process P2 = 55 – 15 = 40 unit
Waiting time of process P2 = 40 – 25 = 15 unit
10.4 Round Robin Scheduling
Advantages-
Disadvantages-
• It leads to starvation for processes with larger burst time as they have to repeat the cycle
many times.
• Its performance heavily depends on time quantum.
• Priorities can not be set for the processes.
With decreasing value of the time quantum:
• The number of context switches increases
• Response time decreases
Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate the
average waiting time and average turnaround time.
Solution-
Ready Queue- P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average
waiting time and average turnaround time.
Solution-
Ready Queue- P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Problem-03: Consider the set of 6 processes whose arrival time and burst time are given
below-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id   Exit time   Turn Around time   Waiting time
P1           32          32 – 5 = 27        27 – 5 = 22
P2           27          27 – 4 = 23        23 – 6 = 17
P3           33          33 – 3 = 30        30 – 7 = 23
P4           30          30 – 1 = 29        29 – 9 = 20
P5           6           6 – 2 = 4          4 – 2 = 2
P6           21          21 – 6 = 15        15 – 3 = 12
Now,
Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit
• In Priority Scheduling, out of all the available processes, the CPU is assigned to the process having the highest priority.
• In case of a tie, it is broken by FCFS Scheduling.
• Priority Scheduling can be used in both preemptive and non-preemptive mode.
• The waiting time for the process having the highest priority will always be zero
in preemptive mode.
• The waiting time for the process having the highest priority may not be zero in
non- preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions:
• The arrival time of all the processes is the same
• All the processes become available at the same time
Advantages-
• It considers the priority of the processes and allows the important processes to run
first.
• Priority scheduling in pre-emptive mode is best suited for real time operating
system.
Disadvantages-
• Processes with lesser priority may starve for CPU.
• There is no guarantee on response time and waiting time.
Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)
Solution-
Gantt Chart-
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Problem-02: Consider the set of 5 processes whose arrival time and burst time are given
below-
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3–2=1 1–1=0
P4 8 8–3=5 5–5=0
P5 10 10 – 4 = 6 6–2=4
Now,
• Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
• Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
Unit- II
1 Process concept
1.1 Process
1.2 Program
A program is a piece of code, which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language:
#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized. In general, a process
can be in one of the following five states at a time: new, ready, running, waiting, and terminated.
A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the
information needed to keep track of a process as listed below in the table −
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for
this process.
6 CPU registers
The various CPU registers whose contents must be saved when the process leaves the
running state, so that execution can resume correctly later.
7 CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the
process.
8 Memory management information
This includes information such as the page table, memory limits, and segment table,
depending on the memory scheme used by the operating system.
9 Accounting information
This includes the amount of CPU time used for process execution, time limits,
execution ID, etc.
10 I/O status information
This includes the list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −
2 producer-consumer problem
• The producer-consumer problem arises when multiple
threads or processes attempt to share a common buffer
or data structure. Producers produce items and place
them in the buffer, while consumers retrieve items from
the buffer and process them.
• The challenge lies in coordinating the producers and consumers efficiently to avoid
problems like data inconsistency. Access to this shared buffer is a critical
section, so we need a synchronization method so that while the producer is producing
the data, the consumer doesn't interfere in between, and vice versa.
• The other challenge that can arise is the producer should
not insert data when the buffer is full i.e., buffer overflow
condition. Similarly, the consumer must not remove data
when the buffer is empty i.e., buffer underflow. As we have
been given limited slots buffer where we need to
synchronize, it is also known as the bounded buffer problem.
• Paradigm for cooperating processes;
▪ producer process produces information that is
consumed by a consumer process.
• We need a buffer of items that can be filled by producer
and emptied by consumer.
• Shared memory solution to the bounded buffer:
Solution-1
Only 9 values are stored at a time; one cell of the buffer is deliberately left unused by the
producer so that a full buffer can be distinguished from an empty one.
Solution-2:
Producer Consumer Problem (for 10 Values)
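The bounded-buffer idea of Solution-2, where all 10 slots are usable, can be sketched with Python threads. The semaphores `empty` and `full` count free and filled slots (playing the role of the count variable), and a lock protects the buffer indices; the variable names here are illustrative.

```python
import threading

N = 10
buffer = [None] * N
in_ptr = out_ptr = 0
empty = threading.Semaphore(N)   # counts free slots: producer blocks when 0 (overflow guard)
full = threading.Semaphore(0)    # counts filled slots: consumer blocks when 0 (underflow guard)
mutex = threading.Lock()         # protects the buffer and its indices
consumed = []

def producer(items):
    global in_ptr
    for item in items:
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer[in_ptr] = item
            in_ptr = (in_ptr + 1) % N
        full.release()           # one more filled slot

def consumer(n):
    global out_ptr
    for _ in range(n):
        full.acquire()           # wait for a filled slot
        with mutex:
            consumed.append(buffer[out_ptr])
            out_ptr = (out_ptr + 1) % N
        empty.release()          # one more free slot

p = threading.Thread(target=producer, args=(range(100),))
c = threading.Thread(target=consumer, args=(100,))
p.start(); c.start(); p.join(); c.join()
```

With a single producer and a single consumer the buffer behaves as a FIFO, so the items come out in the order they were produced.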
3.2 Progress
Progress means that if one process doesn't need to enter the critical section, it should not stop
other processes from getting into the critical section. If no process is executing in its critical
section and some processes wish to enter their critical sections, then only those processes that
are not executing in their remainder sections can participate in deciding which will enter its
critical section next, and this selection cannot be postponed indefinitely.
Algorithm 1
• Satisfies mutual exclusion
The turn is equal to either i or j and hence one of Pi
and Pj can enter the critical section
• Does not satisfy progress
Example: Pi finishes the critical section and then
gets stuck indefinitely in its remainder section.
Then Pj enters the critical section, finishes, and
then finishes its remainder section. Pj then tries to
enter the critical section again, but it cannot since
turn was set to i by Pj in the previous iteration.
Since Pi is stuck in the remainder section, turn will
be equal to i indefinitely and Pj can’t enter although
it wants to. Hence no process is in the critical
section and hence no progress.
3. Peterson’s Solution
P0:
while(1)
{
    flag[0] = true;
    turn = 1;
    while (flag[1] == true && turn == 1);   // busy wait
    /* Critical Section */
    flag[0] = false;
    /* Remainder Section */
}
P1:
while(1)
{
    flag[1] = true;
    turn = 0;
    while (flag[0] == true && turn == 0);   // busy wait
    /* Critical Section */
    flag[1] = false;
    /* Remainder Section */
}
Explanation of Peterson's Algorithm
The algorithm utilizes two main variables, which are a turn and a flag. turn is an integer variable that indicates whose
turn it is to enter the critical section. flag is an array of Boolean variables. Each element in the array represents the
intention of a process to enter the critical section. Let us look at the explanation of how Peterson's algorithm in OS works:
• Firstly, both processes set their flag variables to indicate that they don't currently want to enter the critical section. By
default, the turn variable is set to the ID of one of the processes, it can be 0 or 1. This will indicate that it is initially
the turn of that process to enter the critical section.
• Both processes set their flag variable to indicate their intention to enter the critical section.
• Then each process sets the turn variable to the ID of the other process before waiting. This
politely gives the other process the first chance to enter if both arrive at the same time. For
example, Process 0 sets turn = 1, and Process 1 sets turn = 0.
• Both processes enter a loop where they will check the flag of the other process and wait if necessary. The loop
continues as long as the following two conditions are met:
i. The other process has expressed its intention to enter the critical section (flag[1 - processID] == true for Process
processID).
ii. It is currently the turn of the other process (turn == 1 - processID).
• If both conditions are satisfied, the process waits and yields the CPU to the other process. This ensures that the other
process has an opportunity to enter the critical section.
• Once a process successfully exits the waiting loop, then it can enter the critical section. It can also access the shared
resource without interference from the other process. It can perform any necessary operations or modifications within
this section.
• After completing its work in the critical section, the process resets its flag variable. Resetting is required to indicate
that this process no longer wants to enter the critical section (flag[processID] = false). This step ensures that the
process can enter the waiting loop again correctly if needed.
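Peterson's algorithm can be exercised with two Python threads. This is an illustrative sketch only: it relies on CPython's GIL to make each read and write of `flag` and `turn` effectively atomic and sequentially consistent, whereas a C implementation on real hardware would additionally need memory barriers. The worker and counter names are assumptions.

```python
import sys
import threading

sys.setswitchinterval(1e-4)      # switch threads often so the busy-wait stays cheap

flag = [False, False]
turn = 0
counter = 0                      # shared variable protected by Peterson's algorithm

def worker(i, rounds):
    global turn, counter
    other = 1 - i
    for _ in range(rounds):
        flag[i] = True           # declare intent to enter the critical section
        turn = other             # give the other process the first chance
        while flag[other] and turn == other:
            pass                 # busy wait
        counter += 1             # critical section (exclusive access)
        flag[i] = False          # exit section

t0 = threading.Thread(target=worker, args=(0, 200))
t1 = threading.Thread(target=worker, args=(1, 200))
t0.start(); t1.start(); t0.join(); t1.join()
```

Without the entry/exit protocol, the unprotected `counter += 1` (a load, add, store sequence) could lose updates when the two threads interleave.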
4 Semaphore
Semaphores are integer variables that are used to solve the critical section problem by using
two atomic operations, wait and signal that are used for process synchronization.
• Wait – P( )
The wait operation decrements the value of its argument S if S is positive. If S is zero
or negative, the process busy-waits until S becomes positive.
wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}
• Signal – V( )
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
4.1 Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −
• Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate the resource access, where the semaphore count is
the number of available resources. If the resources are added, semaphore count
automatically incremented and if the resources are removed, the count is decremented.
• Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and
1. The wait operation succeeds only when the semaphore is 1 (setting it to 0), and the
signal operation sets it back to 1. It is sometimes easier to implement binary
semaphores than counting semaphores.
1. The P operation is also called wait, sleep, or down operation, and the V operation is also
called signal, wake-up, or up operation.
2. Both operations are atomic. A semaphore used for mutual exclusion is initialized to 1,
while a counting semaphore is initialized to the number of available resources. Here atomic
means that the read, modify, and update of the variable happen together with no pre-emption,
i.e. in between the read, modify, and update no other operation is performed that may change
the variable.
Solution
Variables used –
• mutex – a semaphore (initialized to 1) used for mutual exclusion while readers_count is updated
• wrt – a semaphore (initialized to 1) shared by readers and writers; a writer holds it while
writing, the first reader takes it and the last reader releases it
• readers_count – the number of readers currently accessing the shared resource
Readers Problem
while(TRUE)
{
    // a reader wishes to enter the critical section
    wait(mutex);              // mutual exclusion before readers_count is changed
    readers_count++;
    if(readers_count == 1)
        wait(wrt);            // the first reader locks out writers
    signal(mutex);

    // reading is performed

    wait(mutex);
    readers_count--;
    if(readers_count == 0)
        signal(wrt);          // the last reader restores wrt so writers can write
    signal(mutex);
}
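The first readers-writers protocol can be run with Python threads, mapping wait/signal onto `acquire`/`release`. This is a minimal sketch; the thread counts and variable names are illustrative.

```python
import threading

mutex = threading.Semaphore(1)   # protects readers_count
wrt = threading.Semaphore(1)     # held by a writer, or by the group of readers
readers_count = 0
shared_data = 0
reads = []

def reader(n):
    global readers_count
    for _ in range(n):
        mutex.acquire()
        readers_count += 1
        if readers_count == 1:
            wrt.acquire()        # first reader locks out writers
        mutex.release()
        reads.append(shared_data)  # reading happens here
        mutex.acquire()
        readers_count -= 1
        if readers_count == 0:
            wrt.release()        # last reader lets writers in again
        mutex.release()

def writer(n):
    global shared_data
    for _ in range(n):
        wrt.acquire()
        shared_data += 1         # writing happens here, with exclusive access
        wrt.release()

threads = [threading.Thread(target=reader, args=(50,)),
           threading.Thread(target=writer, args=(50,))]
for t in threads: t.start()
for t in threads: t.join()
```

Every value a reader observes is a consistent intermediate count, never a torn update, because writers hold `wrt` exclusively.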
• A philosopher can only eat if both the immediate left and right chopsticks are available. If
both are not available, the philosopher puts down the chopstick he holds (left or right) and
starts thinking again.
• The dining philosopher demonstrates a large class of concurrency control problems hence
it's a classic synchronization problem.
• When a philosopher thinks, he does not interact with others. When he gets hungry, he tries
to pick up the two chopsticks that are near to him.
• For example, philosopher 1 will try to pick chopsticks 1 and 2. But the philosopher can
pickup only one chopstick at a time. He cannot take a chopstick that is already in the hands
of his neighbour.
• The philosopher starts to eat when he has both chopsticks in his hands. After eating, the
philosopher puts down both chopsticks and starts to think again.
7.1 Approach to Solution
Here is a simple approach to solving it:
1. We model each fork as a binary semaphore. That is, there is an array of five semaphores,
fork[0..4], each initialized to 1.
2. When philosopher i picks up a fork, he calls wait(fork[i]), which means the i'th fork has
been acquired.
3. When a philosopher is done eating, he calls signal(fork[i]). The fork is released, and any
other philosopher can pick it up and start eating.
do {
    wait(fork[i]);              // pick up left fork
    wait(fork[(i + 1) % 5]);    // pick up right fork
    // eat noodles
    signal(fork[i]);            // put down left fork
    signal(fork[(i + 1) % 5]);  // put down right fork
    // think
} while(1);
7.2 The drawback of the above solution of the dining philosopher problem
From the above solution of the dining philosopher problem, we have proved that no two
neighboring philosophers can eat at the same point in time. The drawback of the above solution
is that this solution can lead to a deadlock condition. This situation happens if all the
philosophers pick their left chopstick at the same time, which leads to the condition of deadlock
and none of the philosophers can eat.
o At most four philosophers should be allowed at the table at the same time. In this case,
chopstick C4 remains available for philosopher P3, so P3 can eat; when he finishes, he puts
down both chopsticks C3 and C4, i.e. semaphores C3 and C4 are incremented to 1.
Philosopher P2, who was holding chopstick C2, now also has chopstick C3 available, so he
can eat in turn, put down his chopsticks, and enable the other philosophers to eat.
o A philosopher at an even position should pick up the right chopstick first and then the left,
while a philosopher at an odd position should pick up the left chopstick first and then the
right.
o A philosopher should be allowed to pick up his chopsticks only if both the left and the right
chopstick are available at the same time.
o The first four philosophers (P0, P1, P2, and P3) pick up the left chopstick first and then the
right, whereas the last philosopher P4 picks up the right chopstick first and then the left.
P4's right chopstick is C0, which is already held by philosopher P0 (its semaphore value is
0), so P4 blocks on C0 and chopstick C4 remains free. Philosopher P3 therefore has both his
left (C3) and right (C4) chopsticks available; he eats, puts down both chopsticks once
finished, and lets the others eat, which removes the possibility of deadlock.
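The last resolution above, making one philosopher pick up his forks in the opposite order, can be demonstrated with Python threads. Because the circular wait is broken, all five threads always terminate. The round count and names are illustrative.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one binary semaphore per fork
meals = [0] * N

def philosopher(i, rounds):
    first, second = i, (i + 1) % N          # left fork, then right fork
    if i == N - 1:
        first, second = second, first       # last philosopher reverses the order,
                                            # breaking the circular-wait condition
    for _ in range(rounds):
        forks[first].acquire()
        forks[second].acquire()
        meals[i] += 1                       # eat
        forks[second].release()
        forks[first].release()              # think

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

If every philosopher instead acquired the left fork first, the program could hang forever when all five grabbed their left forks simultaneously.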
• Independent process.
• Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. Though one can think that those
processes, which are running independently, will execute very efficiently, in reality, there are
many situations when co-operative nature can be utilized for increasing computational speed,
convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of co-operation between them. Inter process
communication (IPC) is a process that allows different processes of a computer system to share
information. IPC lets different programs run in parallel, share data, and communicate with each
other. It's important for two reasons: first, it speeds up the execution of tasks, and second, it
ensures that the tasks run correctly and in the intended order.
1. Shared Memory
2. Message passing
• Communication between processes using shared memory requires processes to share some
variable, and it completely depends on how the programmer will implement it.
• Suppose process1 and process2 are executing simultaneously, and they share some resources or use
some information from another process. Process1 generates information about certain computations or
resources being used and keeps it as a record in shared memory.
• When process2 needs to use the shared information, it will check in the record stored in shared memory
and take note of the information generated by process1 and act accordingly. Processes can use shared
memory for extracting information as a record from another process as well as for delivering any
specific information to other processes.
• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be of fixed size or of variable size. If it is of fixed size, it is easy for an
OS designer but complicated for a programmer and if it is of variable size then it is easy for
a programmer but complicated for the OS designer. A standard message can have two
parts: header and body.
The header part is used for storing the message type, destination id, source id, message length,
and control information. The control information includes things like what to do if the buffer
space runs out, the sequence number, and the priority. Generally, messages are sent in FIFO style.
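A minimal sketch of these primitives, using a Python `Queue` as the FIFO communication link. The `send`/`receive` names and the header fields shown are illustrative, not taken from a real IPC API.

```python
import queue

mailbox = queue.Queue()          # FIFO communication link between two processes

def send(message, destination):
    destination.put(message)     # send(message, destination) primitive

def receive(source):
    return source.get()          # receive(message, host) primitive (blocking)

# each message has a simple header/body structure, as described above
for i in range(3):
    send({"header": {"source": "P1", "dest": "P2", "seq": i},
          "body": f"msg-{i}"}, mailbox)

msgs = [receive(mailbox) for _ in range(3)]
```

Because the link is FIFO, the receiver sees the messages in exactly the order they were sent.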
10 Deadlock
Every process needs some resources to complete its execution. However, the resource is
granted in a sequential order.
Let us assume that there are three processes P1, P2 and P3. There are three different
resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to
P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it
can't complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also
stops its execution because it can't continue without R3. P3 then demands R1, which is being
held by P1, therefore P3 also stops its execution. The three processes now wait for each other
in a cycle: a deadlock.
1. Mutual Exclusion
• A resource can only be shared in a mutually exclusive manner. It implies that two
processes cannot use the same resource at the same time.
2. Hold and Wait
• A process waits for some resources while holding another resource at the same time.
• Hold and wait occurs when a process holds one resource and waits to acquire another
resource that it needs, but cannot proceed because another process is keeping that
resource. Each such process holds at least one resource while requesting another.
3. No preemption
• In the deadlock context, no preemption means a resource cannot be forcibly taken away
from a process; once allocated, it is released only when the process gives it up
voluntarily.
• Preemption means temporarily interrupting a task or process to execute another task or
process. Preemption can occur due to an external event or internally within the system.
If we could take the resource away from the process that is causing deadlock, we could
avoid deadlock. But is it a good approach? The answer is NO, because that can leave the
system in an inconsistent state.
• For example, if we take memory away from a process whose data was in the middle of
being stored and assign it to some other process, that will lead to an inconsistent
state.
4. Circular Wait
All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.
Basically, with the help of this method, the operating system keeps an eye on each
allocation and makes sure that the allocation does not cause any deadlock in the system.
11 Deadlock Prevention
If we simulate deadlock with a table standing on its four legs, then we can also simulate the
four conditions which, when they occur simultaneously, cause the deadlock.
However, if we break one of the legs of the table, the table will definitely fall. The same
happens with deadlock: if we can violate one of the four necessary conditions and not let
them occur together, then we can prevent the deadlock.
11.1.1 Spooling
For a device like a printer, spooling can work. There is a memory associated with the printer
which stores the jobs from each process. The printer then collects the jobs and prints each
one of them in FCFS order. With this mechanism, a process doesn't have to wait for the
printer; it can continue whatever it was doing and collect the output later when it is produced.
We cannot force a resource to be used by more than one process at the same time, since that
would not be fair and some serious performance problems may arise. Therefore, we cannot
practically violate mutual exclusion.
11.2 Hold and Wait
The hold and wait condition arises when a process holds a resource while waiting for some
other resource to complete its task. Deadlock occurs because there can be more than one
process holding one resource and waiting for another in a cyclic order.
• However, we have to find some mechanism by which a process either doesn't hold any
resource or doesn't wait. That means a process must be assigned all the necessary resources
before its execution starts, and it must not wait for any resource once execution has begun.
• This could be implemented if a process declared all its required resources initially.
However, although this sounds practical, it can't really be done in a computer system,
because a process can't determine its necessary resources in advance.
• A process is a set of instructions executed by the CPU. Each instruction may demand
multiple resources at multiple times. The need cannot be fixed by the OS in advance.
11.3 No Preemption
• Deadlock arises partly because a resource can't be taken away from a process once it has
been allocated. However, if we take the resource away from the process which is causing
the deadlock, then we can prevent the deadlock.
• This is not a good approach in general, since taking away a resource that is in use can make
all the work the process has done so far inconsistent.
• Consider a printer being used by some process. If we take the printer away from that
process and assign it to another, the output already printed can become inconsistent and
ineffective; moreover, the process can't resume printing from where it left off, which
causes performance inefficiency.
13 Deadlock Avoidance in OS
• The operating system avoids Deadlock by knowing the maximum resource requirements
of the processes initially, and also, the Operating System knows the free resources available
at that time.
• The operating system tries to allocate the resources according to the process requirements
and checks if the allocation can lead to a safe state or an unsafe state. If the resource
allocation leads to an unsafe state, then the Operating System does not proceed further with
the allocation sequence.
Safe State and Unsafe State
Safe State – A system is in a safe state if the Operating System can satisfy the maximum
resource requirements of all the processes in some order, so that every process is able to run
to completion. A safe state never leads to deadlock.
Unsafe State – If the Operating System cannot guarantee such an order, so that the processes'
resource requests may lead to a deadlock, the system is said to be in an unsafe state.
An unsafe state does not necessarily cause deadlock; it may or may not.
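The safe-state test described above is the safety algorithm at the heart of Banker's algorithm: try to find some order in which every process can obtain its maximum need, finish, and release its resources. A sketch follows; the function name `is_safe` is illustrative, and the numbers in the test below are a standard 5-process, 3-resource textbook instance, not data from these notes.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: return (safe?, one possible completion order)."""
    n, m = len(allocation), len(available)
    # need = maximum demand still outstanding for each process
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)        # resources currently free
    finished = [False] * n
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # process i can finish if its remaining need fits in what is free
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i finishes and releases everything
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order
```

If no completion order exists, the state is unsafe and a cautious allocator would refuse the request that led to it.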
A traditional operating system such as Windows doesn’t deal with deadlock recovery as it is
a time and space-consuming process. Real-time operating systems use Deadlock recovery.
1. Killing the process – Kill all the processes involved in the deadlock, or kill them one by one:
after killing each process, check for deadlock again and repeat until the system recovers.
Killing processes breaks the circular wait condition.
2. Resource Preemption – Resources are preempted from the processes involved in the deadlock and
allocated to other processes, so that there is a possibility of recovering the system from the
deadlock. In this case, the preempted processes may suffer starvation.
3. Concurrency Control – Concurrency control mechanisms are used to prevent data inconsistencies
in systems with multiple concurrent processes. These mechanisms ensure that concurrent processes
do not access the same data at the same time, which can lead to inconsistencies and errors.
Deadlocks can occur in concurrent systems when two or more processes are blocked, waiting for
each other to release the resources they need. This can result in a system-wide stall, where no
process can make progress. Concurrency control mechanisms can help prevent deadlocks by
managing access to shared resources and ensuring that concurrent processes do not interfere with
each other.
Then,
Available = [0 0] + [0 1] = [0 1]
Step-02:
Available = [0 1] + [1 0] = [1 1]
Step-03:
Available = [1 1] + [0 1] = [1 2]
Thus,
References
1. https://www.geeksforgeeks.org/deadlock-prevention/
2. https://www.javatpoint.com/os-deadlock-avoidance
3. https://www.scaler.com/topics/operating-system/deadlock-avoidance-in-os/
4. https://www.tutorialspoint.com/deadlock-avoidance
5. https://www.studytonight.com/operating-system/deadlock-avoidance-in-operating-system