Compiled Operating Systems Notes
b) Layered architecture of operating system
- It breaks up the operating system into different layers and retains much more
control over the system.
- The OS is split into various layers, with each layer performing a different
function.
c) Virtual machine architecture of operating system.
- It is created by a virtual machine operating system, which makes a single real
machine appear to be several machines.
d) Client/server architecture of operating system
- An architecture of a computer network in which many clients (remote processors)
request and receive service from a centralized server (host computer).
- It is a computing model in which the server hosts, delivers, and manages most of
the resources and services requested by the client.
FEATURES OF TIME-SHARING OS
✓ Multiple online users can use the same computer at the same
time.
✓ End-users feel that they monopolize the computer system.
✓ Better interaction between users and computers.
✓ It can process a large number of tasks quickly.
• autopilot systems.
• spacecraft.
✓ Soft real-time systems: - try to reach deadlines but do not fail if a
deadline is missed.
f) Multitasking
-The multitasking OS refers to a logical extension of the multiprogramming
operating system, which allows users to run many programs simultaneously.
g) Multiprogramming
- Multiprogramming OS is the ability of an operating system to execute more than
one program on a single-processor machine.
a) Interactivity.
- Interactivity refers to the ability of users to interact with a computer system.
b) Real-Time Systems
- Systems that process data and events with critically defined time
constraints.
- In such systems, the operating system typically reads from and reacts to
sensor data.
c) Distributed Environment
- refers to multiple independent CPUs or processors in a computer system.
d) Spooling
- Spooling is an acronym for simultaneous peripheral operations on line. Spooling
refers to putting data of various I/O jobs in a buffer. This buffer is a special area in
memory or hard disk which is accessible to I/O devices.
e) Job control
- Refers to the control of multiple tasks or jobs on a computer system, ensuring
that they each have access to adequate resources to perform correctly.
Command language
- A command language is a language used for executing a series of command
instructions that would otherwise be entered at the prompt.
PROCESS MANAGEMENT.
5. What’s a process?
-A process is basically a program in execution.
6. Define the following terms in a process.
a) Stack
-The process Stack contains the temporary data such as method/function
parameters, return address, and local variables.
b) Heap
- This is memory dynamically allocated to the process during its runtime.
c) Text
- This includes the compiled program code, together with the current activity
represented by the value of the Program Counter and the contents of the
processor's registers.
d) Data
-This section contains the global and static variables.
7. What’s a process model?
- Processes of the same nature are classified together into a process model.
- A thread is a smaller unit of execution within a process that shares the same
memory and resources as the process.
10. What are the features of Threads in an Operating System?
• Each thread has its own stack and register.
• Threads can directly communicate with each other as they share the same
address space.
• One system call is capable of creating more than one thread.
• Threads share data, memory, resources, files.
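To make these features concrete, here is a minimal Python sketch (illustrative only, not from the original notes): several threads share one address space, so the shared variable must be protected by a lock.

```python
import threading

counter = 0
lock = threading.Lock()          # protects the shared variable below

def worker(n):
    """Each thread runs this function; all threads share `counter`."""
    global counter
    for _ in range(n):
        with lock:               # threads share one address space, so
            counter += 1         # updates must be synchronized

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()                    # each thread gets its own stack and registers
for t in threads:
    t.join()                     # wait for all threads to finish

print(counter)                   # 4000: all threads updated the same data
```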
11. What’s the similarity between Process and Thread?
• Like processes, threads share the CPU, and only one thread is active (running) at a
time.
• Like processes, threads within a process execute sequentially.
• Like processes, threads can create children.
• And like processes, if one thread is blocked, another thread can run.
12. What’s the difference between Process and Thread?
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another. Note that processes
might or might not assist one another, because processes may originate from
different users.
13. Give the two thread levels in an operating system.
a) User-Level Threads- User-level threads are implemented in user-level libraries rather
than via system calls, so thread switching does not need to call the operating
system or cause an interrupt to the kernel.
Advantages of User-Level thread
• Does not require modification to operating systems.
• Simple Representation: Each thread is represented simply by a PC,
registers, stack and a small control block, all stored in the user process
address space.
• Simple Management: This simply means that creating a thread,
switching between threads and synchronization between threads can all
be done without intervention of the kernel.
• Fast and Efficient: Thread switching is not much more expensive than a
procedure call.
Disadvantages of User-Level thread
• There is a lack of coordination between threads and operating system
kernel.
• User-level threads require non-blocking system calls, i.e., a
multithreaded kernel.
b) Kernel-Level Threads- In this method, the kernel knows about and manages the
threads. No runtime system is needed in this case.
Process vs Thread
S.NO  Process                                     Thread
3.    It takes more time for creation.            It takes less time for creation.
13.   A system call is involved in creating it.   No system call is involved; it is created using APIs.
A thread shares information like data segment, code segment files etc. with its peer threads
while it contains its own registers, stack, counter etc.
The two main types of threads are user-level threads and kernel-level threads.
The user-level threads are implemented by users and the kernel is not aware of the existence
of these threads. It handles them as if they were single-threaded processes. User-level threads
are small and much faster than kernel level threads. They are represented by a program counter
(PC), stack, registers and a small process control block. Also, there is no kernel involvement in
synchronization for user-level threads.
• User-level threads are easier and faster to create than kernel-level threads. They can
also be more easily managed.
• User-level threads can be run on any operating system.
• There are no kernel mode privileges required for thread switching in user-level threads.
Disadvantages of User-Level Threads
• Multithreaded applications in user-level threads cannot use multiprocessing to their
advantage.
• The entire process is blocked if one user-level thread performs blocking operation.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly and the thread management
is done by the kernel. The context information for the process as well as the process threads is
all managed by the kernel. Because of this, kernel-level threads are slower than user-level
threads.
• A mode switch to kernel mode is required to transfer control from one thread to
another in a process.
• Kernel-level threads are slower to create as well as manage as compared to user-level
threads.
DEVICE MANAGEMENT
OBJECTIVE OF DEVICE MANAGEMENT
The objectives of device (I/O) management are to efficiently and effectively manage input and
output operations within a computer system. Here are the key objectives:
1. Resource Allocation: Efficiently allocate system resources such as CPU time, memory,
and device access to handle input and output requests from various processes and
devices concurrently.
2. Device Recognition and Configuration: Automatically detect connected devices,
configure them for operation, and load the necessary device drivers to enable
communication between the operating system and the devices.
3. I/O Scheduling: Optimize the scheduling of input and output operations to minimize
latency, maximize throughput, and ensure fair access to shared resources among
competing processes.
4. Error Handling: Detect, report, and recover from errors that may occur during input
and output operations, ensuring data integrity, system reliability, and uninterrupted
operation.
5. Performance Optimization: Utilize techniques such as caching, buffering, and
prefetching to improve I/O performance, reduce latency, and enhance overall system
responsiveness.
6. Concurrency and Parallelism: Support concurrent execution of multiple input and
output operations to fully utilize system resources and improve system throughput.
7. Power Management: Implement power-saving mechanisms to reduce energy
consumption and extend battery life in mobile devices by dynamically adjusting device
power states based on workload and usage patterns.
8. Security and Access Control: Enforce access control policies to restrict access to I/O
devices based on user privileges, preventing unauthorized access and protecting
sensitive data from malicious or accidental manipulation.
9. Device Independence: Provide a uniform interface for accessing I/O devices,
abstracting the hardware details to ensure compatibility with different types of devices
and facilitate software portability.
10. Scalability and Extensibility: Design I/O management mechanisms that scale to
accommodate growing system demands and support the addition of new devices
without requiring significant modifications to the system architecture.
By fulfilling these objectives, device management systems ensure efficient, reliable, and secure
handling of input and output operations, thereby enhancing the overall performance and
usability of computer systems.
2. Uniform Naming
• Definition: Files and devices are named in a uniform way, so that the same naming
scheme works regardless of the underlying device.
• Importance: This simplifies the user's and programmer's task by providing a uniform
way to access different types of devices and files.
3. Error Handling
• Definition: The I/O system must handle errors robustly, providing mechanisms to
report, log, and recover from errors.
• Importance: Ensures system reliability and helps in troubleshooting problems without
crashing the system.
4. Synchronous vs. Asynchronous Operations
• Synchronous: I/O operations that block the requesting process until the operation is
completed.
• Asynchronous: I/O operations that allow the requesting process to continue while the
operation is being performed.
• Importance: Understanding and utilizing these concepts help optimize the
performance and responsiveness of applications.
5. Buffering
• Definition: Temporary storage used to hold data while it is being transferred between
two locations, usually between an application and a hardware device.
• Importance: Buffering can help smooth out differences in data transfer rates and
handle bursts of data.
6. Caching
• Definition: A technique used to store frequently accessed data in faster storage (like
RAM) to speed up access.
• Importance: Significantly improves the performance of I/O operations by reducing
access times.
7. Spooling
• Definition: Simultaneous Peripheral Operations On-Line, a process where data is
temporarily held to be used and executed by a device, program, or the system.
• Importance: Commonly used in print spooling to manage print jobs in a queue.
8. Device Drivers
• Definition: Specialized software modules that allow the operating system to
communicate with hardware devices.
• Importance: Essential for enabling the operating system to support a wide range of
hardware without needing to understand the details of each device.
9. Direct Memory Access (DMA)
• Definition: A feature that allows certain hardware subsystems to access main system
memory independently of the CPU.
• Importance: Reduces CPU overhead and increases data transfer rates by allowing
devices to directly transfer data to/from memory.
10. Interrupt Handling
• Definition: Mechanisms that allow devices to signal the CPU that they need attention.
• Importance: Provides efficient I/O operations by allowing the CPU to be alerted and
respond to I/O events, instead of constantly polling devices.
11. I/O Scheduling
• Definition: The method by which the operating system decides the order in which I/O
operations will be executed.
• Importance: Critical for optimizing the performance and responsiveness of the system,
especially when multiple I/O operations are requested concurrently.
12. Virtualization of Devices
• Definition: The process of abstracting physical hardware into virtual devices that can be
managed by the operating system.
• Importance: Enhances flexibility, isolation, and security in managing hardware
resources.
Summary
I/O software is designed to provide an efficient, uniform, and reliable way for applications to
interact with hardware devices. Understanding and implementing these principles helps
ensure that the system can effectively manage the diverse and complex nature of I/O
operations.
• User Level Libraries: These provide a simple interface to the user program to perform
input and output. For example, stdio is a library provided by the C and C++
programming languages.
• Kernel Level Modules: These allow device drivers to interact with the device
controllers, together with the device-independent I/O modules used by the device
drivers.
Following are some of the services provided:
i. Scheduling - Kernel schedules a set of I/O requests to determine a good order
in which to execute them. When an application issues a blocking I/O system
call, the request is placed on the queue for that device.
ii. Buffering - Kernel I/O Subsystem maintains a memory area known as buffer
that stores data while transferring between two devices or between a device
with an application operation. Buffering is done to cope with a speed mismatch
between the producer and consumer of a data stream or to adapt between
devices with different data transfer sizes.
iii. Caching - The kernel maintains cache memory, a region of fast memory that
holds copies of data. Access to the cached copy is more efficient than access to
the original.
iv. Spooling and Device Reservation - A spool is a buffer that holds output for a
device, such as a printer, that cannot accept interleaved data streams. The
spooling system copies the queued spool files to the printer one at a time. In
some operating systems, spooling is managed by a system daemon process. In
other operating systems, it is handled by an in-kernel thread.
v. Error Handling - An operating system with protected memory can guard
against many hardware and application errors.
• Hardware: This layer includes the actual hardware and the hardware controllers that
interact with the device drivers and drive the hardware.
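As an illustration of the buffering service (ii) above, the following Python sketch (names and timings invented for the example) uses a bounded queue to absorb the speed mismatch between a fast producer, standing in for a device, and a slow consumer, standing in for an application:

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=4)     # bounded buffer, kernel-style

def producer():
    """Fast side: produces data quicker than the consumer drains it."""
    for i in range(8):
        buf.put(i)               # blocks when the buffer is full
        time.sleep(0.01)

def consumer():
    """Slow side: the buffer absorbs the speed mismatch."""
    for _ in range(8):
        item = buf.get()         # blocks when the buffer is empty
        time.sleep(0.05)
        print("consumed", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```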
Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
A device driver typically performs the following jobs:
• Accepts requests from the device-independent software above it.
• Interacts with the device controller to initiate and complete I/O operations.
• Checks for and reports errors from the device.
5. Interrupt Service Routine (ISR):
• The ISR executes to handle the interrupt. This might involve reading data from a
device, processing an event, or performing necessary operations to address the
interrupt.
• The ISR must be efficient and quick to ensure the system's responsiveness.
6. Restore State and Resume:
• After handling the interrupt, the ISR restores the processor's state saved before
the interrupt occurred.
• The processor resumes execution of the interrupted program at the point where
it left off.
Disk Performance Parameters
Disk performance is crucial for a computer system's overall efficiency and speed. The key
parameters affecting disk performance include:
1. Seek Time:
• The time it takes for the disk’s read/write head to move to the track where the
desired data is located.
• Consists of the time to move the head across the disk surface.
2. Rotational Latency:
• The time it takes for the desired disk sector to rotate under the read/write
head.
• Depends on the disk's rotational speed, measured in revolutions per minute
(RPM).
3. Transfer Time:
• The time it takes to transfer data once the read/write head is positioned
correctly.
• Depends on the data transfer rate of the disk, typically measured in megabytes
per second (MB/s).
4. Disk Access Time:
• The total time it takes to read or write data, combining seek time and rotational
latency.
• Access time = Seek Time + Rotational Latency
5. Throughput:
• The amount of data that can be read from or written to the disk in a given
period.
• Measured in megabytes per second (MB/s) or gigabytes per second (GB/s).
6. I/O Operations Per Second (IOPS):
• The number of read/write operations a disk can perform per second.
• Important for evaluating the performance of disks in environments with a high
volume of small transactions.
7. Queue Depth:
• The number of I/O operations that can be queued at the disk controller.
• Higher queue depth can improve throughput but may increase latency.
8. Cache Size:
• The amount of memory on the disk used to store frequently accessed data.
• Larger caches can improve performance by reducing the need to access the disk
media for frequently used data.
9. Average Seek Time:
• The average time for a disk head to move to any random track.
• Typically calculated as a weighted average of seek times for different distances.
10. Mean Time Between Failures (MTBF):
• A measure of disk reliability, indicating the average time the disk is expected to
operate before failing.
• Important for assessing the longevity and durability of the disk.
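Putting the parameters above together, the average time to service one request is roughly seek time + rotational latency + transfer time. A minimal Python sketch (the drive figures below are assumed for illustration, not measured):

```python
def disk_access_time(avg_seek_ms, rpm, transfer_mb_s, request_kb):
    """Estimate the average time to service one disk request, in ms."""
    rotational_latency = (60_000 / rpm) / 2                   # half a revolution
    transfer_time = request_kb / 1024 / transfer_mb_s * 1000  # ms for the data
    return avg_seek_ms + rotational_latency + transfer_time

# Example: 9 ms average seek, 7200 RPM, 150 MB/s, 4 KB request
print(round(disk_access_time(9, 7200, 150, 4), 2))   # ~13.19 ms
```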
Performance Optimization
• Disk Scheduling Algorithms: Optimize the order of I/O requests to minimize seek time
and improve throughput (e.g., FCFS, SSTF, SCAN).
• Defragmentation: Rearrange fragmented data to improve access times (primarily for
HDDs).
• RAID Configurations: Combine multiple disks to improve redundancy, performance, or
both.
• Caching: Use disk caches and buffers to store frequently accessed data and improve
performance.
• Proper Sizing and Allocation: Ensure adequate disk space and appropriate partitioning
to optimize performance.
Disk Scheduling
Different types of scheduling algorithms are as follows.
1. First Come, First Served scheduling algorithm (FCFS). The simplest form of scheduling
is first-in-first-out (FIFO) scheduling, which processes items from the queue in
sequential order.
2. Shortest Seek Time First (SSTF) algorithm. The SSTF policy is to select the disk I/O
request that requires the least movement of the disk arm from its current position.
3. SCAN scheduling algorithm. The SCAN algorithm has the head start at track 0 and
move towards the highest-numbered track, servicing all requests for a track as it
passes the track. The disk arm moves from one end of the disk to the other,
servicing requests in one direction, and then reverses direction at the end.
4. LOOK scheduling algorithm. Start the head moving in one direction. Satisfy the
request for the closest track in that direction; when there are no more requests in
the direction the head is traveling, reverse direction and repeat.
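The following Python sketch (illustrative; the classic request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head at track 53 is assumed) compares the total head movement of the three policies:

```python
def fcfs(requests, head):
    """Total head movement when servicing requests in arrival order."""
    total, pos = 0, head
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(requests, head):
    """Always service the pending request closest to the current head."""
    pending, total, pos = list(requests), 0, head
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

def scan(requests, head, max_track):
    """Sweep toward the highest track, then reverse at the end."""
    lower = [r for r in requests if r < head]
    if lower:                     # go to the end, then back to the lowest request
        return (max_track - head) + (max_track - min(lower))
    upper = [r for r in requests if r >= head]
    return max(upper) - head if upper else 0

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(reqs, 53))            # 640
print(sstf(reqs, 53))            # 236
print(scan(reqs, 53, 199))       # 331
```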
Disk Management
The operating system is responsible for disk management. Following are some activities
involved.
1) Disk Formatting in OS
Disk formatting is the process of preparing a storage device such as a hard drive, SSD, or USB
flash drive for data storage. This involves setting up an empty file system on the disk, which
allows an operating system (OS) to read from and write to the disk. Here's a detailed
explanation of the process and its components:
1. Low-Level Formatting (Physical Formatting).
• Low-level formatting is the process of marking the surface of a disk with sectors
and tracks, creating the physical structure of the disk.
2. High-Level Formatting (Logical Formatting).
• High-level formatting sets up a file system on a partition, writing the structures
(such as the boot block and file allocation tables) that the OS needs to store files.
Common File Systems:
FAT32: Compatible across many systems but has limitations on file and partition sizes.
NTFS: Used by Windows, supports large files, security features, and recovery
capabilities.
2) Boot Block
The boot block is a critical component of a storage device, such as a hard drive or SSD, that
plays an essential role in the booting process of a computer system. Here's a detailed
explanation:
Preemptive Scheduling:
• Preemptive scheduling is used when a process switches from running state to ready
state or from waiting state to ready state.
• The resources (mainly CPU cycles) are allocated to the process for a limited amount
of time and then taken away, and the process is placed back in the ready queue
if it still has CPU burst time remaining.
• That process stays in the ready queue till it gets its next chance to execute.
Non-Preemptive Scheduling:
• Non-preemptive Scheduling is used when a process terminates, or a process switches
from running to waiting state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the
process holds the CPU till it terminates or reaches a waiting state.
• Non-preemptive scheduling does not interrupt a process running on the CPU in the
middle of its execution.
• Instead, it waits till the process completes its CPU burst time, and then it can
allocate the CPU to another process.
Preemptive vs Non-Preemptive Scheduling
• Basic: In preemptive scheduling, the resources are allocated to a process for a
limited time. In non-preemptive scheduling, once resources are allocated to a
process, the process holds them till it completes its burst time or switches to the
waiting state.
• Interrupt: A preemptively scheduled process can be interrupted in between; a
non-preemptively scheduled process cannot be interrupted till it terminates or
switches to the waiting state.
• Starvation: In preemptive scheduling, if a high-priority process frequently arrives
in the ready queue, a low-priority process may starve. In non-preemptive
scheduling, if a process with a long burst time is running on the CPU, another
process with a shorter burst time may starve.
• Overhead: Preemptive scheduling has the overhead of scheduling the processes;
non-preemptive scheduling does not.
• Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
• Cost: Preemptive scheduling is cost-associated; non-preemptive scheduling is not.
Scheduling Criteria
• There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so
as to waste 0 CPU cycles. On a real system CPU usage should range from
40% (lightly loaded) to 90% (heavily loaded).
o Throughput - Number of processes completed per unit time. May range
from 10/second to 1/hour depending on the specific processes.
o Turnaround time - Time required for a particular process to complete,
from submission time to completion.
o Waiting time - How much time processes spend in the ready queue
waiting their turn to get on the CPU.
o Response time - The time taken in an interactive program from the
issuance of a command to the commencement of a response to that command.
In brief:
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T.): Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
(a) First Come, First Served (FCFS)
• Processes are executed in the order in which they arrive in the ready queue.
Advantages-
• It is simple and easy to understand.
• It can be easily implemented using queue data structure.
• It does not lead to starvation.
Disadvantages-
• It does not consider the priority or burst time of the processes.
• It suffers from the convoy effect, i.e., processes with longer burst times that
arrive first delay the processes with smaller burst times behind them.
Example 2:
Consider the processes P1, P2, P3 given in the below table, arrives for execution in
the same order, with Arrival Time 0, and given Burst Time,
PROCESS ARRIVAL TIME BURST TIME
P1 0 24
P2 0 3
P3 0 3
Gantt chart
P1 P2 P3
0 24 27 30
PROCESS WAIT TIME TURN AROUND TIME
P1 0 24
P2 24 27
P3 27 30
Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= 81 / 3 = 27 ms
Throughput = 3 jobs/30 sec = 0.1 jobs/sec
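The same figures can be reproduced with a short Python sketch (illustrative) that walks the ready queue in arrival order:

```python
# Processes from Example 2: (name, arrival, burst), all arriving at t = 0
procs = [("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]

time_now, waits, tats = 0, [], []
for name, arrival, burst in procs:           # FCFS: run in arrival order
    waits.append(time_now - arrival)         # time spent in the ready queue
    time_now += burst
    tats.append(time_now - arrival)          # completion - arrival

print("avg wait:", sum(waits) / len(procs))       # 17.0 ms
print("avg turnaround:", sum(tats) / len(procs))  # 27.0 ms
print("throughput:", len(procs) / time_now)       # 0.1 jobs/sec (3 jobs / 30 sec)
```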
Example 3:
Consider the processes P1, P2, P3, P4 given in the below table, arrives for execution
in the same order, with given Arrival Time and Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt chart
P1 P2 P3 P4
0 8 12 21 26
Average Waiting Time = (Total Wait Time) / (Total number of processes)= 35/4 = 8.75 ms
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
61/4 = 15.25 ms
(b) Shortest Job First (SJF)
• Process which have the shortest burst time are scheduled first.
• If two processes have the same burst time, then FCFS is used to break the tie.
• It can be used as either a non-preemptive or a pre-emptive scheduling algorithm.
• Best approach to minimize waiting time.
• Easy to implement in Batch systems where required CPU time is known in advance.
• Impossible to implement in interactive systems where required CPU time is not
known.
• The processer should know in advance how much time process will take.
• The pre-emptive mode of Shortest Job First is called Shortest Remaining Time
First (SRTF).
Advantages-
• SRTF is optimal and guarantees the minimum average waiting time.
• It provides a standard for other algorithms since no other algorithm performs
better than it.
Disadvantages-
• It cannot be implemented practically, since the burst time of the processes
cannot be known in advance.
• It leads to starvation for processes with larger burst times.
• Priorities cannot be set for the processes.
• Processes with larger burst times have poor response time.
Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting
time and average turnaround time.
Solution-
Gantt Chart-
| P4 | P1 | P3 | P5 | P2 |
0    6    7    9    12   16
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 7 7–3=4 4–1=3
P2 16 16 – 1 = 15 15 – 4 = 11
P3 9 9–4=5 5–2=3
P4 6 6–0=6 6–6=0
P5 12 12 – 2 = 10 10 – 3 = 7
Now,
• Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
• Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
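A minimal (illustrative) Python simulation of non-preemptive SJF reproduces the table above, with ties broken by arrival order:

```python
# Processes from Example-01: (name, arrival, burst)
procs = [("P1", 3, 1), ("P2", 1, 4), ("P3", 4, 2), ("P4", 0, 6), ("P5", 2, 3)]

time_now, done, results = 0, set(), {}
while len(done) < len(procs):
    # among arrived processes, pick the shortest burst (FCFS breaks ties)
    ready = [p for p in procs if p[0] not in done and p[1] <= time_now]
    if not ready:
        time_now += 1                        # CPU idle until next arrival
        continue
    name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
    time_now += burst                        # non-preemptive: run to completion
    results[name] = (time_now - arrival, time_now - arrival - burst)
    done.add(name)

for name, (tat, wait) in sorted(results.items()):
    print(name, "TAT =", tat, "wait =", wait)   # matches the table above
```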
Example-02:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and
average turnaround time.
Solution-
Gantt Chart-
| P4 | P2 | P1 | P2 | P3 | P5 | P4 |
0    1    3    4    6    8    11   16
Now,
• Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit
• Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit
Example-03:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 7
P2 1 5
P3 2 3
P4 3 1
P5 4 2
P6 5 1
If the CPU scheduling policy is shortest remaining time first, calculate the average
waiting time and average turnaround time.
Solution-
Gantt Chart-
| P1 | P2 | P3 | P4 | P3 | P6 | P5 | P2 | P1 |
0    1    2    3    4    6    7    9    13   19
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Now,
• Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit
• Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit
Example-04:
Consider the set of 3 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 9
P2 1 4
P3 2 9
If the CPU scheduling policy is SRTF, calculate the average waiting time and average
turn around time.
Solution-
Gantt Chart-
| P1 | P2 | P1 | P3 |
0    1    5    13   22
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Now,
• Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
• Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit
Example-05:
Consider the set of 4 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SRTF, calculate the waiting time of process P2.
Solution-
Gantt Chart-
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Thus,
• Turn Around Time of process P2 = 55 – 15 = 40 unit
• Waiting time of process P2 = 40 – 25 = 15 unit
(c) Round Robin (RR)
• In Round Robin scheduling, each process gets a small unit of CPU time called the
time quantum; once this time has elapsed, the process is preempted and added to
the end of the ready queue.
Advantages-
• It gives a fair share of the CPU to every process.
• It does not lead to starvation, since every process gets its turn.
Disadvantages-
• It leads to starvation for processes with larger burst time as they have to repeat
the cycle many times.
• Its performance heavily depends on time quantum.
• Priorities can not be set for the processes.
Thus, a higher value of time quantum is better in terms of the number of context
switches.
Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3
If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate
the average waiting time and average turnaround time.
Solution-
Ready Queue- P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart-
| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P1 | P5 |
0    2    4    5    7    9    11   12   13   14
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 13 13 – 0 = 13 13 – 5 = 8
P2 12 12 – 1 = 11 11 – 3 = 8
P3 5 5–2=3 3–1=2
P4 9 9–3=6 6–2=4
P5 14 14 – 4 = 10 10 – 3 = 7
Now,
• Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
• Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
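For illustration, the following Python sketch simulates Round Robin with time quantum = 2 on the processes above (arrivals admitted to the queue before a preempted process is requeued) and reproduces the exit times in the table:

```python
from collections import deque

# Processes from Example-01, sorted by arrival: (name, arrival, burst)
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 3)]
quantum = 2

remaining = {n: b for n, a, b in procs}
arrived, ready, time_now, exit_time = 0, deque(), 0, {}

def admit(upto):
    """Move newly arrived processes into the ready queue."""
    global arrived
    while arrived < len(procs) and procs[arrived][1] <= upto:
        ready.append(procs[arrived][0])
        arrived += 1

admit(0)
while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])
    time_now += run
    remaining[name] -= run
    admit(time_now)                  # arrivals enter before the requeue
    if remaining[name] > 0:
        ready.append(name)           # unfinished: back of the queue
    else:
        exit_time[name] = time_now
    if not ready and arrived < len(procs):   # CPU idle until next arrival
        time_now = procs[arrived][1]
        admit(time_now)

for name, arrival, burst in procs:
    tat = exit_time[name] - arrival
    print(name, "exit =", exit_time[name], "TAT =", tat, "wait =", tat - burst)
```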
Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average
waiting time and average turnaround time.
Solution-
Ready Queue- P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart-
| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P6 | P5 | P2 | P6 | P5 |
0    2    4    6    8    9    11   13   15   17   18   19   21
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 8 8–0=8 8–4=4
P2 18 18 – 1 = 17 17 – 5 = 12
P3 6 6–2=4 4–2=2
P4 9 9–3=6 6–1=5
P5 21 21 – 4 = 17 17 – 6 = 11
P6 19 19 – 6 = 13 13 – 3 = 10
Now,
• Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 unit
• Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit
Problem-03: Consider the set of 6 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time
P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the
average waiting time and average turnaround time.
Solution-
Ready Queue- P3, P1, P4, P2, P3, P6, P1, P4, P2, P3, P5, P4
Gantt chart-
| idle | P4 | P5 | P3 | P2 | P4 | P1 | P6 | P3 | P2 | P4 | P1 | P3 |
0      1    4    6    9    12   15   18   21   24   27   30   32   33
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 32 32 – 5 = 27 27 – 5 = 22
P2 27 27 – 4 = 23 23 – 6 = 17
P3 33 33 – 3 = 30 30 – 7 = 23
P4 30 30 – 1 = 29 29 – 9 = 20
P5 6 6–2=4 4–2=2
P6 21 21 – 6 = 15 15 – 3 = 12
Now,
• Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
• Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit
(d) Priority Scheduling
• Out of all the available processes, the CPU is allocated to the process having
the highest priority.
• The waiting time for the process having the highest priority will always be zero in
preemptive mode.
• The waiting time for the process having the highest priority may not be zero in
non-preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the
same under the following conditions-
• The arrival time of all the processes is the same.
• All the processes become available at the same time.
Advantages-
• It considers the priority of the processes and allows the important processes to
run first.
• Priority scheduling in pre-emptive mode is best suited for real time operating
system.
Disadvantages-
• Processes with lesser priority may starve for CPU.
• There is no way to predict the response time and waiting time of a process.
Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)
Solution-
Gantt Chart-
| P1 | P4 | P5 | P3 | P2 |
0    4    9    11   12   15
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 4 4–0=4 4–4=0
P2 15 15 – 1 = 14 14 – 3 = 11
P3 12 12 – 2 = 10 10 – 1 = 9
P4 9 9–3=6 6–5=1
P5 11 11 – 4 = 7 7–2=5
Now,
• Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
• Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit
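An illustrative Python sketch of non-preemptive priority scheduling (higher number = higher priority, FCFS tie-break) reproduces these figures:

```python
# Processes from Problem-01: (name, arrival, burst, priority)
procs = [("P1", 0, 4, 2), ("P2", 1, 3, 3), ("P3", 2, 1, 4),
         ("P4", 3, 5, 5), ("P5", 4, 2, 5)]

time_now, done = 0, set()
while len(done) < len(procs):
    ready = [p for p in procs if p[0] not in done and p[1] <= time_now]
    if not ready:
        time_now += 1                 # CPU idle until next arrival
        continue
    # pick the highest priority; break ties by earliest arrival (FCFS)
    name, arrival, burst, _ = max(ready, key=lambda p: (p[3], -p[1]))
    time_now += burst                 # non-preemptive: run to completion
    tat = time_now - arrival
    print(name, "exit =", time_now, "TAT =", tat, "wait =", tat - burst)
    done.add(name)
```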
Problem-02: Consider the set of 5 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority preemptive, calculate the average waiting
time and average turn around time. (Higher number represents higher priority).
Solution-
Gantt Chart-
| P1 | P2 | P3 | P4 | P5 | P2 | P1 |
0    1    2    3    8    10   12   15
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3–2=1 1–1=0
P4 8 8–3=5 5–5=0
P5 10 10 – 4 = 6 6–2=4
Now,
• Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
• Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
4.3 Deadlock
• Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
• For example, Process 1 may hold Resource 1 while waiting for Resource 2, which is
held by Process 2, while Process 2 in turn waits for Resource 1.
Deadlock can arise if the following four necessary conditions hold simultaneously.
1. Mutual Exclusion: One or more resources are non-sharable, meaning only one
process can use a resource at a time.
2. Hold and Wait: A process is holding at least one resource and waiting for
additional resources.
3. No Pre-emption: A resource cannot be taken from a process unless the process
releases the resource voluntarily.
4. Circular Wait: A set of processes are waiting for each other in circular form,
i.e., all the processes are waiting for resources in a cyclic manner such that the
last process is waiting for a resource which is being held by the first process.
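The four conditions can be demonstrated with a small (deliberately broken) Python sketch in which two threads acquire two locks in opposite order:

```python
import threading
import time

r1, r2 = threading.Lock(), threading.Lock()   # two non-sharable resources

def process_1():
    with r1:                        # hold Resource 1 ...
        time.sleep(0.1)
        with r2:                    # ... and wait for Resource 2
            pass

def process_2():
    with r2:                        # hold Resource 2 ...
        time.sleep(0.1)
        with r1:                    # ... and wait for Resource 1
            pass

# Mutual exclusion, hold-and-wait, no pre-emption, and circular wait
# all hold at once, so both threads block forever.
t1 = threading.Thread(target=process_1, daemon=True)
t2 = threading.Thread(target=process_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=1); t2.join(timeout=1)
print("deadlocked:", t1.is_alive() and t2.is_alive())   # True
```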
Difference between Starvation and Deadlock
Sr.  Deadlock                                        Starvation
4    The requested resource is blocked by the        The requested resource is continuously
     other process.                                  used by higher-priority processes.
Deadlock Handling
The various strategies for handling deadlock are-
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance
1. Deadlock Prevention
• Deadlocks can be prevented by preventing at least one of the four required
conditions:
Mutual Exclusion
• Shared resources such as read-only files do not lead to deadlocks.
• Unfortunately, some resources, such as printers and tape drives, require exclusive
access by a single process.
Hold and Wait
• To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others.
No Preemption
• Preemption of process resource allocations can prevent this condition of
deadlocks, when it is possible.
Circular Wait
• One way to avoid circular wait is to number all resources, and to require that
processes request resources only in strictly increasing (or decreasing) order.
2. Deadlock Avoidance
• In deadlock avoidance, the operating system checks whether the system is in a
safe state or in an unsafe state at every step which the operating system performs.
• The process continues until the system is in a safe state.
• Once the system moves to an unsafe state, the OS has to backtrack one step.
• In simple words, the OS reviews each allocation so that the allocation doesn't
cause a deadlock in the system.
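Deadlock avoidance is commonly implemented with a Banker's-algorithm-style safety check. The sketch below is a minimal Python illustration; the allocation and need matrices are invented for the example:

```python
def is_safe(available, allocation, need):
    """Banker's-style check: can every process run to completion from here?"""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            # process i can finish if its remaining need fits in `work`
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # release its resources
                finish[i] = True
                progress = True
    return all(finish)        # safe state iff every process can finish

# 3 processes, 2 resource types: current allocation and remaining need
allocation = [[1, 0], [0, 1], [1, 1]]
need       = [[1, 1], [2, 0], [0, 1]]
print(is_safe([1, 1], allocation, need))   # True: a safe ordering exists
```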
4. Deadlock Ignorance
• This strategy involves ignoring the concept of deadlock and assuming as if it does
not exist.
• This strategy helps to avoid the extra overhead of handling deadlock.
• Windows and Linux use this strategy and it is the most widely used method.
MEMORY MANAGEMENT
Memory Allocation:
Memory allocation refers to the process of assigning memory space to programs and
processes running on a computer system. There are several memory allocation
techniques used by operating systems to manage memory efficiently. Some of the
common memory allocation techniques include contiguous allocation, paging, and
swapping.
1. Contiguous Allocation:
In contiguous allocation, each process is allocated a contiguous block of memory.
This means that the entire memory space required by a process must be available as
a single block of consecutive memory addresses. Contiguous allocation is simple and
efficient in terms of memory access since there is no need for translation of
addresses. However, it can lead to fragmentation, where small gaps of unusable
memory form between allocated blocks, reducing overall memory utilization.
Advantages:
• Efficiency: Memory access is efficient since the entire process is stored in a single
contiguous block.
• Simplicity: It's relatively simple to implement compared to other techniques.
• No Overhead: There is no overhead for managing page tables or fragmentation.
Disadvantages:
• Fragmentation: External fragmentation can occur when memory is allocated and
deallocated over time, leading to inefficient use of memory.
• Limited Flexibility: It's challenging to allocate memory for processes with varying
sizes due to fragmentation.
1. Single Partition Allocation:
In single partition allocation, memory is divided into two parts: one for the
operating system and one for a single user process.
Example: Single Partition Allocation
Suppose we have a computer system with 1000 bytes of memory. In this scenario:
• The operating system occupies the memory addresses from 0 to 199.
• The user processes are allocated memory addresses from 200 to 999.
• Each user process is loaded into the entire user partition when it is running.
Advantages:
• Straightforward implementation.
• Easy to manage since only one process runs at a time.
Disadvantages:
• Inefficient memory use, as only one process can run at a time.
• Limited multitasking capability, as multiple processes cannot run concurrently.
2. Multiple Partition Allocation:
In multiple partition allocation, memory is divided into multiple fixed-size partitions.
Each partition can accommodate one process. When a process arrives, it is allocated
memory from a free partition that is large enough to hold it.
Example: Multiple Partition Allocation
Suppose we have a computer system with 1000 bytes of available memory, and we
divide it into two partitions of equal size:
• When a process arrives requiring 300 bytes, it is allocated into Partition 1.
• If another process requiring 200 bytes arrives, it can be allocated into Partition 2.
• If a process requiring more memory than the size of any partition arrives, it cannot
be accommodated.
Advantages:
• Allows for better memory utilization compared to single partition allocation.
• Supports multitasking by allowing multiple processes to run concurrently.
Disadvantages:
• External fragmentation can occur as processes are loaded and unloaded, leaving
small unusable gaps between partitions.
• It's challenging to accommodate processes of varying sizes efficiently, especially if the
available memory becomes fragmented.
3. Allocation Strategies:
In both single and multiple partition allocation, various allocation strategies can be
used to assign processes to memory partitions:
• First Fit: The operating system allocates the first available partition that is large
enough to hold the process.
• Best Fit: The operating system allocates the smallest available partition that is large
enough to hold the process.
• Worst Fit: The operating system allocates the largest available partition.
Advantages and Disadvantages of Allocation Strategies:
• First Fit:
• Simple to implement.
• May lead to less fragmentation compared to Best Fit.
• May waste some space if the first available partition is significantly larger than
the process.
• Best Fit:
• Reduces wastage by selecting the smallest partition that fits the process.
• May lead to more fragmentation compared to First Fit.
• Requires additional time for searching the entire list of partitions for the best
fit.
• Worst Fit:
• Reduces fragmentation by using larger partitions.
• May lead to more wastage compared to First and Best Fit.
• Like Best Fit, requires additional time for searching.
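The three strategies can be sketched in a few lines of Python (the free-partition sizes below are assumed for illustration):

```python
def first_fit(partitions, size):
    """Return the index of the first partition large enough, or None."""
    for i, p in enumerate(partitions):
        if p >= size:
            return i
    return None

def best_fit(partitions, size):
    """Return the index of the smallest partition that still fits."""
    fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return min(fits)[1] if fits else None

def worst_fit(partitions, size):
    """Return the index of the largest partition that fits."""
    fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return max(fits)[1] if fits else None

free = [100, 500, 200, 300, 600]
print(first_fit(free, 212))   # 1 -> 500, the first block that fits
print(best_fit(free, 212))    # 3 -> 300, the tightest fit (least waste)
print(worst_fit(free, 212))   # 4 -> 600, the largest block
```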
2. Paging:
Paging is a memory management scheme in which physical memory is divided into
fixed-size blocks called frames, and logical memory is divided into blocks of the
same size called pages. When a process is loaded into memory, it is divided into
pages, which are then mapped to frames in physical memory using a page table.
Paging allows for more efficient memory management and helps overcome the
fragmentation problems associated with contiguous allocation.
1. Physical Memory:
Imagine physical memory (RAM) as a series of frames, each capable of holding one page
of data. In our example, we have 8 frames in physical memory.
2. Logical Memory:
Logical memory, on the other hand, consists of a series of pages, each of the same size
as a frame in physical memory. In our example, each page is 4KB in size, and we have 16
pages of data.
3. Page Table:
The page table is used by the operating system to map logical pages to physical frames. It
contains an entry for each page, indicating which frame it is currently located in.
4. Memory Access:
When a program accesses memory, it does so using logical addresses. The operating
system translates these logical addresses into physical addresses using the page table.
For example, if the program accesses Page 2, the page table indicates that Page 2 is
currently in Frame 5. So, the operating system maps Page 2 to Frame 5 and retrieves the
data.
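The translation step can be sketched as follows (page-table contents assumed, with page 2 mapped to frame 5 as in the example):

```python
PAGE_SIZE = 4096                      # 4 KB pages, matching the example

# page_table[page_number] = frame_number (page 2 -> frame 5, as above)
page_table = {0: 3, 1: 7, 2: 5, 3: 0}

def translate(logical_address):
    """Split a logical address into (page, offset) and map page -> frame."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]          # a missing entry here is a page fault
    return frame * PAGE_SIZE + offset

# An address 100 bytes into page 2 maps 100 bytes into frame 5:
print(translate(2 * PAGE_SIZE + 100))   # 5 * 4096 + 100 = 20580
```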
Advantages:
• No External Fragmentation: Paging eliminates external fragmentation by dividing
memory into fixed-size blocks (pages).
• Flexible Allocation: Processes can be allocated memory in non-contiguous chunks,
allowing for more flexible memory management.
• Simpler Address Translation: Address translation is simplified since it involves
mapping logical page numbers to physical frame numbers.
Disadvantages:
• Internal Fragmentation: Paging can suffer from internal fragmentation if the last
page of a process does not fully utilize its allocated frame.
• Page Table Overhead: Maintaining page tables can consume additional memory and
CPU resources.
• Page Faults: Paging introduces the concept of page faults, which can lead to
performance overhead if not managed efficiently.
3. Swapping:
1. Swapping is a mechanism in which a process can be swapped temporarily out of
main memory (moved) to secondary storage (disk) to make that memory
available to other processes. At some later time, the system swaps the process
back from secondary storage to main memory.
2. Swapping is a technique used by operating systems to manage memory by moving
pages of data between main memory (RAM) and secondary storage (usually a hard
disk) when they are not actively being used. When the operating system detects that
the amount of free memory is low, it selects some pages that are not currently being
used and swaps them out to the secondary storage to free up space in RAM. Later,
when the swapped-out pages are needed again, the operating system can swap them
back into the main memory. Swapping allows for more efficient memory usage by
allowing the operating system to utilize secondary storage as an extension of RAM
when necessary.
Advantages:
• Increases Effective Memory Size: Swapping allows the operating system to
effectively increase the amount of available memory by using secondary storage as
an extension of RAM.
• Better Memory Utilization: It helps in better utilization of physical memory by
moving inactive pages out to secondary storage.
• Allows Multi-programming: Swapping enables multi-programming by allowing more
processes to be loaded into memory than would otherwise fit.
Disadvantages:
• Performance Overhead: Swapping introduces overhead due to the time taken to
move pages between main memory and secondary storage.
• Disk I/O Bottleneck: Excessive swapping can lead to a bottleneck on disk I/O,
especially if the secondary storage is slower than RAM.
• Complexity: Managing swapping requires complex algorithms to decide which pages
to swap and when, as well as to handle page faults efficiently.
FRAGMENTATION
Fragmentation in operating systems occurs when memory is allocated and deallocated in a
way that leaves unusable memory fragments scattered throughout the system. There are
two main types of fragmentation: external fragmentation and internal fragmentation.
1. External Fragmentation:
• External fragmentation occurs when there is enough total memory space to
satisfy a request, but it is not contiguous.
• This type of fragmentation typically occurs in systems that use contiguous
memory allocation techniques, such as fixed or dynamic partitioning.
• As memory is allocated and deallocated, free memory blocks become
scattered throughout the memory space, leaving small unusable gaps
between allocated blocks.
• External fragmentation can lead to inefficient memory utilization since
available memory cannot be used if it is fragmented into smaller pieces.
2. Internal Fragmentation:
• Internal fragmentation occurs when allocated memory is larger than what the
process actually needs.
• This typically happens in systems that allocate memory in fixed-size blocks or
segments.
• When a process is allocated memory, it may be given a larger block than
necessary, leading to wasted space within that block.
• Although the entire allocated block is used by the process, the extra space
within it is not utilized efficiently, resulting in internal fragmentation.
DISADVANTAGES OF FRAGMENTATION:
• Reduced Memory Utilization: Both external and internal fragmentation lead to
inefficient use of memory, as some memory space becomes unusable.
• Performance Degradation: Fragmentation can degrade system performance. For
example, accessing fragmented memory may require additional time for memory
management operations like searching for contiguous blocks or moving data around.
• Memory Management Overhead: Fragmentation may require additional overhead
for memory management. For instance, the operating system may need to
implement complex algorithms to handle fragmentation, which can consume CPU
cycles and memory resources.
• Increased Swapping: Swapping may occur more frequently in systems with high
fragmentation as the operating system tries to free up contiguous memory space by
moving pages to secondary storage.
QUESTIONS
1. What is an operating system?
2. Why is an operating system necessary for a computer?
3. What are the main functions of an operating system?
4. Define process in the context of an operating system.
5. What is process management and why is it important?
6. Explain the concept of multitasking in an operating system.
7. Differentiate between process and thread.
8. What is a process control block (PCB)?
9. How does an operating system handle process scheduling?
10. What are the criteria for selecting a scheduling algorithm?
11. Describe the difference between preemptive and non-preemptive scheduling.
12. What is CPU burst time and how does it relate to scheduling?
13. Explain the terms "context switch" and "dispatch latency."
14. How does an operating system manage processes in a multi-user environment?
15. What is process synchronization and why is it necessary?
16. Define deadlock in the context of process management.
17. How does an operating system prevent or resolve deadlocks?
18. What is a semaphore and how is it used for synchronization?
19. Explain the concept of inter-process communication (IPC).
20. What are some common IPC mechanisms used by operating systems?
QUESTIONS AND ANSWERS
21. What is an operating system?
22. An operating system is software that acts as an intermediary between computer
hardware and user applications. It manages computer resources, provides essential
services, and facilitates communication between software and hardware
components.
23. Why is an operating system necessary for a computer?
24. An operating system is necessary for a computer because it provides a user-friendly
interface, manages hardware resources efficiently, facilitates multitasking, enables
software execution, ensures security, and offers various services such as file
management and networking.
25. What are the main functions of an operating system?
26. The main functions of an operating system include process management, memory
management, file system management, device management, security and access
control, user interface management, and networking.
27. Define process in the context of an operating system.
28. A process is an instance of a running program. It consists of the program code,
program counter, registers, stack, heap, and other necessary data. Processes are
managed by the operating system and can execute concurrently.
29. What is process management and why is it important?
30. Process management involves creating, scheduling, terminating, and controlling
processes. It is important for efficient utilization of CPU resources, ensuring fair
access to resources, providing multitasking capabilities, and facilitating concurrent
execution of multiple programs.
31. Explain the concept of multitasking in an operating system.
32. Multitasking is the ability of an operating system to execute multiple processes
concurrently on a single CPU. It allows users to run multiple programs simultaneously
and provides the illusion of parallel execution by rapidly switching between
processes.
33. Differentiate between process and thread.
34. A process is an independent entity that runs in its own memory space, whereas a
thread is a lightweight execution unit within a process. Multiple threads can exist
within a single process and share the same memory space, enabling concurrent
execution.
35. What is a process control block (PCB)?
36. A process control block (PCB) is a data structure used by the operating system to
store information about a process. It contains essential details such as process state,
program counter, CPU registers, memory allocation, and scheduling information.
37. How does an operating system handle process scheduling?
38. The operating system uses process scheduling algorithms to determine the order in
which processes are executed on the CPU. It selects processes from the ready queue
and allocates CPU time based on scheduling policies and priorities.
39. What are the criteria for selecting a scheduling algorithm?
40. Criteria for selecting a scheduling algorithm include CPU utilization, throughput,
turnaround time, waiting time, response time, fairness, and scalability.
41. Describe the difference between preemptive and non-preemptive scheduling.
42. Preemptive scheduling allows the operating system to interrupt a currently running
process to allocate CPU time to a higher-priority process. Non-preemptive scheduling
does not allow such interruptions and lets processes run until they voluntarily yield
the CPU.
43. What is CPU burst time and how does it relate to scheduling?
44. CPU burst time is the amount of time a process spends executing on the CPU without
being interrupted. It is a crucial factor in scheduling algorithms, as scheduling
decisions are often based on predictions of future CPU burst times.
45. Explain the terms "context switch" and "dispatch latency."
46. A context switch is the process of saving the state of a currently running process,
loading the state of another process, and transferring control from one process to
another. Dispatch latency refers to the time taken by the operating system to
perform a context switch.
47. How does an operating system manage processes in a multi-user environment?
48. In a multi-user environment, the operating system allocates resources fairly among
multiple users, enforces access control mechanisms to protect user data, and
provides facilities for user authentication, session management, and inter-process
communication.
49. What is process synchronization and why is it necessary?
50. Process synchronization is the coordination of multiple processes to ensure that they
cooperate and share resources in a controlled manner. It is necessary to prevent race
conditions, avoid data inconsistency, and maintain system integrity.
51. Define deadlock in the context of process management.
52. Deadlock occurs when two or more processes are unable to proceed because each is
waiting for a resource held by the other. Deadlocks can lead to system-wide resource
starvation and must be prevented or resolved by the operating system.
53. How does an operating system prevent or resolve deadlocks?
54. Operating systems prevent or resolve deadlocks using techniques such as resource
allocation policies, deadlock detection algorithms, deadlock avoidance strategies,
and deadlock recovery mechanisms like process termination or resource preemption.
55. What is a semaphore and how is it used for synchronization?
56. A semaphore is a synchronization primitive used to control access to shared
resources by multiple processes or threads. It can be used to implement mutual
exclusion, synchronization, and signaling mechanisms by providing atomic operations
such as wait (P) and signal (V).
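For illustration, Python's threading.Semaphore provides exactly these P/V operations; the sketch below (names invented for the example) lets at most two of five threads hold a resource at once:

```python
import threading

# A counting semaphore initialized to 2: at most two threads may hold
# the resource at once. wait (P) = acquire, signal (V) = release.
sem = threading.Semaphore(2)

def use_resource(name):
    with sem:                      # P: decrement the count, block if it is 0
        print(name, "using the resource")
    # leaving the block performs V: increment the count and wake a waiter

threads = [threading.Thread(target=use_resource, args=(f"T{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```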
57. Explain the concept of inter-process communication (IPC).
58. Inter-process communication (IPC) refers to mechanisms used by processes to
exchange data, synchronize their actions, and communicate with each other. IPC
allows processes to cooperate, coordinate, and share information in a multi-tasking
environment.
59. What are some common IPC mechanisms used by operating systems?
60. Common IPC mechanisms include pipes, sockets, message queues, shared memory,
signals, semaphores, and remote procedure calls (RPC). These mechanisms facilitate
communication and synchronization between processes running on the same system
or across networked computers.