Compiled Operating Systems Notes

INTRODUCTION TO OPERATING SYSTEM

1. Define the following terms.


a) Operating System (OS)
-is system software that manages computer hardware and software resources
and provides common services for computer programs.
-software that supports a computer's basic functions, such as scheduling tasks
and controlling peripherals.
-a type of software interface between the user and the device hardware
b) Processes
-A process is an instance of a program running in a computer.
- a program in execution which then forms the basis of all computation.
c) Files
- is a collection of related information that is recorded on secondary storage.
-A collection of data or information
Example of files in OS
• Data Files- a computer file which stores data to be used by
a computer application or system
• Text Files- a type of computer file that contains plain text,
i.e., human-readable characters
• Program Files-a folder or directory where all third-party
applications are installed
• Directory Files-a file system cataloguing structure which
contains references to other computer files
d) System call
- is a way for programs to interact with the operating system.
-a system call refers to the process used by a computer program to request a
service from an operating system (see the sketch after these definitions).
e) Shell
-is a software interface that's often a command line interface that enables the
user to interact with the computer.
- a program that takes commands from the keyboard and gives them to the
operating system to perform.
f) Kernel
-is an intermediary between applications and hardware.
-acts as a bridge between applications and data processing performed at
hardware level.
g) Virtual Machines
-is a compute resource that uses software instead of a physical computer to run
programs and deploy apps.
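
As a small illustration of the system call idea defined above, here is a minimal sketch for a POSIX system (an assumption; Linux or similar). write() and getpid() are thin library wrappers that trap into the kernel:

/* syscall_demo.c - a program requesting services from the OS.
   Assumes a POSIX system; compile with: cc syscall_demo.c */
#include <unistd.h>   /* write(), getpid() - wrappers over system calls */
#include <stdio.h>

int main(void) {
    /* ask the kernel to send bytes to file descriptor 1 (standard output) */
    write(1, "hello from user space\n", 22);

    /* ask the kernel for this process's identifier */
    printf("my process id is %d\n", (int)getpid());
    return 0;
}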

2. What’s the classification of operating system structure.


a) Monolithic architecture of operating system.
-It is the oldest architecture used for developing operating systems.
-An operating system architecture where the entire operating system works
in kernel space.
b) Layered Architecture of operating system.
-It breaks up the operating system into different layers and retains much more
control over the system.
-The OS is split into various layers such that all the layers perform
different functionalities.
c) Virtual machine architecture of operating system.
-It is created by a real machine's operating system, which makes a single real
machine appear to be several real machines.
d) Client/server architecture of operating system
-An architecture of a computer network in which many clients (remote processors)
request and receive service from a centralized server (host computer).
-A computing model in which the server hosts, delivers, and manages most of
the resources and services requested by the client.

3. Types of Operating System.

a) Batch Operating System.


- Batch processing Operating System is a type of operating system that
processes similar types of jobs into a batch.
- The users of a batch operating system do not interact with the computer
directly. Each user prepares his job on an off-line device like punch cards and
submits it to the computer operator.
Examples of Batch operating system.
• Payroll System
• Bank Statements
ADVANTAGES OF BATCH PROCESSING
i. The processors of a batch system know how long a job will
take while it is waiting in the queue.
ii. This system can easily manage large jobs again and again.
iii. The batch process can be divided into several stages to
increase processing speed.
iv. CPU utilization gets improved.

DISADVANTAGES OF BATCH PROCESSING

i. Computer operators must have full knowledge of batch systems.
ii. When a job fails once, it must be rescheduled for completion,
and it may take a long time to complete the task.
iii. The computer system and the user have no direct interaction.
iv. If a job enters an infinite loop, other jobs must wait for
an unknown period of time.
b) Time-sharing Operating Systems.
-Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time.

Examples of Time-sharing operating system.


• Windows server
• UNIX
• LINUX

FEATURES OF TIME-SHARING OS
✓ Multiple online users can use the same computer at the same
time.
✓ End-users feel that they monopolize the computer system.
✓ Better interaction between users and computers.
✓ It can quickly process a large number of tasks.

ADVANTAGES OF TIME SHARING


✓ It provides a quick response.
✓ Reduces CPU idle time.
✓ All the tasks are given a specific time.
✓ Improves response time.
✓ Easy to use and user friendly.
✓ Avoids duplication of software
DISADVANTAGES OF TIME SHARING
✓ It consumes many resources.
✓ Requires high specification of hardware.
✓ There is a probability of data communication problems.
✓ An issue with the security and integrity of user programs and
data.

c) Distributed Operating System.


- Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users. Data processing jobs are distributed among the
processors accordingly.
- Distributed Operating System is a type of model where applications are
running on multiple computers linked by communications.
ADVANTAGES OF DISTRIBUTED (OS)
✓ Reduction of the load on the host computer.
✓ Better service to the customers.
✓ Speedup the exchange of data with one another via
electronic mail.
✓ With resource sharing facility, a user at one site may be able
to use the resources available at another.
✓ They are far more reliable than single systems in terms of failures.
✓ They are made to be efficient in every aspect since they possess
multiple computers.

DISADVANTAGES OF DISTRIBUTED (OS)

✓ The implementation cost of a distributed system is
significantly higher.

d) A Network Operating System


-runs on a server and provides the server the capability to manage data, users,
groups, security, applications, and other networking functions.

Examples of network system.


• Windows Server 2008
• UNIX
• LINUX
ADVANTAGES OF NETWORK (OS)
✓ Centralized servers are highly stable.
✓ Security is server-managed.
✓ Remote access to servers is possible from different locations
and types of systems.
✓ It allows you to share resources like printers and files between
different computers on the network.
✓ It has built-in security features that help to protect the network
from unauthorized access and other security threats.

DISADVANTAGES OF NETWORK (OS)

✓ It can be vulnerable to security threats, such as hacking,
viruses, and malware.
✓ It requires regular maintenance and updates to keep it running
smoothly.
✓ It can be expensive, which can be a problem for some
organizations, especially those with limited budgets.
✓ It can be quite complex, which means it can be difficult
for people who aren't familiar with it to use.

e) Real-Time Operating System.


- is defined as a data processing system in which the time interval required to
process and respond to inputs is so small that it controls the environment.

Types of Real-Time (OS) system.


✓ Hard real-time systems: - Hard real-time software systems have a set of
strict deadlines, and missing a deadline is considered a system failure.
Examples of hard real time systems.
• airplane sensor.

• autopilot systems.
• spacecrafts.
✓ Soft real-time systems: - try to meet deadlines but do not fail if a
deadline is missed.

Examples of soft real-time systems.


• Audio delivery system.
• Video delivery system.

ADVANTAGES OF REAL-TIME (OS)


✓ Maximum utilization of devices and systems. Thus, more
output from all the resources.
✓ They use fewer resources than other types of operating
systems.
✓ They are scalable and can be adapted to fit a wide range of
applications and devices.
✓ With real-time data, users can get up-to-date information about
the status of their systems or devices.
DISADVANTAGES OF REAL-TIME (OS)
✓ They are more complex than other types of operating systems, which
means they require more technical knowledge to use.

f) Multitasking
-The multitasking OS refers to a logical extension of the multiprogramming
operating system, which allows users to run many programs simultaneously.

ADVANTAGES OF MULTITASKING OPERATING SYSTEM


✓ Has the ability to execute multiple applications or processes
concurrently.
✓ It effectively allocates system resources such as memory, CPU time, and
input/output devices among multiple processes.
✓ It enables quick context switching between processes, resulting in faster
response times for users.
✓ Modern hardware includes multi-core processors, and multitasking operating
systems can effectively utilize these resources.
✓ It employs virtual memory techniques to allow each process to use more
memory than is physically available.

DISADVANTAGES OF MULTITASKING OPERATING SYSTEM


✓ There might be competition for computer resources like RAM and CPU
time when multiple programs are active at once.
✓ It needs more powerful hardware, which may cost more than the
infrastructure required for single-tasking systems.

g) Multiprogramming
-Multiprogramming OS is an ability of an operating system to execute more than
one program using a single processor machine.

ADVANTAGES OF MULTIPROGRAMMING OPERATING SYSTEM


✓ CPU utilization is high because the CPU never goes to the idle state.
✓ Memory utilization is efficient.
✓ Response time is shorter.
✓ Efficient resource utilization.
✓ Short jobs are completed faster than long jobs.

DISADVANTAGES OF MULTIPROGRAMMING OPERATING SYSTEM


✓ Long jobs have to wait longer.
✓ Tracking all processes is sometimes difficult.
✓ CPU scheduling is required.
✓ Requires efficient memory management.

4. Define the following terms as used in Operating System.

a) Interactivity.
- Interactivity refers to the ability of users to interact with a computer system.

Activities related to interactivity


i. Provides the user an interface to interact with the system.
ii. Manages input devices to take inputs from the user. For example,
keyboard.
iii. Manages output devices to show outputs to the user. For example,
Monitor.

b) Real-Time Systems
-Systems that process data and events that have critically defined time
constraints.
Activities related to real-time systems
i. In such systems, the Operating System typically reads from and reacts to
sensor data.
c) Distributed Environment
- refers to multiple independent CPUs or processors in a computer system.

d) Spooling
- Spooling is an acronym for simultaneous peripheral operations on line. Spooling
refers to putting data of various I/O jobs in a buffer. This buffer is a special area in
memory or hard disk which is accessible to I/O devices.

e) Job control

- refers to the control of multiple tasks or jobs on a computer system, ensuring
that they each have access to adequate resources to perform correctly.

Job control language (JCL)


- is a scripting language that is used to communicate with the operating system.

Command language
- is a language used for executing a series of command instructions that
would otherwise be executed at the prompt.

Advantages of command languages


✓ Very easy for all types of users to write.
✓ Do not require the files to be compiled.
✓ Easy to modify and make additional commands
✓ Very small files
✓ Do not require any additional programs or files that are not
already found on the operating system
Disadvantages of command languages
✓ Can be limited compared with other programming
languages or scripting languages.
✓ May not execute as fast as other languages or compiled
programs.
✓ Some command languages offer little more than the commands
already available for the operating system being used.

PROCESS MANAGEMENT.
5. What’s a process?
-A process is basically a program in execution.
6. Define the following terms in process.

a) Stack
-The process Stack contains the temporary data such as method/function
parameters, return address, and local variables.
b) Heap
-This is a dynamically allocated memory to a process during its runtime.
c) Text
-This contains the compiled program code; the current activity is represented
by the value of the Program Counter and the contents of the processor's registers.
d) Data
-This section contains the global and static variables.
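
The four regions defined above can be seen in a short C sketch (a minimal illustration; the exact addresses printed vary by platform, but where each variable lands is the conventional layout):

/* segments.c - text, data, heap, and stack regions of a process */
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                     /* data: global/static variables */

int main(void) {                             /* the code itself lives in text */
    int local = 7;                           /* stack: locals and parameters */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: runtime allocation */
    *dynamic = 99;

    printf("data:  %p\n", (void *)&global_counter);
    printf("stack: %p\n", (void *)&local);
    printf("heap:  %p\n", (void *)dynamic);

    free(dynamic);                           /* release the heap memory */
    return 0;
}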
7. What’s a process model.
-It’s a process of the same nature that are classified together into a model.

8. Define goals of a process model in Operating System.


• Descriptive.
-Track what actually happens during a process
• Prescriptive
-Define the desired processes and how they should/could/might be performed.
-Establish rules, guidelines, and behaviour patterns which, if followed, would lead to
the desired process performance.
• Explanatory
-Provide explanations about the rationale of processes
-Pre-defines points at which data can be extracted for reporting purposes.
-Explore and evaluate the several possible courses of action based on rational
arguments.
9. Define Thread.
-A thread is a single sequential stream of execution within a process.
-A smaller unit of execution within a process that shares the same memory and
resources.
10. What are the features of Threads in an Operating System?
• Each thread has its own stack and registers.
• Threads can directly communicate with each other as they share the same
address space.
• One system call is capable of creating more than one thread.
• Threads share data, memory, resources, files.
11. What’s the similarity between Process and Thread.?
• Like processes threads share CPU and only one thread active (running) at a
time.
• Like processes, threads within a process, threads within processes execute
sequentially.
• Like processes, thread can create children.
• And like process, if one thread is blocked, another thread can run.
12. What’s the difference between Process and Thread.?
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task
• Unlike processes, thread is design to assist one other. Note that processes might
or might not assist one another because processes may originate from different
users.
13. Give the two thread levels in an operating system.
a) User-Level Threads- User-level threads are implemented in user-level libraries,
rather than via system calls, so thread switching does not need to call the
operating system or cause an interrupt to the kernel.
Advantages of User-Level thread
• Does not require modification to operating systems.
• Simple Representation: Each thread is represented simply by a PC,
registers, stack and a small control block, all stored in the user process
address space.
• Simple Management: This simply means that creating a thread,
switching between threads and synchronization between threads can all
be done without intervention of the kernel.
• Fast and Efficient: Thread switching is not much more expensive than a
procedure call.
Disadvantages of User-Level thread
• There is a lack of coordination between threads and operating system
kernel.
• User-level threads require non-blocking system calls, i.e., a
multithreaded kernel.

b) Kernel-Level Threads- In this method, the kernel knows about and manages the
threads. No runtime system is needed in this case.

Advantages of Kernel-Level thread


• Because the kernel has full knowledge of all threads, the scheduler may
decide to give more time to a process having a large number of threads
than to a process having a small number of threads.
• Kernel-level threads are especially good for applications that frequently
block.
Disadvantages of Kernel-Level thread
• The kernel-level threads are slow and inefficient
• Since the kernel must manage and schedule threads as well as processes,
it requires a full thread control block (TCB) for each thread to maintain
information about threads. As a result, there is significant overhead and
increased kernel complexity.
14. Define multithreading and give examples.
- Multithreading enables us to run multiple threads concurrently.
Examples of Multithreading.
i. Many-to-Many Model
- The many-to-many model multiplexes any number of user threads
onto an equal or smaller number of kernel threads.
ii. Many-to-One Model
- The many-to-one model maps many user-level threads to one
kernel-level thread.
iii. One-to-One Model
- The one-to-one model maps a single user-level thread to a single
kernel-level thread.
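
A minimal multithreading sketch using POSIX threads (an assumption: a POSIX system, linking with -pthread). Both threads run in the same address space, so they see, and must synchronize access to, the same counter:

/* threads_demo.c - two threads sharing one process's memory.
   Compile: cc threads_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared = 0;                           /* visible to every thread */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);        /* shared data needs synchronization */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);               /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);      /* 200000: one shared address space */
    return 0;
}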
Advantages of Threads over Multiple Processes
• Context Switching- Threads are very inexpensive to create and
destroy, and they are inexpensive to represent.
• Sharing- Threads allow the sharing of many resources that cannot be
shared between processes, for example, the code section, data section,
and Operating System resources like open files.

Disadvantages of Threads over Multiple Processes

• Blocking- The major disadvantage is that if the kernel is single-
threaded, a system call of one thread will block the whole process,
and the CPU may be idle during the blocking period.
• Security- Since there is extensive sharing among threads, there
is a potential problem of security.
Difference between Process and Thread
Process: Processes are basically programs that are dispatched from the ready state and
are scheduled on the CPU for execution. The PCB (Process Control Block) holds the concept
of a process. A process can create other processes, which are known as Child Processes. A
process takes more time to terminate, and it is isolated, meaning it does not share memory
with any other process.
A process can have the following states: new, ready, running, waiting, terminated, and
suspended.
Thread: A thread is a segment of a process, which means a process can have multiple threads,
and these multiple threads are contained within the process. A thread has three states: Running,
Ready, and Blocked.
A thread takes less time to terminate as compared to a process, but unlike processes,
threads are not isolated.

Difference between Process and Thread:

1. A process means any program in execution; a thread means a segment of a process.
2. A process takes more time to terminate; a thread takes less time to terminate.
3. A process takes more time for creation; a thread takes less time for creation.
4. A process takes more time for context switching; a thread takes less time for context switching.
5. A process is less efficient in terms of communication; a thread is more efficient in terms of communication.
6. Multiprogramming holds the concept of multiple processes; no multiple programs are needed for multiple threads, because a single process consists of multiple threads.
7. A process is isolated; threads share memory.
8. A process is called a heavyweight process; a thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface to the operating system; thread switching does not require calling the operating system or interrupting the kernel.
10. If one process is blocked, it does not affect the execution of other processes; if a user-level thread is blocked, all other user-level threads of that process are blocked.
11. A process has its own Process Control Block, Stack, and Address Space; a thread has its parent's PCB, its own Thread Control Block and Stack, and a common address space.
12. Changes to the parent process do not affect child processes; since all threads of a process share the address space and other resources, changes to the main thread may affect the behaviour of the other threads.
13. A system call is involved in creating a process; no system call is involved in creating a thread, which is created using APIs.
14. Processes do not share data with each other; threads share data with each other.
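
The isolation rows above can be demonstrated with fork() (a POSIX sketch): a child process gets its own copy of the parent's memory, so its changes stay invisible to the parent, in contrast to the shared counter in the earlier threads example:

/* fork_demo.c - processes do not share data */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 1;                        /* each process ends up with its own copy */

int main(void) {
    pid_t pid = fork();               /* system call: create a child process */
    if (pid == 0) {                   /* child branch */
        value = 100;                  /* modifies only the child's copy */
        return 0;
    }
    wait(NULL);                       /* parent waits for the child to exit */
    printf("parent still sees value = %d\n", value);   /* prints 1 */
    return 0;
}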

User-level threads and Kernel-level threads

A thread is a lightweight process that can be managed independently by a scheduler. It
improves application performance using parallelism.

A thread shares information like the data segment, code segment, files, etc. with its peer
threads, while it contains its own registers, stack, counter, etc.

The two main types of threads are user-level threads and kernel-level threads.

User-Level Threads

The user-level threads are implemented by users, and the kernel is not aware of the existence
of these threads. It handles them as if they were single-threaded processes. User-level threads
are small and much faster than kernel-level threads. They are represented by a program counter
(PC), stack, registers and a small process control block. Also, there is no kernel involvement in
synchronization for user-level threads.

Advantages of User-Level Threads

Some of the advantages of user-level threads are as follows −

• User-level threads are easier and faster to create than kernel-level threads. They can
also be more easily managed.
• User-level threads can be run on any operating system.
• There are no kernel mode privileges required for thread switching in user-level threads.
Disadvantages of User-Level Threads

Some of the disadvantages of user-level threads are as follows −

• Multithreaded applications in user-level threads cannot use multiprocessing to their
advantage.
• The entire process is blocked if one user-level thread performs blocking operation.

Kernel-Level Threads

Kernel-level threads are handled by the operating system directly and the thread management
is done by the kernel. The context information for the process as well as the process threads is
all managed by the kernel. Because of this, kernel-level threads are slower than user-level
threads.

Advantages of Kernel-Level Threads

Some of the advantages of kernel-level threads are as follows −

• Multiple threads of the same process can be scheduled on different processors in
kernel-level threads.
• The kernel routines can also be multithreaded.
• If a kernel-level thread is blocked, another thread of the same process can be
scheduled by the kernel.
Disadvantages of Kernel-Level Threads

Some of the disadvantages of kernel-level threads are as follows −

• A mode switch to kernel mode is required to transfer control from one thread to
another in a process.
• Kernel-level threads are slower to create as well as manage as compared to user-level
threads.

DEVICE MANAGEMENT
OBJECTIVE OF DEVICE MANAGEMENT
The objectives of device (I/O) management are to efficiently and effectively manage input and
output operations within a computer system. Here are the key objectives:
1. Resource Allocation: Efficiently allocate system resources such as CPU time, memory,
and device access to handle input and output requests from various processes and
devices concurrently.
2. Device Recognition and Configuration: Automatically detect connected devices,
configure them for operation, and load the necessary device drivers to enable
communication between the operating system and the devices.

3. I/O Scheduling: Optimize the scheduling of input and output operations to minimize
latency, maximize throughput, and ensure fair access to shared resources among
competing processes.
4. Error Handling: Detect, report, and recover from errors that may occur during input
and output operations, ensuring data integrity, system reliability, and uninterrupted
operation.
5. Performance Optimization: Utilize techniques such as caching, buffering, and
prefetching to improve I/O performance, reduce latency, and enhance overall system
responsiveness.
6. Concurrency and Parallelism: Support concurrent execution of multiple input and
output operations to fully utilize system resources and improve system throughput.
7. Power Management: Implement power-saving mechanisms to reduce energy
consumption and extend battery life in mobile devices by dynamically adjusting device
power states based on workload and usage patterns.
8. Security and Access Control: Enforce access control policies to restrict access to I/O
devices based on user privileges, preventing unauthorized access and protecting
sensitive data from malicious or accidental manipulation.
9. Device Independence: Provide a uniform interface for accessing I/O devices,
abstracting the hardware details to ensure compatibility with different types of devices
and facilitate software portability.
10. Scalability and Extensibility: Design I/O management mechanisms that scale to
accommodate growing system demands and support the addition of new devices
without requiring significant modifications to the system architecture.
By fulfilling these objectives, device management systems ensure efficient, reliable, and secure
handling of input and output operations, thereby enhancing the overall performance and
usability of computer systems.

PRINCIPLES OF DEVICE MANAGEMENT


The principle of input/output (I/O) hardware revolves around facilitating communication
between a computer system and the outside world, enabling the exchange of data and
commands. Here are the key principles:
1. Interfacing: I/O devices interface with the computer system through hardware ports or
channels. These interfaces define the electrical, mechanical, and functional
characteristics necessary for communication. Common interfaces include USB,
Ethernet, HDMI, etc.
2. Device Recognition and Control: The operating system (OS) must recognize connected
I/O devices and manage their operations. This involves device detection, configuration,
and providing appropriate drivers for communication. Device controllers handle the
low-level details of interfacing with specific types of hardware.
3. Data Transfer: Data transfer between the CPU and I/O devices occurs through input
and output operations. Input operations involve transferring data from an external
device to the CPU, while output operations send data from the CPU to the device. This
process often involves buffers to temporarily store data during transfer.
4. Polling vs Interrupts: Two main methods are used for communication between the CPU
and I/O devices: polling and interrupts. Polling involves the CPU regularly checking the
status of devices to determine if they need attention. Interrupts, on the other hand,
allow devices to signal the CPU when they require attention, reducing CPU overhead.
5. Device Drivers: Device drivers are software components that allow the OS to
communicate with specific hardware devices. They provide an interface between the
OS and the device controller, abstracting the hardware details and providing a
standardized interface for software applications to interact with the device.
6. DMA (Direct Memory Access): DMA allows certain types of I/O devices to transfer data
directly to and from memory without CPU intervention, improving overall system
performance by offloading data transfer tasks from the CPU.
7. Error Handling: I/O devices may encounter errors during operation, such as data
transmission errors or hardware malfunctions. Error handling mechanisms are
implemented to detect and respond to these errors, ensuring reliable operation and
data integrity.
8. Synchronization: Synchronization mechanisms are employed to coordinate data
transfer between multiple I/O devices and the CPU, preventing data corruption and
ensuring orderly processing.
By adhering to these principles, I/O hardware enables computers to interact with the external
world, facilitating tasks ranging from simple keyboard input to complex data transfer with
peripheral devices.

PRINCIPLES OF I/O SOFTWARE


Input/output (I/O) software is a critical component of an operating system, responsible for
managing and facilitating the communication between the computer's hardware and the
software applications. Here are the principal concepts of I/O software:
1. Device Independence
• Definition: The I/O system should be designed in such a way that the software does not
need to be aware of the specifics of the device it is interacting with.
• Importance: This allows applications to work with any device using a standard
interface, promoting compatibility and flexibility.
2. Uniform Naming
• Definition: Devices and files should have consistent naming conventions.

• Importance: This simplifies the user's and programmer's task by providing a uniform
way to access different types of devices and files.
3. Error Handling
• Definition: The I/O system must handle errors robustly, providing mechanisms to
report, log, and recover from errors.
• Importance: Ensures system reliability and helps in troubleshooting problems without
crashing the system.
4. Synchronous vs. Asynchronous Operations
• Synchronous: I/O operations that block the requesting process until the operation is
completed.
• Asynchronous: I/O operations that allow the requesting process to continue while the
operation is being performed.
• Importance: Understanding and utilizing these concepts help optimize the
performance and responsiveness of applications.
5. Buffering
• Definition: Temporary storage used to hold data while it is being transferred between
two locations, usually between an application and a hardware device.
• Importance: Buffering can help smooth out differences in data transfer rates and
handle bursts of data.
6. Caching
• Definition: A technique used to store frequently accessed data in faster storage (like
RAM) to speed up access.
• Importance: Significantly improves the performance of I/O operations by reducing
access times.
7. Spooling
• Definition: Simultaneous Peripheral Operations On-Line, a process where data is
temporarily held to be used and executed by a device, program, or the system.
• Importance: Commonly used in print spooling to manage print jobs in a queue.
8. Device Drivers
• Definition: Specialized software modules that allow the operating system to
communicate with hardware devices.
• Importance: Essential for enabling the operating system to support a wide range of
hardware without needing to understand the details of each device.
9. Direct Memory Access (DMA)

• Definition: A feature that allows certain hardware subsystems to access main system
memory independently of the CPU.
• Importance: Reduces CPU overhead and increases data transfer rates by allowing
devices to directly transfer data to/from memory.
10. Interrupt Handling
• Definition: Mechanisms that allow devices to signal the CPU that they need attention.
• Importance: Provides efficient I/O operations by allowing the CPU to be alerted and
respond to I/O events, instead of constantly polling devices.
11. I/O Scheduling
• Definition: The method by which the operating system decides the order in which I/O
operations will be executed.
• Importance: Critical for optimizing the performance and responsiveness of the system,
especially when multiple I/O operations are requested concurrently.
12. Virtualization of Devices
• Definition: The process of abstracting physical hardware into virtual devices that can be
managed by the operating system.
• Importance: Enhances flexibility, isolation, and security in managing hardware
resources.
Summary
I/O software is designed to provide an efficient, uniform, and reliable way for applications to
interact with hardware devices. Understanding and implementing these principles helps
ensure that the system can effectively manage the diverse and complex nature of I/O
operations.

Introduction to I/O software


I/O software is often organized into the following layers:

• User Level Libraries: This provides a simple interface to the user program to perform
input and output. For example, stdio is a library provided by the C and C++
programming languages.

• Kernel Level Modules: This provides the device driver to interact with the device
controller and the device-independent I/O modules used by the device drivers.
Following are some of the services provided:
i. Scheduling - Kernel schedules a set of I/O requests to determine a good order
in which to execute them. When an application issues a blocking I/O system
call, the request is placed on the queue for that device.
ii. Buffering - The Kernel I/O Subsystem maintains a memory area known as a buffer
that stores data while it is transferred between two devices or between a device
and an application. Buffering is done to cope with a speed mismatch between
the producer and consumer of a data stream, or to adapt between devices
with different data transfer sizes (a small user-space sketch appears after
this layer list).
iii. Caching - The kernel maintains cache memory, a region of fast memory that
holds copies of data. Access to the cached copy is more efficient than access to
the original.
iv. Spooling and Device Reservation - A spool is a buffer that holds output for a
device, such as a printer, that cannot accept interleaved data streams. The
spooling system copies the queued spool files to the printer one at a time. In
some operating systems, spooling is managed by a system daemon process. In
other operating systems, it is handled by an in-kernel thread.
v. Error Handling - An operating system with protected memory can guard
against many hardware and application errors.

• Hardware: This layer includes the actual hardware and the hardware controllers,
which interact with the device drivers and drive the hardware.
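
As referenced in the buffering item above, the same idea can be sketched in user space: data moves through a fixed-size buffer so a fast producer and a slow consumer (or devices with different transfer sizes) can still be connected. This is a minimal illustration, not the kernel's implementation:

/* buffer_copy.c - copy standard input to standard output through a buffer */
#include <unistd.h>

int main(void) {
    char buf[4096];                   /* the buffer absorbs speed mismatches */
    ssize_t n;
    /* read() may return fewer bytes than requested; loop until end of input */
    while ((n = read(0, buf, sizeof buf)) > 0)
        write(1, buf, (size_t)n);     /* drain the buffer to the output */
    return 0;
}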

Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
A device driver performs the following jobs:

• To accept requests from the device-independent software above it.
• To interact with the device controller to take and give I/O and perform the
required error handling.
• To make sure that the request is executed successfully.


Interrupt handlers
Interrupt handlers, also known as interrupt service routines (ISRs), are critical components in
an operating system (OS) that handle interrupts, which are signals to the processor indicating
that an event needs immediate attention.
Types of Interrupts
1. Hardware Interrupts:
• Generated by hardware devices to signal the processor about events such as
input from a keyboard, completion of a data transfer, or arrival of network
packets.
• Hardware interrupts are asynchronous and can occur at any time.
2. Software Interrupts:
• Generated by software, typically through system calls, to request OS services.
• Software interrupts are synchronous, occurring at predictable times during the
execution of a program.
Interrupt Handling Process
1. Interrupt Request:
• An interrupt request (IRQ) is sent to the processor, typically through a hardware
interrupt line.
• The processor stops executing the current instruction and saves its state (the
program counter, registers, etc.).
2. Interrupt Acknowledgment:
• The processor acknowledges the interrupt, often in conjunction with an
interrupt controller like the Programmable Interrupt Controller (PIC) or
Advanced Programmable Interrupt Controller (APIC).
3. Determine the Source:
• The interrupt controller or the processor determines the source of the interrupt
by examining the interrupt vector.
• The interrupt vector is a unique identifier associated with each interrupt source.
4. Execute the Interrupt Handler:
• The processor transfers control to the corresponding interrupt handler, which is
a specialized function designed to handle the specific interrupt.
• Interrupt handlers are typically part of the OS kernel and are written in a low-
level language like C or assembly.

5. Interrupt Service Routine (ISR):
• The ISR executes to handle the interrupt. This might involve reading data from a
device, processing an event, or performing necessary operations to address the
interrupt.
• The ISR must be efficient and quick to ensure the system's responsiveness.
6. Restore State and Resume:
• After handling the interrupt, the ISR restores the processor's state saved before
the interrupt occurred.
• The processor resumes execution of the interrupted program at the point where
it left off.
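
True interrupt handlers live inside the kernel, but POSIX signals give a user-space analogy to the save-handle-resume cycle described above (a sketch under that assumption, not kernel code):

/* signal_demo.c - a user-space analogy for interrupt handling */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_signal = 0;

void handler(int signum) {       /* like an ISR: keep it short and quick */
    (void)signum;
    got_signal = 1;              /* just record that the event happened */
}

int main(void) {
    signal(SIGINT, handler);     /* register the handler for Ctrl-C */
    while (!got_signal)
        pause();                 /* sleep until a signal arrives */
    printf("handled SIGINT; normal flow resumed\n");
    return 0;
}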
Disk Performance Parameters
Disk performance is crucial for a computer system's overall efficiency and speed. The key
parameters affecting disk performance include:
1. Seek Time:
• The time it takes for the disk’s read/write head to move to the track where the
desired data is located.
• Consists of the time to move the head across the disk surface.
2. Rotational Latency:
• The time it takes for the desired disk sector to rotate under the read/write
head.
• Depends on the disk's rotational speed, measured in revolutions per minute
(RPM).
3. Transfer Time:
• The time it takes to transfer data once the read/write head is positioned
correctly.
• Depends on the data transfer rate of the disk, typically measured in megabytes
per second (MB/s).
4. Disk Access Time:
• The total time it takes to begin reading or writing data, combining seek
time and rotational latency (a worked sketch follows this list).
• Access time = Seek Time + Rotational Latency
5. Throughput:
• The amount of data that can be read from or written to the disk in a given
period.

• Measured in megabytes per second (MB/s) or gigabytes per second (GB/s).
6. I/O Operations Per Second (IOPS):
• The number of read/write operations a disk can perform per second.
• Important for evaluating the performance of disks in environments with a high
volume of small transactions.
7. Queue Depth:
• The number of I/O operations that can be queued at the disk controller.
• Higher queue depth can improve throughput but may increase latency.
8. Cache Size:
• The amount of memory on the disk used to store frequently accessed data.
• Larger caches can improve performance by reducing the need to access the disk
media for frequently used data.
9. Average Seek Time:
• The average time for a disk head to move to any random track.
• Typically calculated as a weighted average of seek times for different distances.
10. Mean Time Between Failures (MTBF):
• A measure of disk reliability, indicating the average time the disk is expected to
operate before failing.
• Important for assessing the longevity and durability of the disk.
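
A worked sketch of the access-time formula from item 4, with illustrative numbers (the seek time and RPM are assumptions, not values from these notes): at 7200 RPM one revolution takes 60000/7200 ≈ 8.33 ms, so the average rotational latency is half a revolution, ≈ 4.17 ms.

/* access_time.c - average disk access time = seek + rotational latency */
#include <stdio.h>

int main(void) {
    double seek_ms = 9.0;                        /* assumed average seek time */
    double rpm = 7200.0;
    double revolution_ms = 60000.0 / rpm;        /* about 8.33 ms per turn */
    double rotational_ms = revolution_ms / 2.0;  /* average: half a turn */

    printf("access time = %.2f ms\n", seek_ms + rotational_ms);  /* ~13.17 */
    return 0;
}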
Performance Optimization
• Disk Scheduling Algorithms: Optimize the order of I/O requests to minimize seek time
and improve throughput (e.g., FCFS, SSTF, SCAN).
• Defragmentation: Rearrange fragmented data to improve access times (primarily for
HDDs).
• RAID Configurations: Combine multiple disks to improve redundancy, performance, or
both.
• Caching: Use disk caches and buffers to store frequently accessed data and improve
performance.
• Proper Sizing and Allocation: Ensure adequate disk space and appropriate partitioning
to optimize performance.
Disk Scheduling
Different types of scheduling algorithms are as follows.

1. First Come, First Served scheduling algorithm (FCFS). The simplest form of scheduling
is first-in-first-out (FIFO) scheduling, which processes items from the queue in
sequential order.
2. Shortest Seek Time First (SSTF) algorithm. The SSTF policy is to select the disk I/O
request that requires the least movement of the disk arm from its current position.
3. SCAN scheduling algorithm. The scan algorithm has the head start at track 0 and
moves towards the highest numbered track, servicing all requests for a track as it
passes the track.
The disk arm moves from one end of the disk to the other, servicing requests in one
direction, and then reverses direction at the end.
4. LOOK Scheduling Algorithm. Start the head moving in one direction and satisfy the
request for the closest track in that direction; when there are no more requests in
the direction the head is traveling, reverse direction and repeat.
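
A small sketch comparing total head movement under the first two algorithms above; the request queue and starting head position are assumed for illustration (they are the classic textbook values, not from these notes):

/* disk_sched.c - total head movement: FCFS vs SSTF */
#include <stdio.h>
#include <stdlib.h>

#define N 8

int main(void) {
    int req[N] = {98, 183, 37, 122, 14, 124, 65, 67};  /* assumed queue */
    int head = 53;                                     /* assumed start */

    /* FCFS: service requests strictly in arrival order */
    int moves = 0, pos = head;
    for (int i = 0; i < N; i++) {
        moves += abs(req[i] - pos);
        pos = req[i];
    }
    printf("FCFS total head movement: %d\n", moves);   /* 640 */

    /* SSTF: always pick the closest outstanding request */
    int done[N] = {0};
    moves = 0;
    pos = head;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 ||
                abs(req[i] - pos) < abs(req[best] - pos)))
                best = i;
        moves += abs(req[best] - pos);
        pos = req[best];
        done[best] = 1;
    }
    printf("SSTF total head movement: %d\n", moves);   /* 236 */
    return 0;
}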

Disk Management
The operating system is responsible for disk management. Following are some activities
involved.
1) Disk Formatting in OS
Disk formatting is the process of preparing a storage device such as a hard drive, SSD, or USB
flash drive for data storage. This involves setting up an empty file system on the disk, which
allows an operating system (OS) to read from and write to the disk. Here's a detailed
explanation of the process and its components:
1. Low-Level Formatting (Physical Formatting).
• Low-level formatting is the process of marking the surface of a disk with sectors
and tracks, creating the physical structure of the disk.
2. High-Level Formatting (Logical Formatting).
Common File Systems:
FAT32: Compatible across many systems but has limitations on file and partition sizes.
NTFS: Used by Windows, supports large files, security features, and recovery
capabilities.

2) Boot Block
The boot block is a critical component of a storage device, such as a hard drive or SSD, that
plays an essential role in the booting process of a computer system. It typically holds the
bootstrap loader: the small program that the firmware reads and executes at power-on to
load the rest of the operating system.

Unit IV – CPU Scheduling and Algorithms

Section 4.1 Scheduling Types

Scheduling Objectives
• Be Fair while allocating resources to the processes
• Maximize throughput of the system
• Maximize number of users receiving acceptable response times.
• Be predictable
• Balance resource use
• Avoid indefinite postponement
• Enforce Priorities
• Give preference to processes holding key resources
• Give better service to processes that have desirable behaviour patterns

CPU and I/O Burst Cycle:


• Process execution consists of a cycle of CPU execution and I/O wait.
• Processes alternate between these two states.
• Process execution begins with a CPU burst, followed by an I/O burst, then another CPU
burst ... etc
• The last CPU burst will end with a system request to terminate execution rather than
with another I/O burst.
• The durations of these CPU bursts have been measured.
• An I/O-bound program typically has many short CPU bursts; a CPU-bound
program might have a few very long CPU bursts.
• This can help to select an appropriate CPU-scheduling algorithm.

Preemptive Scheduling:
• Preemptive scheduling is used when a process switches from running state to ready
state or from waiting state to ready state.
• The resources (mainly CPU cycles) are allocated to the process for the limited amount
of time and then is taken away, and the process is again placed back in the ready queue
if that process still has CPU burst time remaining.
• That process stays in ready queue till it gets next chance to execute.

Non-Preemptive Scheduling:
• Non-preemptive scheduling is used when a process terminates, or a process switches
from the running to the waiting state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the
process holds the CPU till it terminates or reaches a waiting state.
• Non-preemptive scheduling does not interrupt a process running on the CPU in the
middle of its execution.
• Instead, it waits till the process completes its CPU burst time, and then it can
allocate the CPU to another process.

Preemptive vs Non-Preemptive Scheduling:

• Basic: In preemptive scheduling, the resources are allocated to a process for a
limited time. In non-preemptive scheduling, once resources are allocated to a
process, the process holds them till it completes its burst time or switches to
the waiting state.
• Interrupt: A preemptively scheduled process can be interrupted in between. A
non-preemptively scheduled process cannot be interrupted till it terminates or
switches to the waiting state.
• Starvation: In preemptive scheduling, if a high-priority process frequently
arrives in the ready queue, a low-priority process may starve. In non-preemptive
scheduling, if a process with a long burst time is running on the CPU, another
process with a shorter CPU burst time may starve.
• Overhead: Preemptive scheduling has the overhead of scheduling the processes;
non-preemptive scheduling does not.
• Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
• Cost: Preemptive scheduling has an associated cost; non-preemptive scheduling
does not.

Scheduling Criteria

• There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so
as to waste 0 CPU cycles. On a real system CPU usage should range from
40% ( lightly loaded ) to 90% ( heavily loaded. )
o Throughput - Number of processes completed per unit time. May range
from 10 / second to 1 / hour depending on the specific processes.

o Turnaround time - Time required for a particular process to complete,
from submission time to completion.
o Waiting time - How much time processes spend in the ready queue
waiting their turn to get on the CPU.
o Response time - The time taken in an interactive program from the
issuance of a command to the commencement of a response to that command.

In brief:
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time

4.2 Types of Scheduling Algorithm

(a) First Come First Serve (FCFS)


In FCFS Scheduling
• The process which arrives first in the ready queue is assigned the CPU first.
• In case of a tie, the process with the smaller process id is executed first.
• It is always non-preemptive in nature.
• Jobs are executed on a first come, first served basis.
• It is easy to understand and implement.
• Its implementation is based on a FIFO queue.
• It is poor in performance, as the average wait time is high.

Advantages-
• It is simple and easy to understand.
• It can be easily implemented using queue data structure.
• It does not lead to starvation.
Disadvantages-
• It does not consider the priority or burst time of the processes.
• It suffers from the convoy effect, i.e., processes with smaller burst times get
stuck behind processes with higher burst times that arrived earlier.

Example 1:
Consider the processes P1, P2, P3 given in the below table, arrives for execution in
the same order, with Arrival Time 0, and given Burst Time,
PROCESS ARRIVAL TIME BURST TIME
P1 0 24
P2 0 3
P3 0 3
Gantt chart

P1 P2 P3
0 24 27 30

PROCESS WAIT TIME TURN AROUND TIME
P1 0 24
P2 24 27
P3 27 30

Total Wait Time = 0 + 24 + 27 = 51 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms

Total Turn Around Time: 24 + 27 + 30 = 81 ms

Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= 81 / 3 = 27 ms
Throughput = 3 jobs/30 sec = 0.1 jobs/sec
Example 2:
Consider the processes P1, P2, P3, P4 given in the below table, arrives for execution
in the same order, with given Arrival Time and Burst Time.
PROCESS ARRIVAL TIME BURST TIME
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Gantt chart
P1 P2 P3 P4
0 8 12 21 26

PROCESS WAIT TIME TURN AROUND TIME


P1 0 8–0=8
P2 8–1=7 12 – 1 = 11
P3 12 – 2 = 10 21 – 2 = 19
P4 21 – 3 = 18 26 – 3 = 23

Total Wait Time:= 0 + 7 + 10 + 18 = 35 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes)= 35/4 = 8.75 ms

Total Turn Around Time: 8 + 11 + 19 + 23 = 61 ms

Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
61/4 = 15.25 ms

Throughput: 4 jobs/26 sec = 0.15385 jobs/sec
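
The FCFS computation used above is mechanical enough to sketch in a few lines of C; the data is taken from the example just above (processes are assumed to be listed in arrival order, as FCFS requires):

/* fcfs.c - waiting and turnaround times under FCFS */
#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {8, 4, 9, 5};
    int time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < N; i++) {                 /* serve in arrival order */
        if (time < arrival[i])
            time = arrival[i];                    /* the CPU may sit idle */
        time += burst[i];                         /* completion time of Pi */
        int tat  = time - arrival[i];             /* completion - arrival */
        int wait = tat - burst[i];                /* turnaround - burst */
        printf("P%d: wait=%d tat=%d\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat  += tat;
    }
    printf("avg wait=%.2f avg tat=%.2f\n", total_wait / N, total_tat / N);
    return 0;
}

Running it reproduces the table above: average waiting time 8.75 ms and average turnaround time 15.25 ms.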

(b) Shortest Job First (SJF)
• The process which has the shortest burst time is scheduled first.
• If two processes have the same burst time, then FCFS is used to break the tie.
• It can be used in both non-preemptive and preemptive mode.
• It is the best approach to minimize waiting time.
• It is easy to implement in batch systems where the required CPU time is known
in advance.
• It is impossible to implement in interactive systems where the required CPU time
is not known.
• The processor should know in advance how much time a process will take.
• The preemptive mode of Shortest Job First is called Shortest Remaining Time
First (SRTF).

Advantages-
• SRTF is optimal and guarantees the minimum average waiting time.
• It provides a standard for other algorithms since no other algorithm performs
better than it.

Disadvantages-
• It cannot be implemented practically, since the burst time of the processes
cannot be known in advance.
• It leads to starvation for processes with larger burst times.
• Priorities cannot be set for the processes.
• Processes with larger burst times have poor response time.

Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting
time and average turnaround time.
Solution-
Gantt Chart-

P4 P1 P3 P5 P2
0 6 7 9 12 16

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time
P1 7 7–3=4 4–1=3
P2 16 16 – 1 = 15 15 – 4 = 11
P3 9 9–4=5 5–2=3
P4 6 6–0=6 6–6=0
P5 12 12 – 2 = 10 10 – 3 = 7
Now,
• Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
• Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit

Example-02:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
If the CPU scheduling policy is SJF pre-emptive, calculate the average waiting time and
average turnaround time.
Solution-
Gantt Chart-

P4 P2 P1 P2 P3 P5 P4
0 1 3 4 6 8 11 16

Process Id Exit time Turn Around time Waiting time


P1 4 4–3=1 1–1=0
P2 6 6–1=5 5–4=1
P3 8 8–4=4 4–2=2
P4 16 16 – 0 = 16 16 – 6 = 10
P5 11 11 – 2 = 9 9–3=6

Now,

• Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit


• Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit

Example-03:

Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 7
P2 1 5
P3 2 3
P4 3 1
P5 4 2
P6 5 1

If the CPU scheduling policy is shortest remaining time first, calculate the average
waiting time and average turnaround time.
Solution-
Gantt Chart-

P1 P2 P3 P4 P3 P6 P5 P2 P1
0 1 2 3 4 6 7 9 13 19

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 19 19 – 0 = 19 19 – 7 = 12
P2 13 13 – 1 = 12 12 – 5 = 7
P3 6 6–2=4 4–3=1
P4 4 4–3=1 1–1=0
P5 9 9–4=5 5–2=3
P6 7 7–5=2 2–1=1

Now,
• Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit
• Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit

Example -04:

Consider the set of 3 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 9
P2 1 4
P3 2 9

If the CPU scheduling policy is SRTF, calculate the average waiting time and average
turn around time.

Solution-
Gantt Chart-

P1 P2 P1 P3
0 1 5 13 22

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 13 13 – 0 = 13 13 – 9 = 4
P2 5 5–1=4 4–4=0
P3 22 22 – 2 = 20 20 – 9 = 11

Now,
• Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
• Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit

Example-05:

Consider the set of 4 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 20
P2 15 25
P3 30 10
P4 45 15

If the CPU scheduling policy is SRTF, calculate the waiting time of process P2.

Solution-

Gantt Chart-

P1 P2 P3 P2 P4
0 20 30 40 55 70

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Thus,
• Turn Around Time of process P2 = 55 – 15 = 40 unit
• Waiting time of process P2 = 40 – 25 = 15 unit
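
SRTF can be sketched as a unit-time simulation in C; the data below replays Example-04 above (exits 13, 5, 22). Note the sketch assumes burst times are known in advance, which is exactly the practical limitation pointed out earlier:

/* srtf.c - Shortest Remaining Time First, one time unit per step */
#include <stdio.h>

#define N 3

int main(void) {
    int arrival[N] = {0, 1, 2};
    int burst[N]   = {9, 4, 9};
    int rem[N], done = 0;
    for (int i = 0; i < N; i++) rem[i] = burst[i];

    for (int t = 0; done < N; t++) {
        int pick = -1;
        for (int i = 0; i < N; i++)              /* smallest remaining time wins; */
            if (arrival[i] <= t && rem[i] > 0 && /* ties go to the earlier process */
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) continue;                  /* nothing ready: CPU stays idle */
        if (--rem[pick] == 0) {                  /* run the chosen process 1 unit */
            int tat = t + 1 - arrival[pick];
            printf("P%d: exit=%d tat=%d wait=%d\n",
                   pick + 1, t + 1, tat, tat - burst[pick]);
            done++;
        }
    }
    return 0;
}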

(c) Round Robin Scheduling


• CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
• This fixed amount of time is called as time quantum or time slice.
• After the time quantum expires, the running process is preempted and sent to the
ready queue.
• Then, the processor is assigned to the next arrived process.
• It is always preemptive in nature.

Advantages-

• It gives the best performance in terms of average response time.


• It is best suited for time sharing system, client server architecture and
interactive system.

Disadvantages-

• It leads to starvation for processes with larger burst time as they have to repeat
the cycle many times.
• Its performance heavily depends on time quantum.
• Priorities can not be set for the processes.

With decreasing value of time quantum,

• The number of context switches increases.
• Response time decreases.
• Chances of starvation decrease.

Thus, a smaller value of time quantum is better in terms of response time.

With increasing value of time quantum,

• The number of context switches decreases.
• Response time increases.
• Chances of starvation increase.

Thus, a higher value of time quantum is better in terms of the number of context switches.

• With increasing value of time quantum, Round Robin Scheduling tends to


become FCFS Scheduling.
• When time quantum tends to infinity, Round Robin Scheduling becomes FCFS
Scheduling.
• The performance of Round Robin scheduling heavily depends on the value of
time quantum.
• The value of time quantum should be such that it is neither too big nor too
small.
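
A minimal Round Robin sketch in C with quantum 2, using the data of Example-01 below. One modelling assumption matters: a process that arrives exactly when a quantum expires joins the ready queue before the preempted process, which is the convention used in the worked answers:

/* rr.c - Round Robin with a circular ready queue, time quantum 2.
   Assumes no idle gaps once the first process arrives (true here). */
#include <stdio.h>

#define N 5
#define Q 2

int main(void) {
    int arrival[N] = {0, 1, 2, 3, 4};
    int burst[N]   = {5, 3, 1, 2, 3};
    int rem[N], queue[64], head = 0, tail = 0, next = 0, t = 0;
    for (int i = 0; i < N; i++) rem[i] = burst[i];

    queue[tail++] = 0;                       /* P1 arrives at t = 0 */
    next = 1;
    while (head < tail) {
        int p = queue[head++];               /* dequeue the front process */
        int run = rem[p] < Q ? rem[p] : Q;   /* at most one quantum */
        t += run;
        rem[p] -= run;
        while (next < N && arrival[next] <= t)
            queue[tail++] = next++;          /* admit new arrivals first */
        if (rem[p] > 0) {
            queue[tail++] = p;               /* preempted: back of the queue */
        } else {
            int tat = t - arrival[p];
            printf("P%d: exit=%d tat=%d wait=%d\n",
                   p + 1, t, tat, tat - burst[p]);
        }
    }
    return 0;
}

Its output matches the table of Example-01: exit times 13, 12, 5, 9 and 14.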

Example-01:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate
the average waiting time and average turnaround time.
Solution-
Ready Queue- P5, P1, P2, P5, P4, P1, P3, P2, P1
Gantt Chart-

P1 P2 P3 P1 P4 P5 P2 P1 P5
0 2 4 5 7 9 11 12 13 14

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 13 13 – 0 = 13 13 – 5 = 8
P2 12 12 – 1 = 11 11 – 3 = 8
P3 5 5–2=3 3–1=2
P4 9 9–3=6 6–2=4
P5 14 14 – 4 = 10 10 – 3 = 7
Now,
• Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
• Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time

P1 0 4

P2 1 5

P3 2 2

P4 3 1

P5 4 6

P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average
waiting time and average turnaround time.
Solution-
Ready Queue- P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1
Gantt chart-

P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
0 2 4 6 8 9 11 13 15 17 18 19 21

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time
P1 8 8–0=8 8–4=4
P2 18 18 – 1 = 17 17 – 5 = 12
P3 6 6–2=4 4–2=2
P4 9 9–3=6 6–1=5
P5 21 21 – 4 = 17 17 – 6 = 11
P6 19 19 – 6 = 13 13 – 3 = 10
Now,
• Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 unit
• Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit
Problem-03: Consider the set of 6 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time
P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the
average waiting time and average turnaround time.
Solution-
Ready Queue- P3, P1, P4, P2, P3, P6, P1, P4, P2, P3, P5, P4
Gantt chart- (the CPU is idle from 0 to 1, since the first process arrives at t = 1)

– P4 P5 P3 P2 P4 P1 P6 P3 P2 P4 P1 P3
0 1 4 6 9 12 15 18 21 24 27 30 32 33

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 32 32 – 5 = 27 27 – 5 = 22
P2 27 27 – 4 = 23 23 – 6 = 17
P3 33 33 – 3 = 30 30 – 7 = 23
P4 30 30 – 1 = 29 29 – 9 = 20
P5 6 6–2=4 4–2=2
P6 21 21 – 6 = 15 15 – 3 = 12

Now,

• Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit


• Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit

(d) Priority Scheduling


• Out of all the available processes, CPU is assigned to the process having the
highest priority.
• In case of a tie, it is broken by FCFS Scheduling.
• Priority Scheduling can be used in both preemptive and non-preemptive mode.

• The waiting time for the process having the highest priority will always be zero in
preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-
preemptive mode.
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions-
• The arrival time of all the processes is the same
• All the processes become available for execution at the same time, so no later arrival can preempt a running process
Advantages-
• It considers the priority of the processes and allows the important processes to run first.
• Priority scheduling in preemptive mode is best suited for real-time operating systems.
Disadvantages-
• Processes with lower priority may starve for the CPU.
• The response time and waiting time of a process cannot be predicted or guaranteed.
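Before the worked problems, here is a minimal sketch of non-preemptive priority scheduling in Python (not part of the original notes; the function name and tuple format are illustrative). It assumes a higher number means higher priority and breaks ties by FCFS, matching the problems below.

def priority_np(processes):
    # processes: list of (pid, arrival, burst, priority) tuples.
    # Returns {pid: (exit_time, turnaround_time, waiting_time)}.
    remaining = list(processes)
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # Highest priority first; the earlier arrival wins a tie (FCFS).
        job = max(ready, key=lambda p: (p[3], -p[1]))
        pid, at, bt, _ = job
        time += bt                          # runs to completion: no preemption
        tat = time - at
        results[pid] = (time, tat, tat - bt)
        remaining.remove(job)
    return results

Running it on Problem-01's data reproduces the exit times in the solution below.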

Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority)

Solution-
Gantt Chart-
P1 (0-4) | P4 (4-9) | P5 (9-11) | P3 (11-12) | P2 (12-15)
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 4 4 – 0 = 4 4 – 4 = 0
P2 15 15 – 1 = 14 14 – 3 = 11
P3 12 12 – 2 = 10 10 – 1 = 9
P4 9 9 – 3 = 6 6 – 5 = 1
P5 11 11 – 4 = 7 7 – 2 = 5
Now,
• Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
• Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit

Problem-02: Consider the set of 5 processes whose arrival time and burst time are
given below-
Process Id Arrival time Burst time Priority
P1 0 4 2
P2 1 3 3
P3 2 1 4
P4 3 5 5
P5 4 2 5
If the CPU scheduling policy is priority preemptive, calculate the average waiting
time and average turn around time. (Higher number represents higher priority).
Solution-
Gantt Chart-
P1 (0-1) | P2 (1-2) | P3 (2-3) | P4 (3-8) | P5 (8-10) | P2 (10-12) | P1 (12-15)
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3 – 2 = 1 1 – 1 = 0
P4 8 8 – 3 = 5 5 – 5 = 0
P5 10 10 – 4 = 6 6 – 2 = 4

Now,
• Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
• Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit

(e) Multilevel Queue Scheduling


A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for
example, could run unless the queues for system processes, interactive processes, and interactive
editing processes were all empty. If an interactive editing process entered the ready queue while
a batch process was running, the batch process would be preempted. This selection rule is sketched below.
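A minimal sketch of the selection rule in Python (the queue names follow the example above; the data structures are illustrative, not part of the original notes):

from collections import deque

# Queues listed in decreasing order of priority; dicts preserve insertion order.
queues = {
    "system": deque(),
    "interactive": deque(),
    "interactive editing": deque(),
    "batch": deque(),
    "student": deque(),
}

def next_process():
    # Serve a queue only when every higher-priority queue is empty.
    for q in queues.values():
        if q:
            return q.popleft()
    return None   # all queues empty: the CPU stays idle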

4.3 Deadlock
• Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
• For example, consider two processes and two resources: Process 1 is holding Resource 1 and waiting for Resource 2, which has been acquired by Process 2, while Process 2 is waiting for Resource 1.

Deadlock can arise if the following four necessary conditions hold simultaneously.
1. Mutual Exclusion: At least one resource is non-sharable, meaning only one process can use it at a time.
2. Hold and Wait: A process is holding at least one resource while waiting to acquire additional resources.
3. No Pre-emption: A resource cannot be forcibly taken from a process; it is released only voluntarily, once the process holding it has finished with it.
4. Circular Wait: A set of processes wait for each other in a circular chain, such that each process waits for a resource held by the next, and the last process waits for a resource held by the first.
Difference between Starvation and Deadlock
Sr. | Deadlock | Starvation
1 | Deadlock is a situation where processes get blocked and no process proceeds. | Starvation is a situation where low-priority processes get blocked while high-priority processes proceed.
2 | Deadlock is an infinite waiting. | Starvation is a long waiting, but not infinite.
3 | Every deadlock is also a starvation. | Every starvation need not be a deadlock.
4 | The requested resource is held by another process that is itself blocked. | The requested resource is continuously used by higher-priority processes.
5 | Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously. | Starvation occurs due to uncontrolled priority and resource management.

Deadlock Handling
The various strategies for handling deadlock are-
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance
1. Deadlock Prevention
• Deadlocks can be prevented by preventing at least one of the four required
conditions:
Mutual Exclusion
• Shared resources such as read-only files do not lead to deadlocks.
• Unfortunately, some resources, such as printers and tape drives, require exclusive
access by a single process.
Hold and Wait
• To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others.

No Preemption
• Preemption of process resource allocations can prevent this condition of deadlocks, when it is possible.
Circular Wait
• One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing (or decreasing) order, as the sketch below illustrates.
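A minimal sketch of resource ordering with two locks in Python (the resources are hypothetical, not from the notes): because every thread requests resource #1 before resource #2, a cycle of waits can never form.

import threading

resource_1 = threading.Lock()   # resource #1 in the global ordering
resource_2 = threading.Lock()   # resource #2 in the global ordering

def worker():
    with resource_1:            # always acquire the lower-numbered resource first
        with resource_2:        # ...then the higher-numbered one
            pass                # critical section using both resources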
2. Deadlock Avoidance
• In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs.
• The process continues as long as the system remains in a safe state.
• Once the system moves to an unsafe state, the OS has to backtrack one step.
• In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system, as in the safety-check sketch below.
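The classic safe-state test is the Banker's algorithm. The following is a sketch of its safety check in Python (the data layout is illustrative, not from the notes):

def is_safe(available, allocation, need):
    # available:  free units of each resource type, e.g. [3, 2].
    # allocation: allocation[i] = units currently held by process i.
    # need:       need[i] = units process i may still request.
    # Returns True if some completion order lets every process finish.
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion with what is free,
                # then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

An allocation is granted only tentatively; if the resulting state fails this check, the OS backtracks and makes the requesting process wait.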

3. Deadlock detection and recovery


• This strategy involves waiting until a deadlock occurs.
• After a deadlock occurs, the system state is recovered.
• The main challenge with this approach is detecting the deadlock, for example by finding a cycle in a wait-for graph, as sketched below.
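For single-instance resources, detection reduces to finding a cycle in a wait-for graph. A minimal sketch in Python (the graph encoding is illustrative, not from the notes):

def has_deadlock(wait_for):
    # wait_for maps each process to the process it waits on,
    # or None if it is not blocked, e.g. {"P1": "P2", "P2": "P1"}.
    for start in wait_for:
        seen, p = set(), start
        while p is not None:
            if p in seen:
                return True      # the chain came back on itself: a cycle
            seen.add(p)
            p = wait_for.get(p)
    return False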

4. Deadlock Ignorance
• This strategy involves ignoring the concept of deadlock and assuming it does not exist.
• This strategy avoids the extra overhead of handling deadlocks.
• Windows and Linux use this strategy, and it is the most widely used approach.

MEMORY MANAGEMENT
Memory Allocation:
Memory allocation refers to the process of assigning memory space to programs and
processes running on a computer system. There are several memory allocation
techniques used by operating systems to manage memory efficiently. Some of the
common memory allocation techniques include contiguous allocation, paging, and
swapping.
1. Contiguous Allocation:
In contiguous allocation, each process is allocated a contiguous block of memory.
This means that the entire memory space required by a process must be available as
a single block of consecutive memory addresses. Contiguous allocation is simple and
efficient in terms of memory access since there is no need for translation of
addresses. However, it can lead to fragmentation, where small gaps of unusable
memory form between allocated blocks, reducing overall memory utilization.
Advantages:

• Efficiency: Memory access is efficient since the entire process is stored in a single
contiguous block.
• Simplicity: It's relatively simple to implement compared to other techniques.
• No Overhead: There is no overhead for managing page tables or fragmentation.
Disadvantages:
• Fragmentation: External fragmentation can occur when memory is allocated and
deallocated over time, leading to inefficient use of memory.
• Limited Flexibility: It's challenging to allocate memory for processes with varying
sizes due to fragmentation.

TYPES OF CONTIGUOUS ALLOCATION


1. Single Partition Allocation:
In single partition allocation, the entire memory is divided into two partitions: one
for the operating system and the other for user processes. Only one user process can
run at a time. When a process is loaded into memory, it occupies the entire user
partition.
Example: Single Partition Allocation
Suppose we have a computer system with 1000 bytes of available memory. In single
partition allocation:
• OS Partition: The operating system (OS) is loaded into the first partition, typically at
the lowest memory addresses.
• User Partition: The remaining memory is allocated to user processes.

In this scenario:
• The operating system occupies the memory addresses from 0 to 199.
• The user processes are allocated memory addresses from 200 to 999.
• Each user process is loaded into the entire user partition when it is running.
Advantages:
• Straightforward implementation.

• Easy to manage since only one process runs at a time.
Disadvantages:
• Inefficient memory use, as only one process can run at a time.
• Limited multitasking capability, as multiple processes cannot run concurrently.
2. Multiple Partition Allocation:
In multiple partition allocation, memory is divided into multiple fixed-size partitions.
Each partition can accommodate one process. When a process arrives, it is allocated
memory from a free partition that is large enough to hold it.
Example: Multiple Partition Allocation
Suppose we have a computer system with 1000 bytes of available memory, and we
divide it into two partitions of 500 bytes each:
• When a process arrives requiring 300 bytes, it is allocated into Partition 1.
• If another process requiring 200 bytes arrives, it can be allocated into Partition 2.
• If a process requiring more memory than the size of any partition arrives, it cannot
be accommodated.
Advantages:
• Allows for better memory utilization compared to single partition allocation.
• Supports multitasking by allowing multiple processes to run concurrently.
Disadvantages:
• External fragmentation can occur as processes are loaded and unloaded, leaving
small unusable gaps between partitions.
• It's challenging to accommodate processes of varying sizes efficiently, especially if the
available memory becomes fragmented.
3. Allocation Strategies:
In both single and multiple partition allocation, various allocation strategies can be
used to assign processes to memory partitions:
• First Fit: The operating system allocates the first available partition that is large
enough to hold the process.

• Best Fit: The operating system allocates the smallest available partition that is large
enough to hold the process.
• Worst Fit: The operating system allocates the largest available partition.

Advantages and Disadvantages of Allocation Strategies:
• First Fit:
• Simple to implement.
• May lead to less fragmentation compared to Best Fit.
• May waste some space if the first available partition is significantly larger than
the process.
• Best Fit:
• Reduces wastage by selecting the smallest partition that fits the process.
• May lead to more fragmentation compared to First Fit.
• Requires additional time for searching the entire list of partitions for the best
fit.
• Worst Fit:
• Reduces fragmentation by using larger partitions.
• May lead to more wastage compared to First and Best Fit.
• Like Best Fit, requires additional time for searching.
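All three strategies can be sketched in a few lines of Python (the partition encoding and function name are illustrative, not from the notes):

def pick_partition(partitions, size, strategy="first"):
    # partitions: list of (is_free, capacity) tuples.
    # Returns the index of the chosen partition, or None if nothing fits.
    fits = [i for i, (free, cap) in enumerate(partitions)
            if free and cap >= size]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]                                    # first hole that fits
    if strategy == "best":
        return min(fits, key=lambda i: partitions[i][1])  # tightest fit
    if strategy == "worst":
        return max(fits, key=lambda i: partitions[i][1])  # largest hole
    raise ValueError("unknown strategy: " + strategy)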

2. Paging:
Paging is a memory management scheme that divides physical memory into fixed-size blocks
called frames, and divides logical memory into blocks of the same size called pages. When a
process is loaded into memory, its pages are mapped to free frames in physical memory using a
page table. Paging allows for more efficient memory management and helps overcome the
fragmentation problems associated with contiguous allocation.


1. Physical Memory:
Imagine physical memory (RAM) as a series of frames, each capable of holding one page
of data. In our example, we have 8 frames in physical memory.

2. Logical Memory:
Logical memory, on the other hand, consists of a series of pages, each of the same size
as a frame in physical memory. In our example, each page is 4KB in size, and we have 16
pages of data.

3. Page Table:
The page table is used by the operating system to map logical pages to physical frames. It
contains an entry for each page, indicating which frame it is currently located in.

4. Memory Access:
When a program accesses memory, it does so using logical addresses. The operating
system translates these logical addresses into physical addresses using the page table.
For example, if the program accesses Page 2 and the page table indicates that Page 2 is
currently in Frame 5, the operating system maps Page 2 to Frame 5 and retrieves the
data.
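The translation step can be sketched in a few lines of Python (the page table contents are illustrative, not from the notes; they place Page 2 in Frame 5 as in the example above):

PAGE_SIZE = 4096   # 4 KB pages, as in the example

page_table = {0: 3, 1: 7, 2: 5}   # logical page number -> physical frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: page %d is not in memory" % page)
    return page_table[page] * PAGE_SIZE + offset

# An access inside Page 2 lands at the same offset inside Frame 5:
assert translate(2 * PAGE_SIZE + 100) == 5 * PAGE_SIZE + 100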

Advantages:
• No External Fragmentation: Paging eliminates external fragmentation by dividing
memory into fixed-size blocks (pages).
• Flexible Allocation: Processes can be allocated memory in non-contiguous chunks,
allowing for more flexible memory management.
• Simpler Address Translation: Address translation is simplified since it involves
mapping logical page numbers to physical frame numbers.
Disadvantages:
• Internal Fragmentation: Paging can suffer from internal fragmentation if the last
page of a process does not fully utilize its allocated frame.
• Page Table Overhead: Maintaining page tables can consume additional memory and
CPU resources.
• Page Faults: Paging introduces the concept of page faults, which can lead to
performance overhead if not managed efficiently.
3. Swapping:
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory (moved) to secondary storage (disk), making that memory available to other
processes. At some later time, the system swaps the process back from secondary
storage into main memory.

More generally, swapping is a technique used by operating systems to manage memory by
moving pages of data between main memory (RAM) and secondary storage (usually a hard
disk) when they are not actively being used. When the operating system detects that
the amount of free memory is low, it selects some pages that are not currently being
used and swaps them out to secondary storage to free up space in RAM. Later, when the
swapped-out pages are needed again, the operating system swaps them back into main
memory. Swapping allows for more efficient memory usage by letting the operating
system use secondary storage as an extension of RAM when necessary.

Advantages:
• Increases Effective Memory Size: Swapping allows the operating system to
effectively increase the amount of available memory by using secondary storage as
an extension of RAM.
• Better Memory Utilization: It helps in better utilization of physical memory by
moving inactive pages out to secondary storage.
• Allows Multi-programming: Swapping enables multi-programming by allowing more
processes to be loaded into memory than would otherwise fit.
Disadvantages:

• Performance Overhead: Swapping introduces overhead due to the time taken to
move pages between main memory and secondary storage.
• Disk I/O Bottleneck: Excessive swapping can lead to a bottleneck on disk I/O,
especially if the secondary storage is slower than RAM.
• Complexity: Managing swapping requires complex algorithms to decide which pages
to swap and when, as well as to handle page faults efficiently.

FRAGMENTATION
Fragmentation in operating systems occurs when memory is allocated and deallocated in a
way that leaves unusable memory fragments scattered throughout the system. There are
two main types of fragmentation: external fragmentation and internal fragmentation.
1. External Fragmentation:
• External fragmentation occurs when there is enough total memory space to
satisfy a request, but it is not contiguous.
• This type of fragmentation typically occurs in systems that use contiguous
memory allocation techniques, such as fixed or dynamic partitioning.
• As memory is allocated and deallocated, free memory blocks become
scattered throughout the memory space, leaving small unusable gaps
between allocated blocks.
• External fragmentation can lead to inefficient memory utilization since
available memory cannot be used if it is fragmented into smaller pieces.
2. Internal Fragmentation:
• Internal fragmentation occurs when allocated memory is larger than what the
process actually needs.
• This typically happens in systems that allocate memory in fixed-size blocks or
segments.
• When a process is allocated memory, it may be given a larger block than
necessary, leading to wasted space within that block.
• Although the entire allocated block is used by the process, the extra space
within it is not utilized efficiently, resulting in internal fragmentation.
DISADVANTAGES OF FRAGMENTATION:
• Reduced Memory Utilization: Both external and internal fragmentation lead to
inefficient use of memory, as some memory space becomes unusable.

• Performance Degradation: Fragmentation can degrade system performance. For
example, accessing fragmented memory may require additional time for memory
management operations like searching for contiguous blocks or moving data around.
• Memory Management Overhead: Fragmentation may require additional overhead
for memory management. For instance, the operating system may need to
implement complex algorithms to handle fragmentation, which can consume CPU
cycles and memory resources.
• Increased Swapping: Swapping may occur more frequently in systems with high
fragmentation as the operating system tries to free up contiguous memory space by
moving pages to secondary storage.

QUESTIONS
1. What is an operating system?
2. Why is an operating system necessary for a computer?
3. What are the main functions of an operating system?
4. Define process in the context of an operating system.
5. What is process management and why is it important?
6. Explain the concept of multitasking in an operating system.
7. Differentiate between process and thread.
8. What is a process control block (PCB)?
9. How does an operating system handle process scheduling?
10. What are the criteria for selecting a scheduling algorithm?
11. Describe the difference between preemptive and non-preemptive scheduling.
12. What is CPU burst time and how does it relate to scheduling?
13. Explain the terms "context switch" and "dispatch latency."
14. How does an operating system manage processes in a multi-user environment?
15. What is process synchronization and why is it necessary?
16. Define deadlock in the context of process management.
17. How does an operating system prevent or resolve deadlocks?
18. What is a semaphore and how is it used for synchronization?
19. Explain the concept of inter-process communication (IPC).
20. What are some common IPC mechanisms used by operating systems?

QUESTIONS WITH ANSWERS
1. What is an operating system?
Answer: An operating system is software that acts as an intermediary between computer hardware and user applications. It manages computer resources, provides essential services, and facilitates communication between software and hardware components.
2. Why is an operating system necessary for a computer?
Answer: An operating system is necessary for a computer because it provides a user-friendly interface, manages hardware resources efficiently, facilitates multitasking, enables software execution, ensures security, and offers various services such as file management and networking.
3. What are the main functions of an operating system?
Answer: The main functions of an operating system include process management, memory management, file system management, device management, security and access control, user interface management, and networking.
4. Define process in the context of an operating system.
Answer: A process is an instance of a running program. It consists of the program code, program counter, registers, stack, heap, and other necessary data. Processes are managed by the operating system and can execute concurrently.
5. What is process management and why is it important?
Answer: Process management involves creating, scheduling, terminating, and controlling processes. It is important for efficient utilization of CPU resources, ensuring fair access to resources, providing multitasking capabilities, and facilitating concurrent execution of multiple programs.
6. Explain the concept of multitasking in an operating system.
Answer: Multitasking is the ability of an operating system to execute multiple processes concurrently on a single CPU. It allows users to run multiple programs simultaneously and provides the illusion of parallel execution by rapidly switching between processes.
7. Differentiate between process and thread.
Answer: A process is an independent entity that runs in its own memory space, whereas a thread is a lightweight execution unit within a process. Multiple threads can exist within a single process and share the same memory space, enabling concurrent execution.
8. What is a process control block (PCB)?
Answer: A process control block (PCB) is a data structure used by the operating system to store information about a process. It contains essential details such as process state, program counter, CPU registers, memory allocation, and scheduling information.
9. How does an operating system handle process scheduling?
Answer: The operating system uses process scheduling algorithms to determine the order in which processes are executed on the CPU. It selects processes from the ready queue and allocates CPU time based on scheduling policies and priorities.
10. What are the criteria for selecting a scheduling algorithm?
Answer: Criteria for selecting a scheduling algorithm include CPU utilization, throughput, turnaround time, waiting time, response time, fairness, and scalability.
11. Describe the difference between preemptive and non-preemptive scheduling.
Answer: Preemptive scheduling allows the operating system to interrupt a currently running process to allocate CPU time to a higher-priority process. Non-preemptive scheduling does not allow such interruptions and lets processes run until they voluntarily yield the CPU.
12. What is CPU burst time and how does it relate to scheduling?
Answer: CPU burst time is the amount of time a process spends executing on the CPU without being interrupted. It is a crucial factor in scheduling algorithms, as scheduling decisions are often based on predictions of future CPU burst times.
13. Explain the terms "context switch" and "dispatch latency."
Answer: A context switch is the process of saving the state of a currently running process, loading the state of another process, and transferring control from one process to another. Dispatch latency refers to the time taken by the operating system to perform a context switch.
14. How does an operating system manage processes in a multi-user environment?
Answer: In a multi-user environment, the operating system allocates resources fairly among multiple users, enforces access control mechanisms to protect user data, and provides facilities for user authentication, session management, and inter-process communication.
15. What is process synchronization and why is it necessary?
Answer: Process synchronization is the coordination of multiple processes to ensure that they cooperate and share resources in a controlled manner. It is necessary to prevent race conditions, avoid data inconsistency, and maintain system integrity.
16. Define deadlock in the context of process management.
Answer: Deadlock occurs when two or more processes are unable to proceed because each is waiting for a resource held by the other. Deadlocks can lead to system-wide resource starvation and must be prevented or resolved by the operating system.
17. How does an operating system prevent or resolve deadlocks?
Answer: Operating systems prevent or resolve deadlocks using techniques such as resource allocation policies, deadlock detection algorithms, deadlock avoidance strategies, and deadlock recovery mechanisms like process termination or resource preemption.
18. What is a semaphore and how is it used for synchronization?
Answer: A semaphore is a synchronization primitive used to control access to shared resources by multiple processes or threads. It can be used to implement mutual exclusion, synchronization, and signaling mechanisms by providing atomic operations such as wait (P) and signal (V).
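A minimal illustration in Python, using the standard library's counting semaphore (a sketch, not part of the original notes):

import threading

slots = threading.Semaphore(2)   # two identical resource units

def worker(name):
    slots.acquire()              # wait (P): blocks while no unit is free
    try:
        print(name, "is using a resource unit")
    finally:
        slots.release()          # signal (V): hand the unit back

threads = [threading.Thread(target=worker, args=("T%d" % i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()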
19. Explain the concept of inter-process communication (IPC).
Answer: Inter-process communication (IPC) refers to mechanisms used by processes to exchange data, synchronize their actions, and communicate with each other. IPC allows processes to cooperate, coordinate, and share information in a multi-tasking environment.
20. What are some common IPC mechanisms used by operating systems?
Answer: Common IPC mechanisms include pipes, sockets, message queues, shared memory, signals, semaphores, and remote procedure calls (RPC). These mechanisms facilitate communication and synchronization between processes running on the same system or across networked computers.

