Operating System Unit 1 & 2 Intro Notes


Operating system

UNIT - I
I. Introduction:

An Operating System (OS) is software that acts as an interface between computer hardware
and the user. It is the software required to manage and operate computing devices such as
smartphones, tablets, computers, supercomputers, web servers, cars, network towers,
smartwatches, etc. The operating system eliminates the need to know a programming language
in order to interact with computing devices.

For the most part, the IT industry focuses on five OSs: Apple macOS, Microsoft Windows,
Google's Android, Linux, and Apple iOS.

Functions of Operating system – An operating system performs four main functions:

 Convenience: An OS makes a computer more convenient to use.
 Efficiency: An OS allows the computer system's resources to be used efficiently.
 Ability to Evolve: An OS should be constructed so as to permit the effective
development, testing, and introduction of new system functions without interfering with
existing services.
 Throughput: An OS should be constructed so that it gives maximum throughput (the
number of tasks completed per unit time).
Major Functionalities of Operating System: 
 Resource Management: When multiple users access the system concurrently, the OS acts
as a resource manager; its responsibility is to share the hardware among the users. This
decreases the load on the system.
 Process Management: This includes tasks such as scheduling and termination of
processes. The OS manages many tasks at a time, and CPU scheduling decides, using
various scheduling algorithms, which task runs when.
 Storage Management: A file system mechanism is used for the management of storage.
NTFS, CIFS, NFS, etc. are some file systems. All data is stored on the various tracks of
hard disks, all managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The operating
system has to keep track of how much memory has been used and by whom. It has to decide
which process needs memory space and how much. The OS also has to allocate and
deallocate memory space.
 Security/Privacy Management: Privacy is also provided by the operating system by
means of passwords, so that unauthorized applications cannot access programs or data. For
example, Windows uses Kerberos authentication to prevent unauthorized access to data.
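The CPU scheduling mentioned under process management can be illustrated with the simplest policy, first-come-first-served (FCFS). A minimal sketch; the burst times are the classic textbook example, not a real workload:

```python
# First-come-first-served (FCFS) scheduling sketch: processes run in
# arrival order, and each one waits for all earlier bursts to finish.

def fcfs_waiting_times(burst_times):
    """Return the waiting time of each process under FCFS."""
    waits = []
    elapsed = 0
    for burst in burst_times:
        waits.append(elapsed)   # process waits for everything before it
        elapsed += burst        # then occupies the CPU for its burst
    return waits

bursts = [24, 3, 3]             # illustrative burst times (ms)
waits = fcfs_waiting_times(bursts)
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

Reordering the same bursts as [3, 3, 24] drops the average wait to 3.0, which is why scheduling algorithms matter.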
The operating system as a user interface: 

1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system programs,
and application programs. The hardware consists of the memory, CPU, ALU, I/O devices,
peripheral devices, and storage devices. System programs include compilers, loaders,
editors, the OS itself, etc. Application programs include business programs, database
programs, and so on. 

Types of Operating System  


 Batch Operating System – executes a sequence of jobs in a program without manual
intervention.
 Time-sharing Operating System – allows many users to share the computer's resources
(maximum utilization of the resources).
 Distributed Operating System – manages a group of different computers and makes them
appear to be a single computer.
 Network Operating System – lets computers running different operating systems
participate in a common network (also used for security purposes).
 Real-time Operating System – meant for applications that must meet fixed deadlines.
Examples of Operating System are – 

 Windows (GUI based, PC)


 GNU/Linux (Personal, Workstations, ISP, File and print server, Three-tier client/Server)
 macOS (Macintosh), used for Apple’s personal computers and workstations (MacBook,
iMac).
 Android (Google’s Operating System for smartphones/tablets/smartwatches)

 iOS (Apple’s OS for iPhone, iPad, and iPod Touch)

II. Types of OS:


1. Mainframe System:

In the simplest terms, an operating system is a collection of programs that manage a computer
system's internal workings: its memory, processors, devices, and file system. Mainframe
operating systems are sophisticated products with substantially different characteristics and
purposes.

i. A mainframe operating system is the networking software infrastructure that allows a
mainframe computer to run programs, connect linked machines, and process complex
numerical and data-driven tasks.
ii. A mainframe system runs on a mainframe computer, which is usually thought of as
the server for a computer network.
iii. A mainframe computer is a large integrated machine with a great deal of memory and
storage capacity and many high-end processors. For such large-scale functioning it has
far more computational power than normal computer systems.
iv. Businesses today rely on the mainframe to: perform large-scale transaction
processing (thousands of transactions per second); support thousands of users and
application programs concurrently accessing numerous resources; and manage terabytes
of information in databases.

Features:

 Storage: These systems have a large storage capacity, which lets the system process huge
amounts of data as and when needed.
 Centralized server
 RAS (reliability, availability, serviceability)
 Scalability
 Security
 Compatibility
 Throughput computing
 Transaction processing

2. Desktop System:
i. The desktop OS is the environment where the user controls a personal
computer (Desktop, Notebook PC).
ii. It aids in the management of computer hardware and software resources.
iii. It supports fundamental features such as task scheduling, peripheral control, printing,
input/output, and memory allocation.
iv. The three most common operating systems for personal computers are Microsoft
Windows, macOS, and Linux.

Features:

 Protected and supervisor mode.
 Disk access and file systems.
 Device drivers.
 Networking.
 Security.
 Program execution.
 Memory management, virtual memory, multitasking.
 Handling I/O operations.
 Manipulation of the file system.
 Error detection and handling.
 Resource allocation.

3. Multiprocessor operating system:


A shared-memory multiprocessor (or just multiprocessor henceforth) is a computer system in
which two or more CPUs share full access to a common RAM.
In a multiprocessor system there is more than one processor, and they work in parallel to
perform the required operations. The multiple processors are connected to shared physical
memory, computer buses, clocks, and peripheral devices. The main objective of using a
multiprocessor operating system is to increase the execution speed of the system and deliver
high computing power.

Advantages
The advantages of multiprocessor systems are as follows −

 If there are multiple processors working at the same time, more processes can be executed
in parallel. Therefore the throughput of the system will increase.
 Multiprocessor systems are more reliable. Because there is more than one processor, the
failure of any one processor will not bring the system to a halt. The system will become
slower if this happens, but it will still work.
 Electricity consumption of a multiprocessor system is less than that of a single-processor
system. In a single-processor system, all processes must be executed by the one processor,
so there is a heavy load on it. With multiple processors, the load on each processor is
comparatively less, so less electricity is consumed.
Fields
The different fields of multiprocessor operating systems used are as follows −
 Asymmetric Multiprocessor − Every processor is assigned predefined tasks in this
operating system, and a master processor has the power to run the entire system. It thus
uses a master-slave relationship.
 Symmetric Multiprocessor − In this system, every processor runs an identical copy of the
OS, and they can communicate with one another. All processors are connected as peers,
meaning there is no master-slave relationship.
 Shared-memory Multiprocessor − As the name indicates, every central processing unit
has access to a common shared memory.
 Uniform Memory Access (UMA) Multiprocessor − This system allows all processors to
access all memory at a consistent speed.
 Distributed-memory Multiprocessor − A computer system consisting of a number of
processors, each with its own local memory, connected through a network; that is, every
processor has its own private memory.
 NUMA Multiprocessor − NUMA stands for Non-Uniform Memory Access. Some areas
of memory can be accessed at a faster rate than others, depending on where the memory
sits relative to the processor.
The best operating system in a multiprocessor and parallel computing environment is UNIX,
because it has many advantages:

 It is multi-user.
 It is portable.
 It is good for multitasking.
 It has an organized file system.
 It has device independence.
 Utilities are brief, and operation commands can be combined on a single line.
 Unix provides various services, as it has built-in administrative tools.
 UNIX can share files over electronic networks with many different kinds of equipment.

4. Distributed operating system:


A distributed operating system (DOS) is an essential type of operating system. Distributed
systems use many central processors to serve multiple real-time applications and users. As a
result, data processing jobs are distributed among the processors.
It connects multiple computers via a single communication channel. Each of these systems
has its own processor and memory, and the CPUs communicate via high-speed buses or
telephone lines. Individual systems that communicate via a single channel are regarded as a
single entity. They are also known as loosely coupled systems.
This operating system consists of numerous computers, nodes, and sites joined together
via LAN/WAN lines. It enables the distribution of a full system across a number of central
processors, and it supports many real-time applications and different users. Distributed
operating systems can share their computing resources and I/O files while providing users
with a virtual machine abstraction.

Types of Distributed Operating System


There are various types of Distributed Operating systems. Some of them are as follows:
1. Client-Server Systems
2. Peer-to-Peer Systems
3. Middleware
4. Three-tier
5. N-tier
Client-Server System
This type of system requires the client to request a resource, after which the server gives the
requested resource. When a client connects to a server, the server may serve multiple clients at
the same time.
Client-Server Systems are also referred to as "Tightly Coupled Operating Systems". This system
is primarily intended for multiprocessors and homogeneous multicomputers. Client-Server
Systems function as centralized servers since they approve all requests issued by client systems.
Server systems can be divided into two parts:
1. Compute Server System
This system provides an interface to which the client sends its requests to be executed as
actions. After completing the activity, the server sends back a response and transfers the result
to the client.
2. File Server System
It provides a file system interface for clients, allowing them to execute actions like file creation,
updating, deletion, and more.
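The request/response pattern these server systems share can be sketched in miniature with Python's standard socketserver module. The "served:" echo protocol below is an invented stand-in for a real compute or file service:

```python
# Minimal client-server sketch: a threaded TCP server answers each
# client request, so one server can serve multiple clients at once.
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)            # read the client's request
        self.request.sendall(b"served: " + data)  # send back a response

# Port 0 lets the OS pick a free port; threading serves clients concurrently.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
with socket.create_connection((host, port)) as sock:
    sock.sendall(b"read file A")        # the client requests a resource
    reply = sock.recv(1024)             # the server gives the resource
print(reply.decode())                   # served: read file A

server.shutdown()
server.server_close()
```

The server never cares which client is asking; every request funnels through the same centralized handler, which is the "tightly coupled" shape described above.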
Peer-to-Peer System
The nodes play an important role in this system. The task is evenly distributed among the nodes.
Additionally, these nodes can share data and resources as needed. Once again, they require a
network to connect.
The Peer-to-Peer System is known as a "Loosely Coupled System". This concept is used in
computer network applications, since they contain a large number of processors that do not
share memory or clocks. Each processor has its own local memory, and they interact with one
another via a variety of communication media such as telephone lines or high-speed buses.
Middleware
Middleware enables the interoperability of all applications running on different operating
systems. Using these services, those programs are capable of transferring data to one another.
Three-tier
The information about the client is saved in the intermediate tier rather than in the client, which
simplifies development. This type of architecture is most commonly used in online applications.
N-tier
When a server or application has to transmit requests to other enterprise services on the network,
n-tier systems are used.
Features of Distributed Operating System
There are various features of the distributed operating system. Some of them are as follows:
Openness
It means that the system's services are openly exposed through interfaces. Furthermore, these
interfaces give only the syntax of the service: for example, the type of a function, its return
type, its parameters, and so on. Interface Definition Languages (IDLs) are used to create these
interfaces.
Scalability
It refers to the fact that the system's efficiency should not degrade as new nodes are added to
the system: the performance of a system with 100 nodes should be about the same as that of a
system with 1000 nodes.
Resource Sharing
Its most essential feature is that it allows users to share resources. They can also share resources
in a secure and controlled manner. Printers, files, data, storage, web pages, etc., are examples of
shared resources.
Flexibility
A DOS's flexibility is enhanced by its modular design, which delivers a more advanced range
of high-level services. The quality and completeness of the kernel/microkernel simplify the
implementation of such services.
Transparency
It is the most important feature of the distributed operating system. The primary purpose of a
distributed operating system is to hide the fact that resources are shared. Transparency also
implies that the user should be unaware that the resources he is accessing are shared.
Furthermore, the system should be a separate independent unit for the user.
Heterogeneity
The components of distributed systems may differ and vary in operating systems, networks,
programming languages, computer hardware, and implementations by different developers.
Fault Tolerance
Fault tolerance is that process in which user may continue their work if the software or hardware
fails.
Examples of Distributed Operating System
There are various examples of the distributed operating system. Some of them are as follows:
Solaris
It is designed for the SUN multiprocessor workstations
OSF/1
It's compatible with Unix and was designed by the Open Foundation Software Company.
Micros
The MICROS operating system ensures a balanced data load while allocating jobs to all nodes in
the system.
DYNIX
It was developed for Sequent's Symmetry multiprocessor computers.
Applications of Distributed Operating System
There are various applications of the distributed operating system. Some of them are as follows:
Network Applications
DOS is used by many network applications, including the Web, peer-to-peer networks,
multiplayer web-based games, and virtual communities.
Telecommunication Networks
DOS is useful in phones and cellular networks. A DOS can be found in networks like the
Internet, wireless sensor networks, and routing algorithms.
Parallel Computation
DOS is the basis of systematic computing, which includes cluster computing and grid
computing, and a variety of volunteer computing projects.
Real-Time Process Control
The real-time process control system operates with a deadline, and such examples include
aircraft control systems.
Advantages
1. It can share all resources (CPU, disk, network interface, nodes, computers, and so on)
from one site to another, increasing data availability across the entire system.
2. It reduces the probability of data loss, because data is replicated across sites; if one
site fails, the user can access data from another operational site.
3. The sites operate independently of one another, so if one site crashes, the entire
system does not halt.
4. It increases the speed of data exchange from one site to another.
5. It is an open system, since it may be accessed from both local and remote locations.
6. It helps in the reduction of data processing time.
7. Most distributed systems are made up of several nodes that interact to make them
fault-tolerant. If a single machine fails, the system remains operational.
Disadvantages
There are various disadvantages of the distributed operating system. Some of them are as
follows:
1. The system must decide which jobs are to be executed, when they are to be executed,
and where. A scheduler has limitations, which can lead to underutilized hardware and
unpredictable runtimes.
2. It is hard to implement adequate security in a DOS, since both the nodes and the
connections between them must be secured.
3. The database connected to a DOS is relatively complicated and hard to manage
compared to a single-user system.
4. The underlying software is extremely complex and is not well understood compared
to other systems.
5. The more widely distributed a system is, the more communication latency can be
expected. As a result, teams and developers must trade off availability, consistency,
and latency.
6. These systems aren't widely available, because they're thought to be too expensive.
5. Clustered operating system:
Cluster systems are similar to parallel systems because both systems use multiple CPUs. The
primary difference is that clustered systems are made up of two or more independent systems
linked together. They have independent computer systems and a shared storage media, and all
systems work together to complete all tasks. All cluster nodes use two different approaches to
interact with one another, like message passing interface (MPI) and parallel virtual machine
(PVM).
Cluster operating systems are a combination of software and hardware clusters. Hardware
clusters aid in the sharing of high-performance disks among all computer systems, while
software clusters give a better environment for all systems to operate. A cluster system
consists of various nodes, each of which runs its own cluster software. The cluster software is
installed on each node in the clustered system; it monitors the cluster and ensures that it is
operating properly. If one of the clustered system's nodes fails, the other nodes take over its
storage and resources and try to restart its services.

Cluster components are generally linked via fast local area networks, with each node executing
its own instance of an operating system. In most cases, all nodes share the same hardware and
operating system, although different hardware or different operating systems can be used. The
primary purpose of using a cluster system is to assist with weather forecasting, scientific
computing, and supercomputing.

There are two kinds of clusters used to make a cluster more efficient. These are as follows:

1. Software Cluster
2. Hardware Cluster

Software Cluster

The software cluster allows all the systems to work together.

Hardware Cluster

It allows high-performance disk sharing among the systems.

Types of Clustered Operating System

There are mainly three types of the clustered operating system:

1. Asymmetric Clustering System


2. Symmetric Clustering System
3. Parallel Cluster System

Asymmetric Clustering System

In the asymmetric cluster system, one node out of all the nodes is in hot-standby mode, while
the remaining nodes run the essential applications. Hot-standby mode is completely fail-safe
and is also a component of the cluster system. The hot-standby node continuously monitors all
server functions and takes over if the active server comes to a halt.

Symmetric Clustering System

Multiple nodes help run all applications in this system, and it monitors all nodes simultaneously.
Because it uses all hardware resources, this cluster system is more reliable than asymmetric
cluster systems.

Parallel Cluster System

A parallel cluster system enables several users to access the same data on a shared storage
system. This is made possible by special-purpose software versions and other applications.

Classification of clusters

Computer clusters are managed to support various purposes, from general-purpose business
requirements like web-service support to computation-intensive scientific calculations. There are
various classifications of clusters. Some of them are as follows:

1. Fail Over Clusters

The process of moving applications and data resources from a failed system to another system
in the cluster is referred to as fail-over. Failover clusters are used for mission-critical
databases, application servers, mail servers, and file servers.

2. Load Balancing Cluster

This cluster balances the load among all available computer systems. All nodes in this type of
cluster can share their computing workload with other nodes, resulting in better overall
performance. For example, a web-based cluster can assign different web queries to different
nodes, which helps to improve the system's speed. Some cluster systems use the round-robin
method to distribute incoming requests.
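The round-robin method just mentioned can be sketched in a few lines; the node names below are hypothetical:

```python
# Round-robin load balancing sketch: incoming requests are handed to
# cluster nodes in rotation, so the load spreads evenly.
from itertools import cycle

nodes = ["node-a", "node-b", "node-c"]   # hypothetical cluster nodes
dispatcher = cycle(nodes)                # endless rotation over the nodes

def dispatch(request):
    """Assign a request to the next node in the rotation."""
    return (request, next(dispatcher))

assignments = [dispatch(f"query-{i}") for i in range(5)]
for req, node in assignments:
    print(req, "->", node)
# query-0 -> node-a, query-1 -> node-b, query-2 -> node-c,
# query-3 -> node-a (the rotation wraps around), query-4 -> node-b
```

Round-robin ignores how busy each node actually is; real load balancers often weight the rotation by current load instead.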

3. High Availability Clusters


These are also referred to as "HA clusters". They provide a high probability that all resources
will remain available. If a failure occurs, such as a system failure or the loss of a disk volume,
the queries in progress are lost. If a lost query is retried, it will be handled by a different
cluster computer. HA clusters are widely used for news, email, FTP servers, and the web.

Advantages

Various advantages of Clustered Operating System are as follows:

1. High Availability

Although every node in a cluster is a standalone computer, the failure of a single node doesn't
mean a loss of service. A single node can be pulled down for maintenance while the remaining
nodes take on its load.

2. Cost Efficiency

When compared to highly reliable, large-storage mainframe computers, cluster computing
systems are thought to be more cost-effective. Furthermore, many of these systems outperform
mainframe computer systems in terms of performance.

3. Additional Scalability

A cluster is set up in such a way that more systems could be added to it in minor increments.
Clusters may add systems in a horizontal fashion. It means that additional systems could be
added to clusters to improve their performance, fault tolerance, and redundancy.

4. Fault Tolerance

Clustered systems are quite fault-tolerant, and the loss of a single node does not result in the
system's failure. They may also have one or more nodes in hot-standby mode, which allows
them to replace failed nodes.
5. Performance

Clusters are commonly used to improve availability and performance over single computer
systems, while usually being much more cost-effective than a single computer of comparable
speed or availability.

6. Processing Speed

The processing speed is also similar to mainframe systems and other types of supercomputers on
the market.

Disadvantages

Various disadvantages of the Clustered Operating System are as follows:

1. High Cost

One major disadvantage of this design is that it is not cost-effective. The cost is high, and a
cluster will be more expensive than a non-clustered server management design, since it
requires good hardware and a good design.

2. Required Resources

Clustering necessitates the use of additional servers and hardware, making monitoring and
maintenance difficult. As a result, infrastructure must be improved.

3. Maintenance

It is not easy to establish, monitor, and maintain this system.

6. Real time operating system:


Real-time operating systems (RTOS) are used in environments where a large number of
events, mostly external to the computer system, must be accepted and processed in a short time
or within certain deadlines. Such applications include industrial control, telephone switching
equipment, flight control, and real-time simulations. With an RTOS, processing time is
measured in tenths of seconds or less. This system is time-bound and has fixed deadlines. The
processing in this type of system must occur within the specified constraints; otherwise, the
system will fail.

Examples of real-time operating systems: air traffic control systems, command control
systems, airline reservation systems, heart pacemakers, network multimedia systems,
robots, etc.

The real-time operating systems can be of 3 types –

1. Hard Real-Time operating system:
These operating systems guarantee that critical tasks are completed within a fixed time. For
example, consider a robot hired to weld a car body: if the robot welds too early or too late, the
car cannot be sold. This is a hard real-time system, which requires the robot to complete the
weld exactly on time.
2. Soft real-time operating system:
This operating system provides some relaxation of the time limit. 
For example: multimedia systems, digital audio systems, etc. Explicit, programmer-defined
and controlled processes are encountered in real-time systems. A separate process is charged
with handling a single external event. The process is activated upon the occurrence of the
related event, signalled by an interrupt. 
Multitasking operation is accomplished by scheduling processes for execution independently
of each other. Each process is assigned a certain level of priority that corresponds to the
relative importance of the event that it services. The processor is allocated to the highest-
priority process. This type of schedule, called priority-based pre-emptive scheduling, is used
by real-time systems.
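The priority-based pre-emptive scheduling described above can be sketched with a priority queue. The workload below (arrival times, priorities, burst lengths, task names) is invented for illustration; lower number means higher priority:

```python
# Priority-based pre-emptive scheduling sketch: at every time unit the
# ready process with the highest priority (lowest number) gets the CPU,
# so a newly arrived urgent process pre-empts a running one.
import heapq

# (arrival_time, priority, name, burst) -- invented example workload
tasks = [(0, 3, "logger", 4), (2, 1, "sensor-interrupt", 2)]

def run(tasks):
    timeline, ready, time = [], [], 0
    pending = sorted(tasks)                  # jobs not yet arrived, by arrival
    remaining = {name: burst for _, _, name, burst in tasks}
    while pending or ready:
        while pending and pending[0][0] <= time:
            arr, prio, name, _ = pending.pop(0)
            heapq.heappush(ready, (prio, name))  # ready queue keyed by priority
        if not ready:
            time = pending[0][0]             # idle until the next arrival
            continue
        prio, name = ready[0]                # highest-priority ready process
        timeline.append(name)                # it runs for one time unit
        remaining[name] -= 1
        if remaining[name] == 0:
            heapq.heappop(ready)             # finished: leave the ready queue
        time += 1
    return timeline

print(run(tasks))
# ['logger', 'logger', 'sensor-interrupt', 'sensor-interrupt', 'logger', 'logger']
```

Note how the logger is pre-empted at time 2 the moment the higher-priority event arrives, and resumes only after it completes.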

3. Firm Real-time Operating System:
An RTOS of this type must also meet deadlines. Missing a deadline may have only a small
impact, but it can have unintended consequences, including a reduction in the quality of the
product. Example: multimedia applications.
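The difference between the three types comes down to how a missed deadline is treated. A toy sketch, with invented finish times and deadlines:

```python
# Sketch of deadline handling: the same missed deadline is fatal in a
# hard RTOS, worthless (but not fatal) in a firm RTOS, and merely
# degrades quality in a soft RTOS. Times below are invented.

def check_deadline(kind, finish_time, deadline):
    if finish_time <= deadline:
        return "ok"
    if kind == "hard":
        return "system failure"        # e.g. the weld lands off the seam
    if kind == "firm":
        return "result discarded"      # a late result has zero value
    return "degraded service"          # soft: a late result is still of some use

print(check_deadline("hard", finish_time=12, deadline=10))  # system failure
print(check_deadline("firm", finish_time=12, deadline=10))  # result discarded
print(check_deadline("soft", finish_time=12, deadline=10))  # degraded service
print(check_deadline("hard", finish_time=9, deadline=10))   # ok
```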
Advantages: 

The advantages of real-time operating systems are as follows- 


Maximum utilization – Maximum utilization of devices and the system, and thus more output
from all the resources.
Task Shifting – The time assigned for shifting between tasks in these systems is very short.
For example, older systems take about 10 microseconds to shift from one task to another,
while the latest systems take about 3 microseconds.
Focus On Application – The focus is on running applications, with less importance given to
applications waiting in the queue.
Real-Time Operating Systems In Embedded Systems – Since the size of programs is small,
an RTOS can also be used in embedded systems, such as in transport and other applications. 
Error Free – These types of systems are error-free.
Memory Allocation – Memory allocation is best managed in these types of systems.
Disadvantages: 
The disadvantages of real-time operating systems are as follows- 
Limited Tasks – Very few tasks run simultaneously, and the system concentrates on only a
few applications in order to avoid errors.
Heavy Use Of System Resources – The system resources required are substantial and
expensive as well.
Complex Algorithms – The algorithms are very complex and difficult for the designer to
write. 
Device Drivers And Interrupt Signals – An RTOS needs specific device drivers and
interrupt signals so that it can respond to interrupts as quickly as possible.
Thread Priority – It is not good to set thread priorities, as these systems are very little prone
to switching tasks.
Minimum Switching – An RTOS performs minimal task switching.
III. Hardware Protection
We know that a computer system contains hardware such as the processor, monitor, RAM,
and many more devices, and one thing the operating system ensures is that these devices
cannot be directly accessed by the user. 

Basically, hardware protection is divided into 3 categories: CPU protection, memory
protection, and I/O protection. These are explained below. 
1. CPU Protection: 
CPU protection means we cannot give the CPU to a process forever; it should hold it only for
a limited time, otherwise other processes will not get the chance to execute. A timer is used to
handle this situation: it gives a process a certain amount of time, and when the timer expires a
signal is sent to the process to leave the CPU. Hence a process cannot hold the CPU
indefinitely. 
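The timer mechanism can be sketched as a countdown that forces the running process to yield when its slice expires. The slice length and the workloads are arbitrary illustrative values:

```python
# CPU protection sketch: a hardware timer interrupt limits how long one
# process can hold the CPU. Here a software countdown stands in for the
# timer; TIME_SLICE is an arbitrary illustrative value.
from collections import deque

TIME_SLICE = 3                         # ticks a process may run before preemption

def run_with_timer(processes):
    """processes: dict of name -> remaining ticks of work."""
    ready = deque(processes)           # simple round-robin ready queue
    log = []
    while ready:
        name = ready.popleft()
        ran = min(TIME_SLICE, processes[name])
        processes[name] -= ran         # the process runs until the timer fires
        log.append((name, ran))
        if processes[name] > 0:        # timer signal: give up the CPU
            ready.append(name)         # back of the queue, not forever
    return log

print(run_with_timer({"P1": 5, "P2": 2}))
# [('P1', 3), ('P2', 2), ('P1', 2)]
```

P1 needs 5 ticks but is forced off the CPU after 3, so P2 gets its turn before P1 finishes.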
2. Memory Protection:
Memory protection addresses the situation where two or more processes are in memory and
one process may access another process's memory. To prevent this situation, we use two
registers: 
1. Base register
2. Limit register
The base register stores the starting address of the program, and the limit register stores the
size of the process. When a process wants to access memory, the hardware checks whether it
may access that address. 
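The base/limit check can be written out directly; on real hardware this comparison happens in the MMU on every access, and a violation traps to the OS. The base address and process size below are illustrative:

```python
# Memory protection sketch: every address a process issues is checked
# against its base and limit registers before the access is allowed.

def check_access(address, base, limit):
    """Allow the access only if base <= address < base + limit."""
    if base <= address < base + limit:
        return "access granted"
    return "trap: addressing error"    # the OS would then stop the process

# A process loaded at address 3000 with a size of 500 bytes (illustrative).
print(check_access(3100, base=3000, limit=500))  # access granted
print(check_access(3600, base=3000, limit=500))  # trap: addressing error
print(check_access(2999, base=3000, limit=500))  # trap: addressing error
```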

3. I/O Protection:
When we ensure I/O protection, the following cases can never occur in the system: 
1. Terminating the I/O of another process
2. Viewing the I/O of another process
3. Giving priority to a particular process's I/O
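One way to picture the first of these guarantees is as an ownership check the kernel performs before honouring an I/O request. The PIDs, device names, and the sys_cancel_io call are all invented for this sketch:

```python
# I/O protection sketch: user processes cannot touch I/O directly; they
# issue system calls, and the kernel refuses operations on I/O that
# belongs to another process. PIDs and request names are invented.

io_owner = {"disk-request-7": 101}     # which PID owns each pending I/O

def sys_cancel_io(calling_pid, io_id):
    """Hypothetical system call: cancel an I/O request, only if the caller owns it."""
    if io_owner.get(io_id) != calling_pid:
        return "denied: not owner"     # protects other processes' I/O
    del io_owner[io_id]
    return "cancelled"

print(sys_cancel_io(202, "disk-request-7"))  # denied: not owner
print(sys_cancel_io(101, "disk-request-7"))  # cancelled
```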

IV. Components of Operating System

An operating system is a large and complex system that can only be created by partitioning it
into small parts. These pieces should each be a well-defined part of the system, with carefully
defined inputs, outputs, and functions.

Although Windows, Mac, UNIX, Linux, and other OSs do not have the same structure, most
operating systems share similar components, such as file, memory, process, and I/O device
management.
The components of an operating system play a key role to make a variety of computer system
parts work together. There are the following components of an operating system, such as:

1. Process Management
2. File Management
3. Network Management
4. Main Memory Management
5. Secondary Storage Management
6. I/O Device Management
7. Security Management
8. Command Interpreter System

Operating system components also help you obtain correct computation by detecting CPU and
memory hardware errors.

Process Management

The process management component is a procedure for managing the many processes running
simultaneously on the operating system. Every running software application has one or more
processes associated with it.

For example, when you use a browser like Chrome, there is a process running for that browser
program.

Process management keeps processes running efficiently. It also manages the memory
allocated to them and shuts them down when needed.

The execution of a process must be sequential, so at least one instruction is executed at a time
on behalf of the process.

Functions of process management

Here are the following functions of process management in the operating system, such as:
o Process creation and deletion.

o Process suspension and resumption.

o Process synchronization.

o Process communication.
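Process creation, communication, and termination can be observed from user space with the standard subprocess module; the one-line child program here is a trivial stand-in for a real application:

```python
# Process management sketch from user space: create a child process,
# communicate with it over a pipe, wait for it, and observe termination.
import subprocess
import sys

child = subprocess.Popen(               # process creation
    [sys.executable, "-c", "print('child done')"],
    stdout=subprocess.PIPE, text=True)

output, _ = child.communicate()         # inter-process communication + wait
print(output.strip())                   # child done
print(child.returncode)                 # 0 -> normal termination
```

Under the hood the OS performs the corresponding kernel operations (fork/exec-style creation, pipe setup, and exit-status collection) that this management component is responsible for.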

File Management

A file is a set of related information defined by its creator. It commonly represents programs
(both source and object forms) and data. Data files can be alphabetic, numeric, or alphanumeric.

Functions:

The operating system has the following important activities in connection with file management:

o File and directory creation and deletion.

o Manipulating files and directories.

o Mapping files onto secondary storage.

o Backing up files on stable storage media.
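The activities above map directly onto system calls, which can be exercised from Python. The file name and contents are invented, and everything happens inside a temporary directory:

```python
# File management sketch: create a directory and a file, manipulate the
# file, then delete both -- the activities listed above, via the OS.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())         # directory creation
report = root / "report.txt"
report.write_text("quarterly data")     # file creation + write
text = report.read_text()               # file manipulation (read)
print(text)                             # quarterly data
report.unlink()                         # file deletion
root.rmdir()                            # directory deletion
print(root.exists())                    # False
```

Each call here ends up as a request to the operating system's file management component, which performs the actual on-disk work.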

Network Management

Network management is the process of administering and managing computer networks. It
includes performance management, provisioning of networks, fault analysis, and maintaining
the quality of service.

A distributed system is a collection of computers or processors that never share their memory
and clock. In this type of system, all the processors have their local memory, and the processors
communicate with each other using different communication cables, such as fibre optics or
telephone lines.

The computers in the network are connected through a communication network, which can be
configured in many different ways. The network can be fully or partially connected; network
management helps users design routing and connection strategies that overcome connection and
security issues.

Functions of Network management

Network management provides the following functions, such as:

o Distributed systems give users access to computing resources that vary in size and function.
They may involve minicomputers, microprocessors, and many general-purpose computer
systems.
o A distributed system also offers the user access to the various resources the network
shares.
o It helps to access shared resources that help computation to speed up or offers data
availability and reliability.

Main Memory management

Main memory is a large array of words or bytes, each with its own address. Memory
management is conducted through a sequence of reads and writes to specific memory
addresses.

A program must be mapped to absolute addresses and loaded into memory in order to execute.
The selection of a memory management method depends on several factors.

However, it is mainly based on the hardware design of the system. Each algorithm requires
corresponding hardware support. Main memory offers fast storage that can be accessed directly
by the CPU. It is costly and hence has a lower storage capacity. However, for a program to be
executed, it must be in the main memory.

Functions of Memory management

An Operating System performs the following functions for Memory Management in the
operating system:
o It helps you to keep track of primary memory.

o Determine which parts of memory are in use, and by whom, and which parts are free.

o In a multiprogramming system, the OS decides which process will get memory and how
much.
o Allocates memory when a process requests it.

o It also de-allocates memory when a process no longer needs it or has
terminated.

Secondary-Storage Management

The most important task of a computer system is to execute programs. These programs access
data in main memory during execution. However, main memory is too small to store all data
and programs permanently, so the computer system provides secondary storage to back up main
memory.

Functions of Secondary storage management

Here are some major functions of secondary storage management in the operating system:

o Storage allocation

o Free space management

o Disk scheduling

I/O Device Management

One important role of an operating system is to hide the variations of specific
hardware devices from the user.
Functions of I/O management

The I/O management system offers the following functions, such as:

o It offers a buffer caching system

o It provides general device driver code

o It provides drivers for particular hardware devices.

o It deals with the individual peculiarities of each specific device.

Security Management

The various processes in an operating system need to be protected from one another's activities.
Therefore, various mechanisms ensure that processes wanting to operate on files, memory, the
CPU, and other hardware resources have proper authorization from the operating system.

Security refers to a mechanism for controlling the access of programs, processes, or users to the
resources defined by computer controls to be imposed, together with some means of
enforcement.

For example, memory-addressing hardware helps confirm that a process can execute only
within its own address space. The timer ensures that no process retains control of the CPU
without eventually relinquishing it. Lastly, no process is allowed to do its own I/O directly,
which helps preserve the integrity of the various peripheral devices.

Security can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can prevent the contamination of a healthy
subsystem by a malfunctioning one. An unprotected resource cannot defend against use (or
misuse) by an unauthorized or incompetent user.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.

Many commands are given to the operating system by control statements. A program that reads
and interprets control statements is executed automatically when a new job is started in a batch
system, or when a user logs in to a time-shared system. This program is variously called:

o The control card interpreter,

o The command-line interpreter,

o The shell (in UNIX), and so on.

Its function is quite simple: get the next command statement and execute it. The command
statements deal with process management, I/O handling, secondary-storage management, main-
memory management, file-system access, protection, and networking.
V. Handheld Systems:
A handheld, handheld PC or handheld computer is a computer device that can be held in the
palm of one's hand. These computers are commonly used to store appointments, phone numbers,
tasks, and other data commonly needed while away from home or office.

Examples include Personal Digital Assistants (PDAs) and cellular telephones. Typical issues:
limited memory, slow processors, and small display screens.

VI. Operating System services:

An Operating System provides services to both the users and to the programs.

 It provides programs an environment to execute.


 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Program execution

Operating systems handle many kinds of activities, from user programs to system programs such
as the printer spooler, name servers, and file servers. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management −

 Loads a program into memory.


 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide
the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O device.
 The operating system provides access to the required I/O device when needed.

File system manipulation

A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include magnetic
tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own
properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management −

 A program needs to read a file or write a file.


 The operating system gives the program permission to operate on the file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

Communication

In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in the
network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −

 Two processes often require data to be transferred between them


 Both the processes can be on one computer or on different computers, but are connected
through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.

Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management

In case of multi-user or multi-tasking environment, resources such as main memory, CPU cycles
and files storage are to be allocated to each user or job. Following are the major activities of an
operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.


 CPU scheduling algorithms are used for better utilization of CPU.

Protection

Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users
to the resources defined by a computer system. Following are the major activities of an operating
system with respect to protection −

 The OS ensures that all access to system resources is controlled.


 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.

VII. System calls:

System programming can be defined as the act of building systems software using system
programming languages. In the computer hierarchy, hardware comes last, preceded by the
operating system, system programs, and finally application programs. Program development and
execution can be done conveniently in system programs. Some system programs are simply user
interfaces; others are complex. They traditionally lie between the user interface and the system
calls.

So the user can see only up to the system programs; the system calls beneath them are hidden.
System Programs can be divided into these categories: 

1. File Management – A file is a collection of specific information stored in the memory of a
computer system. File management is the process of manipulating files in a computer
system; it includes creating, modifying and deleting files. 
 It helps to create new files in the computer system and place them at specific
locations. 
 It helps in easily and quickly locating these files in the computer system. 
 It makes the process of sharing files among different users very easy and user-friendly. 
 It helps to store files in separate folders known as directories. 
 These directories help users to search files quickly or to manage files according to their
types of uses. 
 It helps users to modify the data of files or to modify the name of files in directories. 
 
2. Status Information – Some users ask for information such as the date, time, amount of
available memory, or disk space. Others require detailed performance, logging, and
debugging information, which is more complex. All this information is formatted and
displayed on output devices or printed. A terminal, another output device, a file, or a
window of the GUI is used to show the output of programs. 
 
3. File Modification – These programs are used to modify the contents of files. For files
stored on disks or other storage devices, we use different types of editors. Special
commands are used to search the contents of files or to perform transformations on them. 

4. Programming-Language Support – Compilers, assemblers, debuggers, and interpreters
for common programming languages are already provided to users, giving them full
support for running any programming language; all languages of importance are
provided. 
5. Program Loading and Execution – When a program is ready after assembly and
compilation, it must be loaded into memory for execution. A loader is the part of an
operating system that is responsible for loading programs and libraries; it is one of the
essential stages in starting a program. Loaders, relocatable loaders, linkage editors, and
overlay loaders are provided by the system. 
6. Communications – 
These programs provide virtual connections among processes, users, and computer
systems. Users can send messages to another user's screen, send e-mail, browse web
pages, log in remotely, and transfer files from one user to another. 
Some examples of system programs in an O.S. are – 

 Windows 10 
 Mac OS X 
 Ubuntu 
 Linux 
 Unix 
 Android 
 Anti-virus 
 Disk formatting 
 Computer language translators  
VIII. Process concepts:

A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data. The following image shows a simplified layout of a process
inside main memory −

S.N. Component & Description

1 Stack

The process Stack contains the temporary data such as method/function parameters, return address
and local variables.

2 Heap

This is dynamically allocated memory to a process during its run time.

3 Text

This includes the compiled program code; the current activity is represented by the value of the
Program Counter and the contents of the processor's registers.

4 Data

This section contains the global and static variables.


Program

A program is a piece of code which may be a single line or millions of lines. A computer
program is usually written by a computer programmer in a programming language. For example,
here is a simple program written in C programming language −

#include <stdio.h>

int main() {
printf("Hello, World! \n");
return 0;
}

A computer program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can conclude that a process is a
dynamic instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries and related data is referred to as software.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description

1 Start

This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come into
this state after the Start state, or while running if it is interrupted by the scheduler so the CPU can
be assigned to some other process.

3 Running

Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.

4 Waiting

Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input,
or waiting for a file to become available.

5 Terminated or Exit

Once the process finishes its execution, or it is terminated by the operating system, it is moved to the
terminated state where it waits to be removed from main memory.

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process as listed below in the table −

S.N. Information & Description

1 Process State

The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2 Process privileges

This is required to allow/disallow access to system resources.

3 Process ID

Unique identification for each of the process in the operating system.

4 Pointer

A pointer to parent process.

5 Program Counter

Program Counter is a pointer to the address of the next instruction to be executed for this process.

6 CPU registers

The various CPU registers whose contents must be saved when the process leaves the running state,
so that it can resume execution later.

7 CPU Scheduling Information

Process priority and other scheduling information which is required to schedule the process.
8 Memory management information

This includes the information of page table, memory limits, Segment table depending on memory
used by the operating system.

9 Accounting information

This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

10 IO status information

This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
UNIT – 2
I. The process concept in Operating System

A process is defined as an entity which represents the basic unit of work to be implemented in
the system. To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the
program.

Process Scheduling:

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are admitted
to the system for processing. It selects processes from the job queue and loads them into memory
for execution and CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler comes into play when
a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It manages the transition of a process from the ready
state to the running state: the CPU scheduler selects one process from among those that are ready
to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute
next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.
Types:

Capacity schedule: The Capacity Scheduler allows multiple tenants to share a large
cluster. It also provides a level of abstraction for knowing which tenant is
utilizing more cluster resources or slots, so that a single user or application does not take
a disproportionate or unnecessary number of slots in the cluster. The Capacity Scheduler mainly
contains three types of queues, root, parent, and leaf, which represent the cluster, an
organization or subgroup, and application submission respectively.  

Resource schedule: A resource schedule is organized around individual people or objects.
Resources can be consultants, teachers, boats, meeting rooms, or other items that can only be
used for single bookings.

Service schedule: The service schedule’s main benefit is that it can take into account the
availability of resources in other schedules. If your service depends on a number of resources
being available, the service schedule checks when all the required resources (objects and/or
people) are available so that the user can make a booking. 

Types of CPU Scheduling

There are essentially four conditions under which CPU scheduling decisions are taken:

1. If a process switches from the running state to the waiting state (for example, on an I/O
request, or an invocation of wait() to wait for the termination of a child process)
2. If a process switches from the running state to the ready state (on the occurrence
of an interrupt, for example)
3. If a process switches from the waiting state to the ready state (e.g. when
its I/O request completes)
4. If a process terminates upon completion of execution.

So in the case of conditions 1 and 4, the CPU does not really have a choice: if a process exists
in the ready queue, it must be selected for execution. In cases 2 and 3, the CPU has a choice of
which process to select for execution next.

There are mainly two types of CPU scheduling:

Non-Preemptive Scheduling

In the case of non-preemptive scheduling, new processes are executed only after the current
process has completed its execution. The process holds the resources of the CPU (CPU time) till
its state changes to terminated or is pushed to the process waiting state. If a process is currently
being executed by the CPU, it is not interrupted till it is completed. Once the process has
completed its execution, the processor picks the next process from the ready queue (the queue in
which all processes that are ready for execution are stored).

For example: in the image above, we can see that all the processes were executed in the order in
which they appeared, and none of the processes was interrupted by another, making this a
non-preemptive FCFS (First Come, First Served) CPU scheduling algorithm. P2 was the first
process to arrive (at time = 0) and was hence executed first. Let's ignore the third column
for a moment; we'll get to that soon. Process P3 arrived next (at time = 1) and was executed
after the previous process, P2, was done executing, and so on.

Some examples of non-preemptive scheduling algorithms are - Shortest Job First (SJF, non-
preemptive), and Priority scheduling (non-preemptive).
Preemptive Scheduling

Preemptive scheduling takes into consideration the fact that some processes may have a higher
priority and hence must be executed before processes with a lower priority. In preemptive
scheduling, CPU resources are allocated to a process for only a limited period of time, after
which those resources are taken back and assigned to another process (the next in execution).
If the process has not yet completed its execution, it is placed back in the ready state, where it
remains until it gets a chance to execute once again.

So, looking again at the conditions under which CPU scheduling decisions are taken, we can
see that there isn't really a choice to make in conditions 1 and 4: if we have a process in the
ready queue, we must select it for execution. However, we do have a choice in conditions 2 and 3.
If we make a scheduling choice only when a process terminates (condition 4) or when the
current process is waiting for I/O (condition 1), then our scheduling is non-preemptive;
however, if we make scheduling decisions under the other conditions as well, our scheduling
process is preemptive.

II. Important CPU Scheduling Terminologies:

Let's now discuss some important terminologies that are relevant to CPU scheduling.

1. Arrival time: Arrival time (AT) is the time at which a process arrives at the ready queue.
2. Burst Time: Burst time (BT) is the time required by the CPU to complete the execution
of a process, i.e. the amount of CPU time the process needs (the third column in the
example tables). It is also sometimes called the execution time or running time.
3. Completion Time: As the name suggests, completion time is the time at which a process
completes its execution. It is not to be confused with burst time.
4. Turn-Around Time: Also written as TAT, turn-around time is simply the difference
between completion time and arrival time (Completion time - arrival time).
5. Waiting Time: Waiting time (WT) of a process is the difference between turn-around
time and burst time (TAT - BT), i.e. the amount of time a process waits for getting CPU
resources in the ready queue.
6. Response Time: Response time (RT) of a process is the time after which any process
gets CPU resources allocated after entering the ready queue.

III. CPU Scheduling Criteria:

Now if the CPU needs to schedule these processes, it must definitely do it wisely. What are the
wise decisions it should make to create the "best" scheduling?
 CPU Utilization: It would make sense if the scheduling was done in such a way that the
CPU is utilized to its maximum. If a scheduling algorithm is not wasting any CPU cycle
or makes the CPU work most of the time (100% of the time, ideally), then the scheduling
algorithm can be considered as good.
 Throughput: Throughput by definition is the total number of processes that are
completed (executed) per unit time or, in simpler terms, it is the total work done by the
CPU in a unit of time. Now of course, an algorithm must work to maximize throughput.
 Turnaround Time: The turnaround time is essentially the total time from a process's
arrival in the ready queue to its completion. A good scheduling algorithm minimizes
this time.
 Waiting Time: A scheduling algorithm obviously cannot change the time that is required
by a process to complete its execution, however, it can minimize the waiting time of the
process.
 Response Time: If your system is interactive, then considering only the
turnaround time to judge a scheduling algorithm is not good enough. A process might
produce results quickly and then continue computing new results while the outputs of
previous operations are shown to the user. Hence, we have another CPU scheduling
criterion, the response time (the time from submission of the process until its
first 'response' is produced). The goal is to minimize this time.

IV. Types of Schedulers:

A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.

1. FCFS:

 Jobs are executed on first come, first serve basis.


 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 0-0=0

P1 5-1=4

P2 8-2=6

P3 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75

2. Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF


 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.
Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5
P2 2 8 14

P3 3 6 8

Given: the table of processes above, with their arrival and execution times.

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25


3. Round robin:

Round Robin is a preemptive process scheduling algorithm.
Each process is provided a fixed time slice to execute, called a quantum.
Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
Context switching is used to save the states of preempted processes.
Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5


V. Real time scheduling algorithms:
Real-time systems are systems that carry real-time tasks. These tasks need to be performed
immediately, with a certain degree of urgency. In particular, these tasks are related to controlling
certain events or reacting to them. Real-time tasks can be classified as hard real-time tasks
and soft real-time tasks. 
A hard real-time task must be performed by a specified time, or huge losses may result. In
soft real-time tasks, a specified deadline can occasionally be missed, because the task can be
rescheduled or completed after the specified time. 

In real-time systems, the scheduler is considered the most important component, and is
typically a short-term task scheduler. The main focus of this scheduler is to reduce the
response time of each process rather than to handle deadlines directly. 

If a preemptive time-slicing scheduler is used, a real-time task needs to wait until the current
task's time slice completes. With a non-preemptive scheduler, even if the highest priority is
allocated to the task, it must wait for the completion of the current task, which may be
slow or of lower priority and can lead to a longer wait. 

A better approach is designed by combining both preemptive and non-preemptive scheduling.
This can be done by introducing time-based interrupts in priority-based systems: the currently
running process is interrupted at a time-based interval, and if a higher-priority process is
present in the ready queue, it is executed by preempting the current process. 

Based on schedulability, implementation (static or dynamic), and the result (self or dependent)
of analysis, the scheduling algorithms are classified as follows. 

 Static table-driven approaches: These algorithms usually perform a static analysis of the
schedule and capture the schedules that are advantageous. This helps in providing a
schedule that identifies, at run time, the task with which execution must start. 
Static priority-driven preemptive approaches: Similar to the first approach, these type of
algorithms also uses static analysis of scheduling. The difference is that instead of selecting a
particular schedule, it provides a useful way of assigning priorities among various tasks in
preemptive scheduling.

Dynamic planning-based approaches: Here, the feasible schedules are identified


dynamically (at run time). It carries a certain fixed time interval and a process is executed if
and only if satisfies the time constraint.
Dynamic best effort approaches: These types of approaches consider deadlines instead of
feasible schedules. Therefore the task is aborted if its deadline is reached. This approach is
used widely is most of the real-time systems. 

1. RM (Rate-monotonic scheduling):

Rate-monotonic scheduling is a priority algorithm that belongs to the static-priority
scheduling category of real-time operating systems. It is preemptive in nature. The priority is
decided according to the cycle time (period) of the processes involved: the process with the
shortest period has the highest priority. Thus, when a process with higher priority becomes
ready, it preempts any lower-priority running process. The priority of a process is inversely
proportional to its period.
A set of processes is guaranteed schedulable under RM if it satisfies the following condition:

U = C1/T1 + C2/T2 + ... + Cn/Tn <= n(2^(1/n) - 1)

where n is the number of processes in the process set, Ci is the computation time of process i,
Ti is the time period of process i, and U is the total processor utilization.

Example:
An example to understand the working of Rate monotonic scheduling algorithm.

Process   Execution Time (C)   Time Period (T)
P1        3                    20
P2        2                    5
P3        2                    10

Bound: n(2^(1/n) - 1) = 3(2^(1/3) - 1) ≈ 0.7798

U = 3/20 + 2/5 + 2/10 = 0.75

The combined utilization of the three processes (0.75) is below the bound (0.7798), and well
under 100%, so the process set satisfies the schedulability condition of the algorithm and can
be scheduled by RM.
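The schedulability test above can be written in a few lines. This is a hedged sketch; the function name `rm_schedulable` is my own, not from any standard library:

```python
def rm_schedulable(tasks):
    """Utilization-bound test for RM: tasks is a list of (C, T) pairs.

    Returns (utilization, bound, passes_test). Passing is a sufficient
    (not necessary) condition for RM schedulability.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

u, bound, ok = rm_schedulable([(3, 20), (2, 5), (2, 10)])
print(f"U = {u:.2f}, bound = {bound:.4f}, schedulable: {ok}")
# -> U = 0.75, bound = 0.7798, schedulable: True
```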

1. Scheduling time – To find the window over which to examine the schedule, take the LCM of
the time periods of all the processes. LCM(20, 5, 10) = 20, so we schedule over 20 time units.
2. Priority – As discussed above, the process with the shortest period gets the highest
priority. Thus P2 has the highest priority, then P3, and lastly P1:
P2 > P3 > P1

3. Representation and flow –

In this example, process P2 must execute for 2 time units in every 5-unit interval, process P3
for 2 time units in every 10-unit interval, and process P1 for 3 time units within the 20-unit
window. Keep this in mind to follow the execution of the algorithm below.

Process P2 runs first, for 2 time units (0-2), because it has the highest priority. P3 then
gets the chance and runs for 2 time units (2-4).

Since P2 has completed its 2 units for the first 5-unit interval and P3 its 2 units for the
first 10-unit interval, the lowest-priority process P1 gets the CPU and runs for 1 time unit
(4-5), at which point the first 5-unit interval ends. A new instance of P2 arrives and, because
of its priority, preempts P1 and runs for 2 units (5-7). As P3 has already completed its 2 time
units for its 10-unit interval, P1 then runs for its remaining 2 units (7-9), completing the 3
units of execution it needs within the 20-unit window.

The interval 9-10 remains idle, as no process needs the CPU. At time 10, P2 runs for 2 units
(10-12), satisfying its requirement for the third interval (10-15), and P3 then runs for 2
units (12-14), completing its second instance. The interval 14-15 again remains idle for the
same reason. At time 15, P2 executes its final 2 units (15-17), and the processor is idle from
17 to 20. This is how rate-monotonic scheduling works.
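The whole 20-unit trace above can be reproduced with a small simulation. This is a hedged sketch assuming unit time slices; the function name and data structures are my own:

```python
def rate_monotonic(tasks, hyperperiod):
    """tasks: dict name -> (C, T). Returns the task run at each time unit."""
    remaining = {name: 0 for name in tasks}
    timeline = []
    for t in range(hyperperiod):
        for name, (c, period) in tasks.items():
            if t % period == 0:          # a new job is released each period
                remaining[name] = c
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            # Rate-monotonic rule: the ready task with the shortest period runs.
            run = min(ready, key=lambda n: tasks[n][1])
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append("idle")
    return timeline

tl = rate_monotonic({"P1": (3, 20), "P2": (2, 5), "P3": (2, 10)}, 20)
print(tl)
# -> ['P2', 'P2', 'P3', 'P3', 'P1', 'P2', 'P2', 'P1', 'P1', 'idle',
#     'P2', 'P2', 'P3', 'P3', 'idle', 'P2', 'P2', 'idle', 'idle', 'idle']
```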

Conditions:
The analysis of rate-monotonic scheduling assumes a few properties that every process must
possess:
1. Processes do not share resources with other processes.
2. Deadlines are equal to the time periods, and deadlines are deterministic.
3. The highest-priority process that needs to run preempts all other processes.
4. Priorities are assigned to all processes according to the rate-monotonic protocol (shorter
period, higher priority).
Advantages:
1. It is easy to implement.
2. It is optimal among static-priority algorithms: if any static priority assignment can meet
all deadlines, rate-monotonic scheduling can also do so.
3. It takes the processes' time periods into account, unlike time-sharing algorithms such as
Round Robin, which neglect the scheduling needs of the processes.
Disadvantages:
1. It is very difficult to support aperiodic and sporadic tasks under RM.
2. RM is not optimal when task periods and deadlines differ.

2. EDF (Earliest Deadline First):
Earliest Deadline First (EDF) is an optimal dynamic-priority scheduling algorithm used in
real-time systems. It can be used for both static and dynamic real-time scheduling.
EDF assigns priorities to jobs according to their absolute deadlines: the task whose deadline
is closest gets the highest priority, and priorities are reassigned dynamically as deadlines
change. EDF is very efficient compared with other real-time scheduling algorithms: it can drive
CPU utilization up to 100% while still guaranteeing the deadlines of all tasks.

EDF does, however, add run-time overhead, since priorities must be recomputed as deadlines
change. In EDF, if the CPU utilization does not exceed 100%, all tasks meet their deadlines.
EDF finds an optimal feasible schedule; a feasible schedule is one in which every task in the
system executes within its deadline. If EDF cannot find a feasible schedule for a task set,
then no other real-time scheduling algorithm can. Every task that becomes runnable must
announce its deadline to the EDF scheduler.

The EDF scheduling algorithm does not require tasks to be periodic, nor does it require them
to have a fixed CPU burst time. In EDF, an executing task is preempted whenever another ready
job has an earlier absolute deadline; preemption is thus allowed in the Earliest Deadline
First scheduling algorithm.

Example:
Consider two processes P1 and P2.
Let the period of P1 be p1 = 50 and its processing time t1 = 25.
Let the period of P2 be p2 = 75 and its processing time t2 = 30.

Steps for solution:

1. The deadline of P1 (at time 50) is earlier than that of P2 (at time 75), so initially the
priority of P1 > P2.
2. P1 runs first and completes its execution after 25 time units.
3. At time 25, P2 starts to execute and runs until time 50, when a new instance of P1 arrives.
4. Comparing the deadlines of (P1, P2) = (100, 75), P2 has the earlier deadline and continues
to execute.
5. P2 completes its processing at time 55.
6. P1 then executes until time 75, when a new instance of P2 arrives.
7. Comparing the deadlines of (P1, P2) = (100, 150), P1 continues to execute and finishes its
remaining work at time 80.
8. The above steps repeat over the hyperperiod (LCM(50, 75) = 150).
9. At time 100, the new instance of P1 and the still-running P2 have the same absolute
deadline (150); such ties may be broken either way. If P2 is allowed to continue, it finishes
at time 110, after which P1 executes and still meets its deadline.
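The EDF walk-through above can be checked with a small simulation. This is a hedged sketch assuming deadlines equal to periods and unit-time preemption; the function name is illustrative:

```python
def edf(tasks, hyperperiod):
    """tasks: dict name -> (C, T); each job's deadline is the end of its period."""
    remaining = {n: 0 for n in tasks}
    deadline = {n: 0 for n in tasks}
    timeline = []
    for t in range(hyperperiod):
        for name, (c, period) in tasks.items():
            if t % period == 0:          # job release: refill the budget and
                remaining[name] = c      # set the job's absolute deadline
                deadline[name] = t + period
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            run = min(ready, key=lambda n: deadline[n])  # earliest deadline first
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append("idle")
    return timeline

tl = edf({"P1": (25, 50), "P2": (30, 75)}, 150)
# Switch points match the steps above: P1 runs 0-25, P2 runs 25-55, then P1 again.
print(tl[24], tl[25], tl[54], tl[55])
# -> P1 P2 P2 P1
```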
Limitations of EDF scheduling algorithm:
 Transient Overload Problem
 Resource Sharing Problem
 Efficient Implementation Problem
VI. Interprocess communication:

Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process letting
another process know that some event has occurred or the transferring of data from one process
to another.

A system can have two types of processes i.e. independent or cooperating. Cooperating processes
affect each other and may share data and information among themselves. Interprocess
Communication or IPC provides a mechanism to exchange data and information across multiple
processes, which might be on single or multiple computers connected by a network.

IPC helps achieve these things:

 Computational Speedup
 Modularity
 Information and data sharing
 Privilege separation
 Processes can communicate with each other and synchronize their actions.
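As a concrete illustration, one common IPC mechanism is a pipe. The sketch below uses Python's multiprocessing module (one possible API among many) to send a message from a child process to its parent:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Child process: notify the parent that some event has occurred.
    conn.send("event occurred")
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # blocks until the child's message arrives
    p.join()
```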
