Operating System Unit 1 & 2 Intro Notes
UNIT - I
I. Introduction:
An Operating System (OS) is software that acts as an interface between the computer hardware and the user. An operating system is the software program required to manage and operate a computing device such as a smartphone, tablet, computer, supercomputer, web server, car, network tower, or smartwatch. It is the operating system that eliminates the need to know a programming language in order to interact with computing devices.
For the most part, the IT industry largely focuses on the top five OSs, including Apple macOS,
Microsoft Windows, Google's Android OS, Linux Operating System, and Apple iOS.
A computer system can be divided into four components:
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system programs, and application programs. The hardware consists of the memory, CPU, ALU, I/O devices, peripheral devices, and storage devices. System programs include compilers, loaders, editors, the OS itself, etc. Application programs include business programs and database programs.
1. Mainframe System:
Features:
Storage: These systems have a large storage capacity, which allows them to process huge amounts of data as and when needed.
Centralized server
RAS
Scalability
Security
Compatibility
Throughput Computing
Transactional Processing
2. Desktop System:
i. The desktop OS is the environment where the user controls a personal
computer (Desktop, Notebook PC).
ii. It aids in the management of computer hardware and software resources.
iii. It supports fundamental features such as task scheduling, peripheral control, printing,
input/output, and memory allocation.
iv. The three most common operating systems for personal computers are Microsoft
Windows, macOS, and Linux.
3. Multiprocessor System:
A multiprocessor system contains two or more processors that share the computer bus, memory, and peripheral devices.
Advantages
The advantages of multiprocessor systems are as follows −
If multiple processors work at the same time, more processes can be executed in parallel, so the throughput of the system increases.
Multiprocessor systems are more reliable. Because there is more than one processor, the failure of any one processor will not bring the system to a halt. The system will slow down if this happens, but it will still work.
The electricity consumption of a multiprocessor system is less than that of a single-processor system. In a single-processor system, all processes must be executed by the one processor, which places a heavy load on it. In a multiprocessor system, there are many processors to execute the processes, so the load on each processor is comparatively less and the electricity consumed per unit of work is also less.
Fields
The different fields of multiprocessor operating systems used are as follows −
Asymmetric Multiprocessor − In this operating system, each processor is assigned predefined tasks, and a master processor has the power to run the entire system. It uses a master-slave relationship.
Symmetric Multiprocessor − In this system, every processor runs an identical copy of the OS, and the processors communicate with one another. All processors are connected as peers, so there is no master-slave relationship.
Shared Memory Multiprocessor − As the name indicates, all central processing units share a common memory.
Uniform Memory Access (UMA) Multiprocessor − In this system, all processors access all of memory at a uniform speed.
Distributed Memory Multiprocessor − A computer system consisting of a number of processors, each with its own local (private) memory, connected through a network.
NUMA Multiprocessor − NUMA stands for Non-Uniform Memory Access. In a NUMA multiprocessor, some regions of memory can be accessed faster than others, depending on which processor performs the access.
The best Operating system in multiprocessor and parallel computing environment is UNIX,
because it has many advantages such as,
It is multi-user.
It is portable.
It is good for multitasking.
It has an organized file system.
It has device independence.
Utilities are brief and operation commands can be combined in a single line.
UNIX provides various services, as it has built-in administrative tools.
UNIX can share files over electronic networks with many different kinds of equipment.
4. Clustered System:
Cluster components are generally linked via fast local area networks, and each node runs its own instance of an operating system. In most cases, all nodes share the same hardware and operating system, although different hardware or different operating systems can be used. The primary purpose of a cluster system is to assist with weather forecasting, scientific computing, and supercomputing workloads.
Two kinds of clusters can be combined to make a more efficient cluster. These are as follows:
1. Software Cluster
2. Hardware Cluster
Asymmetric Cluster System:
In an asymmetric cluster system, one node is kept in hot-standby mode while the remaining nodes run the required applications. Hot-standby mode is completely fail-safe and is a component of the cluster system. The hot-standby node continuously monitors the server functions of the active nodes; if an active node comes to a halt, the hot-standby node takes its place.
Symmetric Cluster System:
In this system, multiple nodes run the applications and monitor one another simultaneously. Because it uses all hardware resources, this cluster system is more reliable than an asymmetric cluster system.
Parallel Cluster System:
A parallel cluster system enables several users to access the same data on a shared storage system. It is made possible by special versions of software and other applications.
Classification of clusters
Computer clusters are managed to support various purposes, from general-purpose business
requirements like web-service support to computation-intensive scientific calculations. There are
various classifications of clusters. Some of them are as follows:
Fail-over Clusters:
The process of moving applications and data resources from a failed system to another system in the cluster is referred to as fail-over. Fail-over clusters are used for mission-critical workloads such as databases, application servers, mail servers, and file servers.
Load-balancing Clusters:
The cluster requires good load-balancing abilities among all available computer systems. All nodes in this type of cluster can share their computing workload with other nodes, resulting in better overall performance. For example, a web-based cluster can assign different web queries to different nodes, which helps improve the system's response speed. Some cluster systems use a round-robin method to distribute incoming requests.
Advantages
1. High Availability
Although every node in a cluster is a standalone computer, the failure of a single node does not mean a loss of service. A single node can be pulled down for maintenance while the remaining nodes take on its load.
2. Cost Efficiency
When compared to highly reliable, large-storage mainframe computers, these cluster computing systems are considered more cost-effective and cheaper. Furthermore, most of these systems outperform mainframe computer systems in terms of performance.
3. Additional Scalability
A cluster is set up in such a way that more systems could be added to it in minor increments.
Clusters may add systems in a horizontal fashion. It means that additional systems could be
added to clusters to improve their performance, fault tolerance, and redundancy.
4. Fault Tolerance
Clustered systems are quite fault-tolerant; the loss of a single node does not result in the failure of the system. They may also keep one or more nodes in hot-standby mode to replace failed nodes.
5. Performance
Clusters are commonly used to improve availability and performance over single computer systems, while usually being much more cost-effective than a single computer of comparable speed or availability.
6. Processing Speed
The processing speed is also similar to mainframe systems and other types of supercomputers on
the market.
Disadvantages
1. High Cost
One major disadvantage of this design is that it is not cost-effective. The cost is high: a cluster is more expensive than a non-clustered server management design, since it requires good hardware and a careful design.
2. Required Resources
Clustering necessitates the use of additional servers and hardware, making monitoring and
maintenance difficult. As a result, infrastructure must be improved.
3. Maintenance
A clustered system is not easy to set up, monitor, and maintain.
5. Real-Time Operating System:
A real-time operating system (RTOS) must process inputs and produce responses within strict time constraints. Examples of real-time operating systems: airline traffic control systems, command control systems, airline reservation systems, heart pacemakers, network multimedia systems, robots, etc.
1. Hard Real-Time Operating System:
These operating systems guarantee that critical tasks are completed within a strict time limit. For example, suppose a robot is hired to weld a car body. If the robot welds too early or too late, the car cannot be sold, so the welding must be completed exactly on time; this is a hard real-time system.
2. Soft Real-Time Operating System:
This operating system provides some relaxation in the time limit.
For example: multimedia systems, digital audio systems, etc. Explicit, programmer-defined and controlled processes are encountered in real-time systems. A separate process is charged with handling a single external event. The process is activated upon occurrence of the related event, signalled by an interrupt.
Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a level of priority that corresponds to the relative importance of the event it services. The processor is allocated to the highest-priority process. This type of schedule, called priority-based pre-emptive scheduling, is used by real-time systems.
3. Firm Real-Time Operating System:
An RTOS of this type also has to follow deadlines. Missing a deadline may not cause a catastrophic failure, but it can have unintended consequences, including a reduction in the quality of the product. Example: multimedia applications.
3. I/O Protection:
When I/O protection is ensured, the following situations can never occur in the system:
1. Termination of the I/O of another process
2. Viewing the I/O of another process
3. Giving priority to a particular process's I/O
An operating system is a large and complex system that can only be created by partitioning it into small pieces. Each piece should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.
Although Windows, macOS, UNIX, Linux, and other operating systems do not have the same structure, most of them share similar components, such as file, memory, process, and I/O device management.
The components of an operating system play a key role to make a variety of computer system
parts work together. There are the following components of an operating system, such as:
1. Process Management
2. File Management
3. Network Management
4. Main Memory Management
5. Secondary Storage Management
6. I/O Device Management
7. Security Management
8. Command Interpreter System
Operating system components also help keep computation correct by detecting CPU and memory hardware errors.
Process Management
The process management component is a procedure for managing the many processes running simultaneously on the operating system. Every running software application has one or more processes associated with it.
For example, when you use a browser like Chrome, a process runs for that browser program.
Process management keeps processes running efficiently, allocates memory to them, and shuts them down when needed.
The execution of a process is sequential: at any instant, at most one instruction is executed on its behalf.
Here are the following functions of process management in the operating system, such as:
o Process creation and deletion.
o Process synchronization.
o Interprocess communication.
File Management
A file is a set of related information defined by its creator. It commonly represents programs
(both source and object forms) and data. Data files can be alphabetic, numeric, or alphanumeric.
Functions:
The operating system has the following important activities in connection with file management:
Network Management
A distributed system is a collection of computers or processors that do not share memory or a clock. In this type of system, each processor has its own local memory, and the processors communicate with one another over communication lines such as fibre optics or telephone lines.
The computers in the network are connected through a communication network, which can be configured in many different ways. The network can be fully or partially connected. Network management helps users design routing and connection strategies that overcome connection and security issues.
o Distributed systems let you combine computing resources that vary in size and function. They may involve minicomputers, microprocessors, and many general-purpose computer systems.
o A distributed system also offers the user access to the various resources the network
shares.
o It helps to access shared resources that help computation to speed up or offers data
availability and reliability.
Main Memory Management
Main memory is a large array of bytes, each with its own address. Memory management is conducted through a sequence of reads and writes to specific memory addresses.
To execute a program, it must be mapped to absolute addresses and loaded into memory. The selection of a memory-management method depends on several factors.
However, it is mainly based on the hardware design of the system. Each algorithm requires
corresponding hardware support. Main memory offers fast storage that can be accessed directly
by the CPU. It is costly and hence has a lower storage capacity. However, for a program to be
executed, it must be in the main memory.
An Operating System performs the following functions for Memory Management in the
operating system:
o It helps you to keep track of primary memory.
o It determines which parts of memory are in use and by whom, and which parts are not in use.
o In a multiprogramming system, the OS decides which process will get memory and how
much.
o It allocates memory when a process requests it.
o It also de-allocates memory when a process no longer requires it or has been terminated.
Secondary-Storage Management
The most important task of a computer system is to execute programs. These programs access data in main memory during execution. Because main memory is too small to store all data and programs permanently, the computer system provides secondary storage to back up main memory.
Here are some major functions of secondary storage management in the operating system:
o Storage allocation
o Disk scheduling
I/O Device Management
One of the important uses of an operating system is to hide the variations of specific hardware devices from the user.
Functions of I/O management
The I/O management system offers the following functions, such as:
Security Management
The various processes in an operating system need to be protected from each other's activities. Various mechanisms ensure that processes wanting to use files, memory, the CPU, and other hardware resources have proper authorization from the operating system.
Security refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by the computer system, together with some means of enforcing the controls that are imposed.
For example, memory-addressing hardware confirms that a process executes only within its own address space. A timer ensures that no process keeps control of the CPU without eventually relinquishing it. Finally, no process is allowed to perform its own device I/O directly, which protects the integrity of the various peripheral devices.
Security can improve reliability by detecting latent errors at the interfaces between component subsystems. Early detection of interface errors can prevent the contamination of a healthy subsystem by a malfunctioning one. An unprotected resource cannot be defended against use (or misuse) by an unauthorized or incompetent user.
Command Interpreter System
One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system.
Many commands are given to the operating system by control statements. A program that reads and interprets control statements is executed automatically when a new job is started in a batch system or when a user logs in to a time-shared system. This program is variously called the control-card interpreter or the command-line interpreter, and is often known as the shell.
Its function is quite simple: get the next command statement and execute it. Command statements deal with process management, I/O handling, secondary-storage management, main-memory management, file-system access, protection, and networking.
V. Handheld Systems:
A handheld, handheld PC or handheld computer is a computer device that can be held in the
palm of one's hand. These computers are commonly used to store appointments, phone numbers,
tasks, and other data commonly needed while away from home or office.
Examples: Personal Digital Assistants (PDAs), cellular telephones.
Issues: limited memory, slow processors, small display screens.
VI. Operating System Services:
An Operating System provides services to both the users and the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs like
printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management −
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O device.
The operating system provides access to the required I/O device when needed.
File System Manipulation
A file represents a collection of related information. Computers store files on disk (secondary storage) for long-term storage. Examples of storage media include magnetic tape, magnetic disk, and optical disks such as CD and DVD. Each of these media has its own properties, such as speed, capacity, data-transfer rate, and data-access method.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management −
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in the
network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −
Error handling
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the memory hardware. Following are the major activities of an operating system with respect to error handling −
Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles, and file storage must be allocated to each user or job. Following are the major activities of an operating system with respect to resource management −
Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users
to the resources defined by a computer system. Following are the major activities of an operating
system with respect to protection −
VII. System Programs:
System programming can be defined as the act of building systems software using system programming languages. In the computer hierarchy, hardware is at the bottom; above it come the operating system, then system programs, and finally application programs. Program development and execution can be done conveniently with system programs. Some system programs are simply user interfaces; others are complex. System programs traditionally lie between the user interface and the system calls.
From here, the user sees only up to the system programs and cannot see the system calls directly.
Examples of system software and system programs include:
Windows 10
Mac OS X
Ubuntu
Linux
Unix
Android
Anti-virus
Disk formatting
Computer language translators
VIII. Process concepts:
A process is divided into four sections:
1 Stack
The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
2 Heap
This is memory that is dynamically allocated to the process during its run time.
3 Text
This includes the current activity, represented by the value of the program counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.
A program is a piece of code, which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language −
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can conclude that a process is a
dynamic instance of a computer program.
When a process executes, it passes through different states. These states may differ across operating systems, and their names are not standardized.
In general, a process can be in one of the following five states at a time.
1 Start
This is the initial state, when the process is first started or created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.
4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input,
or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved to the
terminated state where it waits to be removed from main memory.
1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.
2 Process privileges
The privileges required to allow or disallow access to system resources.
3 Process ID
A unique identifier for each process in the operating system.
4 Pointer
A pointer to the parent process.
5 Program Counter
The program counter is a pointer to the address of the next instruction to be executed for this process.
6 CPU registers
The various CPU registers whose contents must be saved when the process leaves the running state, so that execution can resume later.
7 CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.
8 Memory management information
This includes page-table information, memory limits, and segment tables, depending on the memory system used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID etc.
10 IO status information
This includes a list of the I/O devices allocated to the process.
The architecture of a PCB is entirely dependent on the operating system and may contain different information in different operating systems.
The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
UNIT – 2
I. The process concept in Operating System
A process is defined as an entity which represents the basic unit of work to be implemented in
the system. To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the
program.
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, making them available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute
next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Types:
Capacity scheduler: The capacity scheduler allows multiple tenants to share a large cluster. It also provides visibility into which tenant is utilizing more of the cluster's resources or slots, so that a single user or application does not take a disproportionate or unnecessary number of slots in the cluster. The capacity scheduler mainly contains three types of queues - root, parent, and leaf - which represent the cluster, an organization or subgroup, and application submissions, respectively.
Service scheduler: The service scheduler's main benefit is that it can take into account the availability of resources in other schedules. If a service depends on a number of resources being available, the service scheduler checks when all the required resources (objects and/or people) are available so that the user can make a booking.
There are essentially four conditions under which CPU scheduling decisions are taken, namely when a process:
1. switches from the running state to the waiting state (for example, on an I/O request);
2. switches from the running state to the ready state (for example, on an interrupt);
3. switches from the waiting state to the ready state (for example, on I/O completion);
4. terminates.
In the case of conditions 1 and 4, the CPU does not really have a choice of scheduling: if a process exists in the ready queue, the CPU simply selects it for execution. In cases 2 and 3, the CPU has a choice of selecting a particular process to execute next.
Non-Preemptive Scheduling
In the case of non-preemptive scheduling, new processes are executed only after the current
process has completed its execution. The process holds the resources of the CPU (CPU time) till
its state changes to terminated or is pushed to the process waiting state. If a process is currently
being executed by the CPU, it is not interrupted until it is completed. Once the process has completed its execution, the processor picks the next process from the ready queue (the queue in which all processes that are ready for execution are stored).
For example: consider a set of processes that are all executed in the order in which they arrived, with none of them interrupted by another - a non-preemptive, FCFS (First Come, First Served) CPU scheduling algorithm. Suppose P2 was the first process to arrive (at time = 0); it was hence executed first. Process P3 arrived next (at time = 1) and was executed after the previous process, P2, had finished executing, and so on.
Some examples of non-preemptive scheduling algorithms are - Shortest Job First (SJF, non-
preemptive), and Priority scheduling (non-preemptive).
Preemptive Scheduling
Preemptive scheduling takes into consideration the fact that some processes could have a higher priority and hence must be executed before processes that have a lower priority. In preemptive scheduling, CPU resources are allocated to a process for only a limited period of time, after which they are taken back and assigned to another process (the next in execution). If the process had not yet completed its execution, it is placed back in the ready state, where it remains until it gets a chance to execute again.
So, looking again at the conditions under which CPU scheduling decisions are taken, there isn't really a choice in conditions 1 and 4: if we have a process in the ready queue, we must select it for execution. However, we do have a choice in conditions 2 and 3. If we opt to make a scheduling decision only when a process terminates (condition 4) or when the current process must wait for I/O (condition 1), then our scheduling is non-preemptive; if we make scheduling decisions under the other conditions as well, our scheduling is preemptive.
Let's now discuss some important terminologies that are relevant to CPU scheduling.
1. Arrival time: Arrival time (AT) is the time at which a process arrives at the ready queue.
2. Burst Time: Burst time (BT) is the time required by the CPU to complete the execution of a process, i.e. the amount of CPU time needed for its execution. It is also sometimes called the execution time or running time.
3. Completion Time: As the name suggests, completion time is the time at which a process
completes its execution. It is not to be confused with burst time.
4. Turn-Around Time: Also written as TAT, turn-around time is simply the difference
between completion time and arrival time (Completion time - arrival time).
5. Waiting Time: Waiting time (WT) of a process is the difference between turn-around
time and burst time (TAT - BT), i.e. the amount of time a process waits for getting CPU
resources in the ready queue.
6. Response Time: Response time (RT) of a process is the time after which any process
gets CPU resources allocated after entering the ready queue.
Now if the CPU needs to schedule these processes, it must definitely do it wisely. What are the
wise decisions it should make to create the "best" scheduling?
CPU Utilization: It makes sense to schedule in such a way that the CPU is utilized to
its maximum. If a scheduling algorithm does not waste CPU cycles and keeps the CPU
busy most of the time (ideally 100% of the time), it can be considered good.
Throughput: Throughput by definition is the total number of processes that are
completed (executed) per unit time or, in simpler terms, it is the total work done by the
CPU in a unit of time. Now of course, an algorithm must work to maximize throughput.
Turnaround Time: The turnaround time is the total time from a process's arrival in the
ready queue until its completion. A good scheduling algorithm minimizes this time.
Waiting Time: A scheduling algorithm obviously cannot change the time required
by a process to complete its execution; however, it can minimize the waiting time of the
process.
Response Time: If your system is interactive, then judging a scheduling algorithm by
turnaround time alone is not good enough. A process might produce results quickly and
then continue to compute new results while the outputs of previous operations are being
shown to the user. Hence we have another CPU scheduling criterion, response time: the
time from submission of the process until its first response is produced. The goal of the
scheduler is to minimize this time.
1. FCFS:
Consider the following processes (arrival and burst times in time units):

Process   Arrival Time   Burst Time
P0        0              5
P1        1              3
P2        2              8
P3        3              6

FCFS waiting times (WT = service start - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

2. SJF (Shortest Job First):

Process   Arrival Time   Burst Time   Service Time
P0        0              5            0
P1        1              3            5
P2        2              8            14
P3        3              6            8

SJF waiting times (WT = service time - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5

3. Round Robin (time quantum = 3):
P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11
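The FCFS waiting times above can be reproduced with a short calculation; this is a sketch, with the function name made up for illustration:

```python
# Sketch of the FCFS calculation: processes are served strictly in
# arrival order, so each process waits for all earlier arrivals to finish.

def fcfs_waiting_times(procs):
    """procs: list of (name, arrival, burst), already in arrival order.
    Returns {name: waiting_time}."""
    clock, waits = 0, {}
    for name, arrival, burst in procs:
        start = max(clock, arrival)    # CPU may sit idle until the process arrives
        waits[name] = start - arrival  # WT = service start - arrival
        clock = start + burst
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(fcfs_waiting_times(procs))  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
```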
In real-time systems, the scheduler is considered the most important component; it is
typically a short-term task scheduler. The main focus of this scheduler is to reduce the
response time of each associated process rather than to handle deadlines directly.
If a preemptive time-slicing scheduler is used, a real-time task needs to wait at most until the
current task's time slice completes. In the case of a non-preemptive scheduler, even if the
highest priority is allocated to the task, it must wait until the current task finishes. That
current task may be slow or of lower priority, which can lead to a long wait.
Based on schedulability, implementation (static or dynamic), and the result (self or dependent)
of analysis, the scheduling algorithms are classified as follows.
1. RMS (Rate Monotonic Scheduling):
Rate monotonic scheduling assigns static priorities by period: the shorter the period, the
higher the priority. A set of n periodic processes is guaranteed to be schedulable if

U = C1/T1 + C2/T2 + ... + Cn/Tn <= n * (2^(1/n) - 1)

where n is the number of processes in the process set, Ci is the computation time of
process i, Ti is the time period of process i, and U is the processor utilization.
Example:
An example to understand the working of the rate monotonic scheduling algorithm.

Process   Execution Time (C)   Time Period (T)
P1        3                    20
P2        2                    5
P3        2                    10

U = 3/20 + 2/5 + 2/10 = 0.15 + 0.40 + 0.20 = 0.75

This is less than 1 (100% utilization). The combined utilization of the three processes is also
below the threshold n * (2^(1/n) - 1) = 3 * (2^(1/3) - 1) ≈ 0.78, which means the above set of
processes is schedulable and thus satisfies the above equation of the algorithm.
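The schedulability check for this process set can be sketched as a small function (the name is made up for illustration):

```python
# Sketch of the rate-monotonic schedulability test:
# U = sum(Ci / Ti) must not exceed n * (2^(1/n) - 1).

def rm_schedulable(tasks):
    """tasks: list of (exec_time, period). Returns (U, bound, ok)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

u, bound, ok = rm_schedulable([(3, 20), (2, 5), (2, 10)])
print(f"U = {u:.2f}, bound = {bound:.3f}, schedulable = {ok}")
# U = 0.75, bound = 0.780, schedulable = True
```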
1. Scheduling time – To find the length of the schedule, take the LCM of the time periods
of all the processes. LCM(20, 5, 10) in the above example is 20, so we can schedule the
processes over 20 time units.
2. Priority – As discussed above, the highest priority goes to the process with the shortest
time period. Thus P2 has the highest priority, then P3, and lastly P1:
P2 > P3 > P1
In this schedule, process P2 runs for 2 time units in every 5-unit period, process
P3 runs for 2 time units in every 10-unit period, and process P1 runs for 3 time units
within the 20-unit period. Keep this in mind to follow the execution of the
algorithm below.
Process P2 runs first for 2 time units because it has the highest priority. After it
completes its 2 units, P3 gets the CPU and runs for 2 time units.
Since P2 must run 2 units in every interval of 5 time units and P3 must run 2 units in every
interval of 10 time units, both have fulfilled their demand for now, so process P1, which
has the least priority, gets the CPU and runs for 1 time unit. At this point the first
interval of five time units is complete, so the newly released P2 preempts P1 and runs for
2 units. As P3 has already completed its 2 units for its 10-unit interval, P1 gets the CPU
again and runs for its remaining 2 units, completing its execution of 3 units within the
20 time units.
The interval 9-10 remains idle because no process needs it. At time 10, process P2 runs
for 2 units, meeting its demand for the third interval (10-15). Process P3 then runs
for 2 units, completing its execution. The interval 14-15 again remains idle for the same
reason. At time 15, process P2 executes for 2 units, completing its execution. This is how
rate monotonic scheduling works.
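The walkthrough above can be sketched as a small simulation, assuming unit time steps, fixed priorities P2 > P3 > P1, and one hyperperiod of LCM(20, 5, 10) = 20 time units; the function name is made up for illustration:

```python
# Sketch of a rate-monotonic schedule simulation over one hyperperiod.
# Each unit slot runs the highest-priority task with work remaining.

def rm_timeline(tasks, horizon):
    """tasks: list of (name, exec_time, period), highest priority first.
    Returns the task name run in each unit slot, or 'idle'."""
    remaining = {name: 0 for name, _, _ in tasks}
    timeline = []
    for t in range(horizon):
        for name, c, period in tasks:
            if t % period == 0:          # a new job of this task is released
                remaining[name] = c
        for name, _, _ in tasks:         # pick the highest-priority ready task
            if remaining[name] > 0:
                remaining[name] -= 1
                timeline.append(name)
                break
        else:
            timeline.append("idle")      # no task has work in this slot
    return timeline

tl = rm_timeline([("P2", 2, 5), ("P3", 2, 10), ("P1", 3, 20)], 20)
print(tl)
```

The simulation reproduces the idle slots at 9-10 and 14-15 described in the walkthrough.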
Conditions:
The analysis of rate monotonic scheduling assumes a few properties that every process should
possess:
1. Processes should not share resources with other processes.
2. Deadlines are equal to the time periods. Deadlines are deterministic.
3. The highest-priority process that needs to run preempts all other processes.
4. Priorities must be assigned to all processes according to the rate monotonic
protocol.
Advantages:
1. It is easy to implement.
2. If any static priority assignment algorithm can meet the deadlines, then rate monotonic
scheduling can also do so: it is optimal among static-priority algorithms.
3. It takes the time periods of the processes into account, unlike time-sharing algorithms
such as Round Robin, which neglect the scheduling needs of individual processes.
Disadvantages:
1. It is very difficult to support aperiodic and sporadic tasks under RMA.
2. RMA is not optimal when task periods and deadlines differ.
2. EDF:
Earliest Deadline First (EDF) is an optimal dynamic priority scheduling algorithm used in
real-time systems. It can be used for both static and dynamic real-time scheduling.
EDF assigns priorities to jobs for scheduling, according to the absolute deadline: the task
whose deadline is closest gets the highest priority. The priorities are assigned and changed
dynamically. EDF is very efficient compared to other real-time scheduling algorithms: it can
drive CPU utilization up to about 100% while still guaranteeing the deadlines of all the
tasks.
In EDF, if the total CPU utilization is not more than 100%, all tasks meet their deadlines.
EDF finds an optimal feasible schedule; a feasible schedule is one in which every task in the
system executes within its deadline. If EDF cannot find a feasible schedule for all the tasks
in a real-time system, then no other real-time scheduling algorithm can. A known weakness,
however, is that EDF behaves poorly when the system is overloaded (demanded utilization above
100%). Every task that becomes runnable must announce its deadline to the EDF scheduler.
The EDF scheduling algorithm does not require tasks or processes to be periodic, nor does it
require a fixed CPU burst time. In EDF, the executing task is preempted whenever another
ready periodic instance has an earlier deadline. Preemption is thus allowed in the Earliest
Deadline First scheduling algorithm.
Example:
Consider two processes P1 and P2.
Let the period of P1 be p1 = 50 and its processing time t1 = 25.
Let the period of P2 be p2 = 75 and its processing time t2 = 30.
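For this pair of processes, the EDF feasibility condition can be sketched as follows; under EDF a periodic task set is feasible whenever total utilization does not exceed 1, and the function name here is made up for illustration:

```python
# Sketch of the EDF schedulability check: U = sum(ti / pi) <= 1.

def edf_feasible(tasks):
    """tasks: list of (processing_time, period). Returns (U, ok)."""
    u = sum(t / p for t, p in tasks)
    return u, u <= 1.0

u, ok = edf_feasible([(25, 50), (30, 75)])
print(f"U = {u:.2f}, feasible = {ok}")  # U = 0.90, feasible = True
```

Since 25/50 + 30/75 = 0.9 <= 1, EDF can schedule both processes while meeting every deadline.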
Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process letting
another process know that some event has occurred or the transferring of data from one process
to another.
A system can have two types of processes i.e. independent or cooperating. Cooperating processes
affect each other and may share data and information among themselves. Interprocess
Communication or IPC provides a mechanism to exchange data and information across multiple
processes, which might be on single or multiple computers connected by a network.
Computational Speedup
Modularity
Information and data sharing
Privilege separation
Processes can communicate with each other and synchronize their actions.
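The message-passing style of IPC described above can be sketched with a pipe between two processes; this uses Python's multiprocessing module purely as an illustration, and the function names are made up:

```python
from multiprocessing import Process, Pipe

# Minimal sketch of IPC via message passing: one process notifies
# another that an event has occurred, over a pipe.

def worker(conn):
    conn.send("task finished")   # let the other process know the event occurred
    conn.close()

def run_demo():
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    message = parent_conn.recv()  # blocks until the child sends
    p.join()
    return message

if __name__ == "__main__":
    print(run_demo())
```

The same idea underlies OS-level primitives such as pipes, message queues, and sockets: the kernel carries data between otherwise isolated address spaces.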