
Unit 4

CPU Scheduling and Algorithm


(Marks-14)
• Basic Concepts / Scheduling Objectives
• In a single-processor system, only one process can run at a time. Others must
wait until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization.
• The idea is relatively simple. A process is executed until it must wait, typically
for the completion of some I/O request. In a simple computer system, the CPU
then just sits idle. All this waiting time is wasted; no useful work is
accomplished.
• With multiprogramming, we try to use this time productively. Several processes
are kept in memory at one time. When one process has to wait, the operating
system takes the CPU away from that process and gives the CPU to another
process. This pattern continues. Every time one process has to wait, another
process can take over use of the CPU.
• What is CPU Scheduling?
• CPU Scheduling is the process of allowing one process to use the CPU while
another process is delayed (on standby) because some resource such as I/O is
unavailable; this makes full use of the CPU, which is why CPU scheduling is
needed.
• The purpose of CPU Scheduling is to make the system more efficient and faster.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue for execution.
• CPU and I/O Burst Cycles :
• Process execution consists of a cycle of CPU execution and I/O wait. Processes
alternate between these two states.

• A CPU burst is when the process is being executed on the CPU. An I/O burst is when
the process is waiting for I/O before further execution. After an I/O burst, the process
goes into the ready queue for its next CPU burst.

• Process execution begins with a CPU burst. That is followed by an I/O burst,
then another CPU burst, then another I/O burst, and so on. Eventually, the last
CPU burst will end with a system request to terminate execution, rather than
with another I/O burst.
• Types of CPU Scheduling :
• There are two types of CPU scheduling:
1) Pre-emptive Scheduling:
• In this type of scheduling, tasks are usually assigned priorities. At times it is
necessary to run a task with a higher priority even though another task is
currently running.
• The running task is therefore interrupted for some time and resumed later,
once the higher-priority task has finished its execution.
• This type of scheduling applies whenever a process switches either from
the running state to the ready state or from the waiting state to the ready
state.
• The resources (that is, CPU cycles) are allocated to a process for a limited
amount of time and are then taken away; the process is placed back in the
ready queue if it still has CPU burst time remaining, and it stays there until
it gets its next chance to execute.
∙ Some algorithms based on preemptive scheduling are:
• Round Robin Scheduling (RR),
• Shortest Remaining Time First (SRTF),
• Priority (preemptive version) Scheduling.

2) Non-Pre-emptive Scheduling:
•Under non-preemptive scheduling, once the CPU has been allocated to
a process, the process keeps the CPU until it releases the CPU either
by terminating or by switching to the waiting state.
•It is the only method that can be used on certain hardware platforms
because It does not require the special hardware(for example a timer)
needed for preemptive scheduling.
• In non-preemptive scheduling, the scheduler does not interrupt a running
process in the middle of its execution. Instead, it waits until the process
completes its CPU burst, and only then can it allocate the CPU to another
process.
• Some algorithms based on non-preemptive scheduling are: Shortest Job
First (SJF, basically non-preemptive) Scheduling, Priority (non-preemptive
version) Scheduling, etc.
• What are the Scheduling Criteria?
• If the CPU needs to schedule these processes, it must do so wisely. What
decisions should it make to produce the "best" schedule?
• CPU Utilization: Scheduling should be done in such a way that the CPU is
utilized to its maximum. If a scheduling algorithm does not waste CPU
cycles and keeps the CPU busy most of the time (ideally 100% of the time),
it can be considered good.
•Throughput: Throughput by definition is the total number of
processes that are completed (executed) per unit time or, in simpler
terms, it is the total work done by the CPU in a unit of time. Now of
course, an algorithm must work to maximize throughput.
• Turnaround Time: The turnaround time is the total time from a process's
arrival in the ready queue to its completion. A good CPU scheduling
algorithm minimizes this time.

• Waiting Time: A scheduling algorithm obviously cannot change the
time that is required by a process to complete its execution; however,
it can minimize the waiting time of the process.
•Response Time:
•If your system is interactive, then taking into consideration simply the
turnaround time to judge a scheduling algorithm is not good enough.
•A process might produce results quicker, and then continue to
compute new results while the outputs of previous operations are
being shown to the user.
•Hence, we have another CPU scheduling criteria, which is the
response time (time taken from submission of the process until its first
'response' is produced). The goal of the CPU would be to minimize
this time.
•CPU Scheduling Terminologies :
∙ Arrival time: Arrival time (AT) is the time at which a process arrives at
the ready queue.
∙ Burst Time: Burst time (BT) is the time required by the CPU to complete
the execution of a process, i.e. the amount of CPU time a process needs. It
is also sometimes called the execution time or running time.
∙ Completion Time: As the name suggests, completion time is the time at
which a process completes its execution. It is not to be confused with burst
time.
∙ Turn-Around Time: Also written as TAT, turnaround time is simply the
difference between completion time and arrival time
(Turnaround time = Completion time - Arrival time).
∙ Waiting Time: Waiting time (WT) of a process is the difference between
turnaround time and burst time (WT = TAT - BT), i.e. the amount of time a
process waits in the ready queue for CPU resources.

∙ Response Time: Response time (RT) of a process is the time from when it
enters the ready queue until it is first allocated the CPU.
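The formulas above can be sketched in a few lines of Python (the sample arrival, burst, and completion values are hypothetical):

```python
# Minimal sketch of the scheduling formulas defined above:
# TAT = CT - AT, and WT = TAT - BT. Sample values are hypothetical.

def turnaround_time(completion, arrival):
    return completion - arrival      # TAT = CT - AT

def waiting_time(turnaround, burst):
    return turnaround - burst        # WT = TAT - BT

# Example: a process arrives at t=0, needs a 24 ms burst, completes at t=24.
tat = turnaround_time(24, 0)   # 24
wt = waiting_time(tat, 24)     # 0 (it never waited)
print(tat, wt)
```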
•Types of Scheduling Algorithms :
•CPU Scheduling deals with the problem of deciding which of
the processes in the ready queue is to be allocated the CPU.
• The algorithm used by the scheduler to carry out the selection
of a process for execution is known as the scheduling algorithm.
a. First Come First Serve (FCFS)
b. Shortest Job First (SJF)
c. Shortest Remaining Time (SRTN)
d. Round Robin Scheduling
e. Priority Scheduling
f. Multilevel Queue Scheduling
First Come First Serve (FCFS):
•By far the simplest CPU-scheduling algorithm is the first-come,
first-served (FCFS) scheduling algorithm.
•With this scheme, the process that requests the CPU first is allocated the
CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO
queue.
•When a process enters the ready queue, its PCB is linked onto the tail of
the queue.
• When the CPU is free, it is allocated to the process at the head of the
queue.
•The running process is then removed from the queue. The code for FCFS
scheduling is simple to write and understand.
•On the negative side, the average waiting time under the FCFS policy is
often quite long.
•Consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds:
•Process Burst Time:
•P1 24
•P2 3
•P3 3
• If the processes arrive in the order P1, P2, P3, and are served in FCFS
order, we get the result shown in the following Gantt chart, which is a
bar chart that illustrates a particular schedule, including the start and
finish times of each of the participating processes:
• | P1 (0-24) | P2 (24-27) | P3 (27-30) |
• The waiting time is 0 milliseconds for process P1, 24 milliseconds for
process P2, and 27 milliseconds for process P3. Thus, the average
waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes
arrive in the order P2, P3, P1, however, the results will be as shown in
the following Gantt chart:
• | P2 (0-3) | P3 (3-6) | P1 (6-30) |

•The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.


•This reduction is substantial. Thus, the average waiting time under an
FCFS policy is generally not minimal and may vary substantially if the
processes’ CPU burst times vary greatly.
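As a minimal sketch, the two schedules above can be reproduced in Python (processes are passed as a plain list of burst times, in arrival order):

```python
# FCFS sketch reproducing the P1/P2/P3 example above
# (all processes arrive at time 0; burst times 24, 3, 3 ms).

def fcfs(bursts):
    """Return per-process waiting times for processes served in list order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits until the CPU frees up
        clock += burst
    return waits

waits = fcfs([24, 3, 3])          # order P1, P2, P3
print(waits, sum(waits) / 3)      # [0, 24, 27] 17.0

waits2 = fcfs([3, 3, 24])         # order P2, P3, P1
print(sum(waits2) / 3)            # 3.0
```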
•Advantages-
•It is simple and easy to understand.
•It can be easily implemented using queue data structure.
•It does not lead to starvation.
•Disadvantages-
•Average waiting time is very large.
•FCFS is not an attractive alternative on its own for a single
processor system.
Shortest Job First (SJF)
• In FCFS scheduling, we scheduled the processes according to
their arrival time.
• The SJF scheduling algorithm, however, schedules the processes
according to the length of the CPU burst they require.
• In SJF scheduling, the process with the lowest burst time,
among the processes in the ready queue, is scheduled next.
• When the CPU is available, it is assigned to the process that has
the smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
• It is also called shortest-process-next scheduling.
• Advantages of SJF:
1) Maximum throughput (number of processes completed per unit
time)
2) Minimum average waiting and turnaround time
• Disadvantages of SJF:
1) May suffer from starvation
2) It is hard to implement, because the exact burst time of a
process cannot be known in advance

• SJF can be implemented in two different manners:
• 1) Non-Preemptive SJF
• 2) Preemptive SJF
1) Non-Preemptive SJF:
• In this method, if the CPU is executing a job, it is not stopped
before completion.
2) Preemptive SJF:
• In this method, while the CPU is executing a job, if a new job arrives
with a smaller burst time, the current job is preempted (sent back to
the ready queue) and the new job is executed. It is also called
Shortest Remaining Time First.
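A minimal sketch of the non-preemptive variant, assuming all processes arrive at time 0 (the burst times reuse the earlier P1/P2/P3 example):

```python
# Non-preemptive SJF sketch; all processes assumed to arrive at time 0.

def sjf(processes):
    """processes: list of (name, burst). Returns execution order and waits."""
    # Shortest burst first; Python's sort is stable, so equal bursts keep
    # their list (arrival) order, matching the FCFS tie-break above.
    order = sorted(processes, key=lambda p: p[1])
    waits, clock = {}, 0
    for name, burst in order:
        waits[name] = clock
        clock += burst
    return [n for n, _ in order], waits

order, waits = sjf([("P1", 24), ("P2", 3), ("P3", 3)])
print(order)                     # ['P2', 'P3', 'P1']
print(sum(waits.values()) / 3)   # 3.0, versus 17.0 under FCFS
```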
•Shortest Remaining Time Scheduling (SRTN):
•The Preemptive version of Shortest Job First(SJF) scheduling is
known as Shortest Remaining Time First (SRTF). With the help
of the SRTF algorithm, the process having the smallest amount
of time remaining until completion is selected first to execute.
•So basically in SRTF, the processes are scheduled according to
the shortest remaining time.
• However, the SRTF algorithm involves more overhead than
Shortest Job First (SJF) scheduling, because in SRTF the OS must
frequently monitor the remaining CPU time of the jobs in the
READY queue and perform context switching.
• In the SRTF scheduling algorithm, the execution of any
process can be stopped after a certain amount of time. On
the arrival of every process, the short-term scheduler
selects, from the available and running processes, the one
with the least remaining burst time.
• Once all processes have arrived in the ready queue, no
further preemption is done and the algorithm works the
same as SJF scheduling. When a process is removed from
execution and the next process is scheduled, the context of
the removed process is saved in its Process Control Block,
which is accessed again on the next execution of this process.
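A rough unit-by-unit simulation of SRTF (the (arrival, burst) pairs below are hypothetical):

```python
# SRTF sketch: at every time unit, run the arrived process with the
# least remaining burst time. Process names and times are hypothetical.

def srtf(procs):
    """procs: dict name -> (arrival, burst). Returns completion times."""
    remaining = {n: b for n, (a, b) in procs.items()}
    done, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:          # nothing has arrived yet; CPU idles
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining time
        remaining[n] -= 1      # run it for one time unit
        clock += 1
        if remaining[n] == 0:
            done[n] = clock
            del remaining[n]
    return done

# P2 arrives at t=1 with a shorter burst, preempting P1.
print(srtf({"P1": (0, 8), "P2": (1, 4)}))  # {'P2': 5, 'P1': 12}
```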

• Advantages of SRTF
• The main advantage of the SRTF algorithm is that it processes
jobs faster than the SJF algorithm, provided its context-switching
overhead is not counted.
• Disadvantages of SRTF
• Among long processes, it favors only those that are just about to complete,
not those that have just started their operations; thus, starvation of long
processes may still occur.
• Like SJF, SRTF also requires an estimate of the next CPU burst of a
process in advance.
• Round Robin Scheduling:
• The Round Robin (RR) scheduling algorithm is designed mainly for
time-sharing systems. This algorithm is similar to FCFS
scheduling, but in Round Robin (RR) scheduling, preemption is
added, which enables the system to switch between processes.
• A fixed time, called a quantum, is allotted to each process for
execution.
• Once a process has executed for the given time period, it
is preempted and another process executes for the given
time period.
∙ Context switching is used to save the states of preempted
processes.
∙ This algorithm is simple and easy to implement and, most
importantly, it is starvation-free, as all processes get a fair
share of the CPU.
∙ It is important to note that the length of the time quantum
is generally from 10 to 100 milliseconds.

• Some important characteristics of the Round Robin (RR) algorithm are as
follows:
• Round Robin Scheduling resides in the category of preemptive
algorithms.
• This algorithm is one of the oldest, easiest, and fairest algorithms.
• This algorithm suits real-time use because it responds to an event
within a specific time limit.
• In this algorithm, the time slice should be the minimum needed for
a specific task to make progress, though it may vary for different
operating systems.
• This is a widely used scheduling method in traditional
operating systems.

• Advantages of RR:
1. It is actually implementable in a system because it does not
depend on knowing burst times in advance.
2. It doesn't suffer from the problem of starvation.
3. All the jobs get a fair allocation of CPU.
• Disadvantages of RR:
1. The higher the time quantum, the higher the response time in
the system.
2. The lower the time quantum, the higher the context-switching
overhead in the system.
3. Deciding a good time quantum is a genuinely difficult task.
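A minimal RR sketch, assuming all processes are in the ready queue at time 0 and a hypothetical quantum of 4 time units:

```python
from collections import deque

# Round Robin sketch: each process runs for at most one quantum, then
# goes to the tail of the queue. Process names and bursts are hypothetical.

def round_robin(processes, quantum=4):
    """processes: list of (name, burst). Returns completion times."""
    queue = deque(processes)
    done, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, back to tail
        else:
            done[name] = clock
    return done

print(round_robin([("P1", 10), ("P2", 5), ("P3", 3)]))
# {'P3': 11, 'P2': 16, 'P1': 18}
```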
•Priority Scheduling Algorithm :
•Priority Scheduling is a process scheduling algorithm based on
priority where the scheduler selects tasks according to priority.
Thus, processes with higher priority execute first followed by
processes with lower priorities.
• If two jobs have the same priority, the process to execute first is
chosen on the basis of round-robin or FCFS. What priority a
process should have depends on its memory requirements, time
requirements, the ratio of I/O burst to CPU burst, etc.
• Types of Priority Scheduling
• Following are the two main types of priority scheduling:
1) Preemptive Scheduling: Tasks execute according to their
priorities. If a lower-priority task is running and a higher-priority
task arrives, the lower-priority task is put on hold. The
higher-priority task replaces it and, once it has finished executing,
the lower-priority task resumes execution from where it was
paused. This method requires special hardware such as a timer.
2) Non-Preemptive Scheduling: The OS allocates the CPU to
a specific process, which releases the CPU only by switching to
the waiting state or by terminating. This method can be used on
various hardware platforms because, unlike preemptive
scheduling, it doesn't require any special hardware.
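A minimal sketch of the non-preemptive variant, assuming all processes arrive at time 0 and that a smaller number means a higher priority (the processes below are hypothetical):

```python
# Non-preemptive priority scheduling sketch; all processes assumed
# present at time 0, lower number = higher priority (a common convention).

def priority_schedule(processes):
    """processes: list of (name, burst, priority). Returns order and waits.

    Ties fall back to FCFS because Python's sort is stable: processes
    with equal priority keep their list (arrival) order.
    """
    order = sorted(processes, key=lambda p: p[2])
    waits, clock = {}, 0
    for name, burst, _ in order:
        waits[name] = clock
        clock += burst
    return [n for n, _, _ in order], waits

order, waits = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3)])
print(order)  # ['P2', 'P1', 'P3'] (P1 before P3 by the FCFS tie-break)
```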
•Advantages :
•Following are the benefits of priority scheduling method:
•Easy to use.
•Processes with higher priority execute first which saves time.
•The importance of each process is precisely defined.
•A good algorithm for applications with fluctuating time and
resource requirements
• Disadvantages:
• Following are the disadvantages of priority scheduling:
∙ We can lose all the low-priority processes if the system
crashes before they have run.
∙ This method can cause starvation if high-priority
processes take too much CPU time; a lower-priority
process can be postponed for an indefinite time.
∙ There is a chance that a process can't run even when it is
ready, because some other process is running currently.
Multilevel Queue Scheduling:
• Multilevel queue scheduling is based on response-time
requirements.
• Some processes require a quick response from the processor;
other processes can wait.
• A multilevel queue scheduling algorithm partitions the ready
queue into separate queues.
• This class of scheduling algorithms was created for
situations in which processes are easily classified into different
groups.
•For example, a common division is made between foreground
(interactive) processes and background (batch) processes. These two
types of processes have different response-time requirements and so
may have different scheduling needs. In addition, foreground
processes may have priority (externally defined) over background
processes.
• The processes are permanently assigned to one queue, generally based
on some property of the process, such as memory size, process
priority, or process type. Each queue has its own scheduling
algorithm.
•For example, separate queues might be used for foreground and
background processes. The foreground queue might be scheduled by
an RR algorithm, while the background queue is scheduled by an
FCFS algorithm.
•In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling. For
example, the foreground queue may have absolute priority over the
background queue.
•Let’s look at an example of a multilevel queue scheduling algorithm
with five queues, listed below in order of priority:
•System processes
•Interactive processes
•Interactive editing processes
•Batch processes
•Student processes
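A minimal sketch of the fixed-priority selection between two such queues (the queue and process names are hypothetical; the per-queue RR/FCFS policies are not modeled here):

```python
from collections import deque

# Fixed-priority multilevel queue sketch: a hypothetical "foreground"
# queue has absolute priority over a "background" queue, as in the
# example above. Only queue selection is modeled, not per-queue policy.

def mlq_pick(foreground, background):
    """Return (queue name, process) to run next, or (None, None) if idle."""
    if foreground:                       # foreground has absolute priority
        return "foreground", foreground.popleft()
    if background:                       # background runs only when
        return "background", background.popleft()  # foreground is empty
    return None, None

fg = deque(["editor", "shell"])
bg = deque(["batch_report"])
print(mlq_pick(fg, bg))  # ('foreground', 'editor')
```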
• Advantages
• In MLQ the processes are permanently assigned to their
respective queues and do not move between queues. This results
in low scheduling overhead.
• In MLQ, different scheduling algorithms can be applied to
different queues.
•Disadvantages:
•The main disadvantage of Multilevel Queue Scheduling is
the problem of starvation for lower-level processes.

• What is starvation?
• Starvation:
• Under starvation, lower-priority processes either never execute
or have to wait for a long amount of time, because
higher-priority processes take a large amount of CPU time.
•What is Deadlock ?
•In a multiprogramming environment, several processes may
compete for a finite number of resources. A process requests
resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is
never again able to change state, because the resources it has
requested are held by other waiting processes. This situation is
called a deadlock.
• All the processes in a system require some resources, such as the central
processing unit (CPU), file storage, and input/output devices, in order to
execute.
• Once execution is finished, a process releases the resources it
was holding. However, when many processes run on a system,
they compete for the resources they require for execution.
• This can give rise to a deadlock situation.
• "A deadlock is a situation where a set of processes are
blocked because each process is holding a resource and
waiting for another resource acquired by some other process."

• Example: In a multiprogramming system, suppose there are two
processes, each wanting to print a very large file. Process A
requests permission to use the printer and is granted it.
• Process B then requests permission to use the tape drive and is
also granted it.
• Now A asks for the tape drive, but the request is denied until B
releases it.
• Instead of releasing the tape drive, B asks for the printer. At this
point both processes are blocked and will remain so forever.
This situation is called deadlock.
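The printer/tape-drive example can be modeled as a wait-for graph, where an edge X -> Y means "process X waits for a resource held by process Y"; a cycle in this graph indicates deadlock (a rough sketch, with the graph given directly as a dict):

```python
# Wait-for-graph sketch of the example above: edge X -> Y means
# "process X waits for a resource held by process Y".
# A cycle in the graph means the processes are deadlocked.

def has_deadlock(wait_for):
    """Detect a cycle in the wait-for graph with depth-first search."""
    def visit(node, path):
        if node in path:          # revisiting a node on the current path
            return True           # means we found a cycle
        return any(visit(nxt, path | {node}) for nxt in wait_for.get(node, []))
    return any(visit(p, set()) for p in wait_for)

# A waits for the tape drive (held by B); B waits for the printer (held by A).
print(has_deadlock({"A": ["B"], "B": ["A"]}))  # True
print(has_deadlock({"A": ["B"], "B": []}))     # False, B can finish
```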
• Necessary Conditions to Deadlock:
• Processes involved in a deadlock remain blocked permanently and this affects OS
performance indices like throughput and resource efficiency.
• In a multiprogramming environment, multiple processes may try to access resources. A
deadlock is a situation where a process waits endlessly for a resource that is being
used by another process, which is itself waiting for some other resource.
• A deadlock situation can arise if the following four conditions hold true
simultaneously in a system:

1) Mutual Exclusion
2) No Pre-emption
3) Hold and Wait
4) Circular Wait
1) Mutual Exclusion:
•At least one resource must be held in a non-sharable mode;
•that is, only one process at a time can use the resource.
• If another process requests that resource, the requesting
process must be delayed until the resource has been released.
•Each resource is either currently assigned to exactly one
process or is available.
2) Hold and Wait:
•A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by
other processes.
• A process currently holding resources granted earlier can request
new resources.
3) No Pre-emption:
• Resources cannot be Pre-empted ; i.e resource can only be released
voluntarily by the process holding it, after the process has completed
its task.
• Resources previously granted cannot be forcibly taken away from a
process.
• They must be explicitly released by the process holding them.
• 4) Circular Wait: A set of processes are waiting for each other in a
circular fashion.
• There exists a set (P0, P1, ..., Pn) of waiting processes such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by
P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for
a resource held by P0.
• Thus there must be a circular chain of two or more processes, each of
which is waiting for a resource held by the next member of the chain.
• Deadlock handling:
• A deadlock in an OS can be handled in the following four different
ways:
• Adopt methods for avoiding the deadlock.
• Prevent the deadlock from occurring (use a protocol).
• Ignore the deadlock.
• Allow the deadlock to occur, detect it, and recover from it.
•Methods of Handling Deadlocks in Operating System/
Methods of deadlock prevention:
• A deadlock occurs when all four conditions (Mutual
Exclusion, Hold and Wait, No Preemption, and Circular Wait) are
satisfied at some point of time.
• The deadlock can be prevented by not allowing all four
conditions to be satisfied simultaneously, i.e. by making sure that
at least one of the four conditions does not hold.
1) Eliminating the Mutual Exclusion Condition:
• The mutual exclusion condition must hold for non-sharable
resources.
• E.g. several processes cannot simultaneously share a printer.
• Sharable resources do not require mutually exclusive access,
and thus cannot be involved in a deadlock.
• Read-only files are a good example of a sharable resource: if
several processes attempt to open a read-only file at the same
time, they can be granted simultaneous access to the file.
• A process never needs to wait for a sharable resource. In general,
however, it is not possible to prevent deadlocks by denying the mutual
exclusion condition, because some resources are intrinsically non-sharable.
•Eliminating Hold and Wait :
•To ensure that the hold-and-wait condition never occurs in the system,
we must guarantee that, whenever a process requests a resource, it
does not hold any other resources.
•One protocol that we can use requires each process to request and be
allocated all its resources before it begins execution. We can
implement this provision by requiring that system calls requesting
resources for a process precede all other system calls.

• Eliminating No Preemption:
•The third necessary condition for deadlocks is that there be no
preemption of resources that have already been allocated.
•To ensure that this condition does not hold, we can use the following
protocol.
• If a process is holding some resources and requests another resource
that cannot be immediately allocated to it (that is, the process must
wait), then all resources the process is currently holding are
preempted. In other words, these resources are implicitly released. The
preempted resources are added to the list of resources for which the
process is waiting. The process will be restarted only when it can
regain its old resources, as well as the new ones that it is requesting.
• Eliminating the Circular Wait Condition:
• The circular wait condition of deadlock can be eliminated by
assigning a priority number to each available resource and requiring
that a process request resources only in increasing order of these numbers.
• Whenever a process requests a resource, the priority number of
the requested resource is compared with the priority numbers of the
resources it already holds.
• 1. If the priority number of the requested resource is greater than that of all the
currently held resources, the request is granted.
• 2. If the priority number of the requested resource is less than that of a currently
held resource, all resources with a greater priority number must be released
first, before acquiring the new resource.
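A minimal sketch of this resource-ordering protocol (the resource names and numbering below are hypothetical):

```python
# Resource-ordering sketch: each resource gets a number, and a process
# may request a resource only if its number is greater than that of every
# resource it already holds. Names and numbering are hypothetical.

RESOURCE_ORDER = {"printer": 1, "tape_drive": 2, "plotter": 3}

def may_request(held, wanted):
    """Grant the request only if it respects the global ordering."""
    return all(RESOURCE_ORDER[wanted] > RESOURCE_ORDER[r] for r in held)

print(may_request(["printer"], "tape_drive"))  # True: increasing order
print(may_request(["tape_drive"], "printer"))  # False: must release first
```

Because every process acquires resources in the same global order, no circular chain of waits can form.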
