
CPU Scheduling

CPU scheduling is the basis of multiprogrammed operating systems. By switching the
CPU among processes, the operating system can make the computer more productive.
In a single-processor system, only one process can run at a time. Others must wait
until the CPU is free and can be rescheduled. The objective of multiprogramming is to have
some process running at all times, to maximize CPU utilization. A process is executed until it
must wait, typically for the completion of some I/O request. In a simple computer system, the
CPU then just sits idle. All this waiting time is wasted; no useful work is accomplished. In
multiprogramming, several processes are kept in memory at one time. When one process has
to wait, the operating system takes the CPU away from that process and gives the CPU to
another process. This pattern continues. Every time one process has to wait, another process
can take over use of the CPU.

CPU–I/O Burst Cycle:


The success of CPU scheduling depends on an observed property of processes: process
execution consists of a cycle of CPU execution and I/O wait. Processes alternate between
these two states. Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually,
the final CPU burst ends with a system request to terminate execution.

CPU Scheduler:
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term
scheduler, or CPU scheduler.

Preemptive and Nonpreemptive Scheduling:
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as
the result of an I/O request or an invocation of wait() for the termination of a child
process).
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O).
4. When a process terminates.

When scheduling takes place only under circumstances 1 and 4, we say that the scheduling
scheme is nonpreemptive or cooperative. Otherwise, it is preemptive. Under
nonpreemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it releases the CPU either by
terminating or by switching to the waiting state.

Dispatcher:
The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves the following:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program.
The time it takes for the dispatcher to stop one process and start another running is known as
the dispatch latency.

Scheduling Criteria:
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which
characteristics are used for comparison can make a substantial difference in which algorithm
is judged to be best. The criteria include the following:

 CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).
 Throughput: If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes that are completed per time unit, called
throughput.
 Turnaround time: The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
 Waiting time: The CPU-scheduling algorithm does not affect the amount of time
during which a process executes or does I/O. It affects only the amount of time that a
process spends waiting in the ready queue. Waiting time is the sum of the periods
spent waiting in the ready queue.
 Response time: The time from the submission of a request until the first response
is produced. Response time is the time it takes to start responding, not the time it
takes to output the response.

It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time, and response time.
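The relations among these criteria can be sketched in a few lines. The process names, arrival times, and burst lengths below are made-up illustration values, not from the text; each process is assumed to run a single burst to completion without preemption.

```python
# Sketch: turnaround, waiting, and response time for processes
# served back-to-back (arrival/burst values are assumptions).
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 4)]  # (name, arrival, burst)

results = {}
clock = 0
for name, arrival, burst in processes:
    start = max(clock, arrival)
    completion = start + burst
    turnaround = completion - arrival   # submission -> completion
    waiting = turnaround - burst        # time spent in the ready queue
    response = start - arrival          # submission -> first time on the CPU
    results[name] = (turnaround, waiting, response)
    clock = completion
```

Note that waiting time and response time coincide here only because each process runs once without preemption; under a preemptive policy they generally differ.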

Scheduling Algorithms:
1. First-Come, First-Served Scheduling:
The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
scheduling algorithm. With this scheme, the process that requests the CPU first is
allocated the CPU first. The implementation of the FCFS policy is easily managed
with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the
tail of the queue. When the CPU is free, it is allocated to the process at the head of the
queue. The running process is then removed from the queue. The code for FCFS
scheduling is simple to write and understand. On the negative side, the average
waiting time under the FCFS policy is often quite long.
Note: Problems have been solved in class.
There is a convoy effect as all the other processes wait for the one big process to get
off the CPU. This effect results in lower CPU and device utilization than might be
possible if the shorter processes were allowed to go first.
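The convoy effect is easy to see numerically. The sketch below compares the average FCFS waiting time when one long burst (24 time units) runs before two short ones (3 each) against the reverse order; the burst lengths are illustrative assumptions.

```python
# Sketch: FCFS average waiting time depends on arrival order
# (the 24/3/3 burst lengths are illustrative assumptions).
def fcfs_avg_wait(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)  # each process waits for all earlier ones
        clock += b
    return sum(waits) / len(waits)

long_first = fcfs_avg_wait([24, 3, 3])   # convoy effect: average 17
short_first = fcfs_avg_wait([3, 3, 24])  # short jobs first: average 3
```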

2. Shortest-Job-First Scheduling:
This algorithm associates with each process the length of the process’s next CPU
burst. When the CPU is available, it is assigned to the process that has the smallest
next CPU burst. If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
Note: Problems have been solved in class.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum
average waiting time for a given set of processes. Moving a short process before a
long one decreases the waiting time of the short process more than it increases the
waiting time of the long process.
The real difficulty with the SJF algorithm is knowing the length of the next CPU
request. Although the SJF algorithm is optimal, it cannot be implemented at the level
of short-term CPU scheduling, because there is no way to know the length of the next
CPU burst.
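Non-preemptive SJF amounts to sorting the waiting processes by next CPU burst. In the sketch below the burst lengths are supplied directly (made-up values); a real scheduler could only estimate them, which is exactly why pure SJF is unimplementable for short-term scheduling.

```python
# Sketch: non-preemptive SJF as a sort on the next CPU burst
# (burst lengths are given directly, an assumption).
def sjf_waits(bursts):
    waits, clock = {}, 0
    # sorted() is stable, so equal bursts keep FCFS (insertion) order
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock
        clock += burst
    return waits

waits = sjf_waits({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
```

Here P4 runs first and P2 last, giving an average waiting time of (0 + 3 + 9 + 16) / 4 = 7.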

3. Priority Scheduling:
The SJF algorithm is a special case of the general priority-scheduling algorithm. A
priority is associated with each process, and the CPU is allocated to the process with
the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF
algorithm is simply a priority algorithm where the priority (p) is the inverse of the
(predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice
versa. Priorities are generally indicated by some fixed range of numbers, such as 0 to
7 or 0 to 4,095.
Note: Problems have been solved in class.
Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of the currently
running process. A nonpreemptive priority scheduling algorithm will simply put the
new process at the head of the ready queue. A major problem with priority scheduling
algorithms is indefinite blocking, or starvation. A process that is ready to run but
waiting for the CPU can be considered blocked. A priority scheduling algorithm can
leave some low-priority processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging
involves gradually increasing the priority of processes that wait in the system for a long time.
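Priority selection with aging can be sketched as follows. Here a lower number means higher priority, and the aging step of 1 per scheduling decision is an arbitrary assumption; the function names and data layout are illustrative, not from any real scheduler.

```python
# Sketch: priority selection with simple aging
# (lower number = higher priority; aging step is an assumption).
def pick_and_age(ready, age_step=1):
    """ready: list of [name, priority] pairs, mutated in place."""
    chosen = min(ready, key=lambda p: p[1])  # highest priority wins
    for p in ready:
        if p is not chosen:
            p[1] = max(0, p[1] - age_step)   # waiting raises priority
    ready.remove(chosen)
    return chosen[0]
```

Each call dispatches the highest-priority process and raises the priority of everything still waiting, so even a low-priority process eventually reaches the front.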

4. Round-Robin Scheduling:
The round-robin (RR) scheduling algorithm is designed especially for time sharing
systems. It is similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes. A small unit of time, called a time quantum or
time slice, is defined. The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum. To implement RR scheduling, we again
treat the ready queue as a FIFO queue of processes. New processes are added to the
tail of the ready queue. The CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. One
of two things will then happen. The process may have a CPU burst of less than 1 time
quantum. In this case, the process itself will release the CPU voluntarily. The
scheduler will then proceed to the next process in the ready queue. If the CPU burst of
the currently running process is longer than 1 time quantum, the timer will go off and
will cause an interrupt to the operating system. A context switch will be executed, and
the process will be put at the tail of the ready queue. The CPU scheduler will then
select the next process in the ready queue.
The average waiting time under the RR policy is often long. Turnaround time also
depends on the size of the time quantum. In general, the average turnaround time can
be improved if most processes finish their next CPU burst in a single time quantum.
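The RR mechanism described above can be sketched as a short simulation. The burst lengths and the 4-unit quantum below are illustrative assumptions.

```python
# Sketch: round-robin simulation with a FIFO ready queue
# (bursts and the 4-unit quantum are illustrative assumptions).
from collections import deque

def round_robin(bursts, quantum):
    remaining = dict(bursts)
    queue = deque(bursts)         # FIFO of process names
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)    # quantum expired: back to the tail
        else:
            finish[name] = clock  # burst completed within its slice
    return finish

finish = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
```

With these values the short processes P2 and P3 complete at times 7 and 10, while P1 cycles through the queue until time 30.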

5. Shortest Remaining Time First (SRTF):
SRTF is the preemptive version of SJF scheduling. Whenever a new process arrives in
the ready queue with a CPU burst shorter than the remaining time of the currently
executing process, the current process is preempted in favor of the new one.
Note: Numerical is solved in class.
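A unit-by-unit simulation makes the preemption visible. The arrival and burst values below are illustrative assumptions, and ties on remaining time are broken by arrival order.

```python
# Sketch: SRTF (preemptive SJF) simulated one time unit at a time
# (arrival/burst values are illustrative assumptions).
def srtf_finish(procs):
    """procs: {name: (arrival, burst)} -> {name: completion time}."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:  # nothing has arrived yet: CPU sits idle
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining wins
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            finish[n] = clock
            del remaining[n]
    return finish

finish = srtf_finish({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
```

P1 starts at time 0 but is preempted at time 1 when P2 arrives with the shorter burst; P2, P4, P1, and P3 then complete at times 5, 10, 17, and 26 respectively.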

6. Multilevel Queue Scheduling:


Another class of scheduling algorithms has been created for situations in which
processes are easily classified into different groups. A multilevel queue scheduling
algorithm partitions the ready queue into several separate queues. The processes are
permanently assigned to one queue, generally based on some property of the process,
such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm.
In addition, there must be scheduling among the queues, which is commonly
implemented as fixed-priority preemptive scheduling.
Let’s look at an example of a multilevel queue scheduling algorithm with five queues,
listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Fig: Multilevel queue scheduling

Each queue has absolute priority over lower-priority queues. No process in the batch queue,
for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the
ready queue while a batch process was running, the batch process would be preempted.
Another possibility is to time-slice among the queues. Here, each queue gets a certain portion
of the CPU time, which it can then schedule among its various processes. For instance, in a
foreground–background queue arrangement, the foreground queue can be given 80 percent of the
CPU time for RR scheduling among its processes, while the background queue receives 20
percent of the CPU to give to its processes on an FCFS basis.
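Fixed-priority selection among the queues can be sketched as follows, using the five queue names from the example above; the data layout and function name are illustrative assumptions.

```python
# Sketch: fixed-priority preemptive selection among multilevel
# queues -- always serve the highest-priority non-empty queue.
PRIORITY_ORDER = ["system", "interactive", "editing", "batch", "student"]
queues = {level: [] for level in PRIORITY_ORDER}

def next_process():
    for level in PRIORITY_ORDER:
        if queues[level]:
            return queues[level].pop(0)  # FIFO within each queue
    return None                          # every queue empty: CPU idle
```

A batch process is dispatched only after the three higher queues are empty, matching the absolute-priority rule described above.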

7. Multilevel Feedback Queue Scheduling:


Normally, when the multilevel queue scheduling algorithm is used, processes are
permanently assigned to a queue when they enter the system. If there are separate queues for
foreground and background processes, for example, processes do not move from one queue
to the other, since processes do not change their foreground or background nature.

The multilevel feedback queue scheduling algorithm allows a process to move between
queues. The idea is to separate processes according to the characteristics of their CPU bursts.
If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme
leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a
process that waits too long in a lower-priority queue may be moved to a higher-priority
queue. This form of aging prevents starvation.

Fig: Multilevel feedback queue scheduling

For example, consider a multilevel feedback queue scheduler with three queues, numbered
from 0 to 2. The scheduler first executes all processes in queue 0.

Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in
queue 2 will be executed only if queues 0 and 1 are empty. A process that arrives for queue 1
will preempt a process in queue 2. A process in queue 1 will in turn be preempted by a
process arriving for queue 0. A process entering the ready queue is put in queue 0. A process
in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it
is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is
given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into
queue 2. Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1
are empty. This scheduling algorithm gives highest priority to any process with a CPU burst
of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and
go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are
also served quickly, although with lower priority than shorter processes. Long processes
automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over
from queues 0 and 1.
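The demotion rule of the three-queue example above (quantum 8 ms in queue 0, 16 ms in queue 1, FCFS in queue 2) determines where a CPU burst finally completes. The function below is an illustrative sketch of that classification, not scheduler code.

```python
# Sketch: which queue a CPU burst finishes in, under the
# 8 ms / 16 ms / FCFS three-queue feedback example.
def finishing_queue(burst_ms):
    if burst_ms <= 8:
        return 0        # done within the first quantum
    if burst_ms < 24:   # 8 ms in queue 0 plus part of queue 1's 16 ms
        return 1
    return 2            # long jobs sink to the FCFS queue
```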
In general, a multilevel feedback queue scheduler is defined by the following parameters:

 The number of queues
 The scheduling algorithm for each queue
 The method used to determine when to upgrade a process to a higher-priority queue
 The method used to determine when to demote a process to a lower-priority queue
 The method used to determine which queue a process will enter when that process
needs service
