06 - CPU Scheduling

The document discusses CPU scheduling algorithms including first-come, first-served (FCFS) and shortest-job-first (SJF). FCFS is the simplest algorithm but does not minimize waiting time. SJF aims to select the process with the shortest remaining CPU burst time to improve average waiting time.


Operating Systems 1

CS 241
Spring 2021

By
Marwa M. A. Elfattah

Main Reference
Operating System Concepts, Abraham Silberschatz,
10th Edition
CPU Scheduling
Basic Concepts
 Scheduling is a fundamental operating-system
function.
• Almost all computer resources are scheduled
before use.
 With multiprogramming:
• Several processes are kept in memory at a time.
• When one process has to wait, the operating
system takes the CPU away from that process
and gives the CPU to another process.
Maximize CPU utilization
Process Execution Cycle
 Process execution consists of a
cycle of CPU execution (CPU
burst) followed by I/O wait (I/O
burst)
• An I/O-bound program has
many short CPU bursts.
• A CPU-bound program
might have a few long CPU
bursts.
 The final CPU burst ends with a
system request to terminate
execution
CPU Scheduler
 The CPU scheduler selects from among the
processes in ready queue, and allocates a CPU
core to one of them
• The ready queue is not necessarily a first-in, first-
out (FIFO) queue.
• The records in the queues are generally process
control blocks (PCBs) of the processes
CPU Scheduler
 CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
2. Switches from running to ready state (for
example, when an interrupt occurs)
3. Switches from waiting to ready
4. Terminates
 For situations 1 and 4, a new process (if one exists
in the ready queue) must be selected for execution.
• The scheduling scheme is nonpreemptive.
CPU Scheduler
Nonpreemptive Scheduling
The process keeps the CPU until it releases it
either by terminating or by switching to the
waiting state.

Virtually all modern operating systems,
including Windows, macOS, Linux, and UNIX,
use preemptive scheduling algorithms.
CPU Scheduler
 CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
2. Switches from running to ready state (for
example, when an interrupt occurs)
3. Switches from waiting to ready
4. Terminates
 For situations 2 and 3, however, the scheduling
scheme is preemptive.
CPU Scheduler
Preemptive Scheduling
Can result in race conditions when data are
shared among several processes.
Consider the case of two processes that share data:
while one process is updating the data, it is preempted
so that the second process can run. The second
process then tries to read the data, which are in an
inconsistent state.
Dispatcher
 Dispatcher module gives control of
the CPU core to the process selected
by the CPU scheduler; this involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in
the user program to be resumed
 Dispatch latency – time it takes for
the dispatcher to stop one process
and start another running
 The dispatcher needs to be as fast
as possible
Scheduling Criteria
 Criteria by which an algorithm is judged to be best:
• CPU utilization – keep the CPU as busy as
possible – Need to be Maximized
• Throughput – # of processes that complete their
execution per time unit – Need to be Maximized
Dependent on the process length
• Turnaround time – amount of time to execute a
particular process – Need to be Minimized
 The sum of the periods spent waiting in the
ready queue, executing on the CPU, and
doing I/O.
Scheduling Criteria
 Criteria by which an algorithm is judged to be best:
• Waiting time – amount of time a process has
been waiting in the ready queue – Need to be
Minimized
• Response time – amount of time it takes from
when a request was submitted until the first
response is produced – Need to be Minimized
A process can produce some output fairly
early and continue computing new results
while previous results are being output to the
user.
Thread Scheduling
 On most modern operating systems it is kernel-level
threads—not processes—that are being scheduled
• Thread library schedules user-level threads to run
on lightweight process (LWP).
• Each Light Weight Process is attached to a kernel
thread.
• It is kernel threads that the operating system
schedules to run on a physical core.
CPU Scheduling Algorithm
 The problem  which of the processes in the
ready queue is to be allocated the CPU’s core.
 Simplifying Assumptions:
• Single processing core  one process at a time
• One thread per process
• Processes are independent
First-Come, First-Served (FCFS) Scheduling
 The simplest CPU-scheduling algorithm.
• Implemented as a FIFO queue.
• It is nonpreemptive.
 EX: Process Burst Time
P1 24
P2 3
P3 3
 Suppose the processes arrive in the order: P1, P2, P3
(Gantt chart: P1 runs 0–24, P2 runs 24–27, P3 runs 27–30)
 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
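The waiting times above can be reproduced with a minimal sketch (not from the slides; the function name is illustrative, and all processes are assumed to arrive at time 0, as in the example):

```python
# Sketch: FCFS waiting times when all processes arrive at t = 0
# and are served in the given order.
def fcfs_waiting_times(bursts):
    """Return each process's waiting time under FCFS."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # each process waits for everything queued before it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3
print(waits)                              # [0, 24, 27]
print(sum(waits) / len(waits))            # average: 17.0
```

Feeding the bursts in the order P2, P3, P1 reproduces the second example's average of 3.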
First-Come, First-Served (FCFS) Scheduling
 The simplest CPU-scheduling algorithm.
• Implemented as a FIFO queue.
• It is nonpreemptive.
 EX: Process Burst Time
P1 24
P2 3
P3 3
 Suppose the processes arrive in the order: P2, P3, P1
(Gantt chart: P2 runs 0–3, P3 runs 3–6, P1 runs 6–30)
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
First-Come, First-Served (FCFS) Scheduling
Thus, the average waiting time under an FCFS
policy is generally not minimal, and it may vary
substantially if the processes' CPU burst times
vary greatly.
First-Come, First-Served (FCFS) Scheduling
 Assume we have one CPU-bound process and
many I/O-bound processes; the following
scenario may result:
• While the CPU-bound process holds the CPU,
all the other processes finish their I/O and
move into the ready queue;
the I/O devices sit idle.
• The CPU-bound process finishes its CPU burst
and moves to an I/O device.
All the I/O-bound processes execute quickly
and move back to the I/O queues;
the CPU sits idle.
First-Come, First-Served (FCFS) Scheduling
There is a convoy effect as all the other
processes wait for the one big process to get
off the CPU.
This effect results in lower CPU and device
utilization.
Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its next
CPU burst
• Use these lengths to schedule the process with
the shortest time
• If the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie.
 A more appropriate term for this scheduling method
would be the shortest-next-CPU-burst algorithm.
Shortest-Job-First (SJF) Scheduling
 EX: Process   Arrival Time   Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
 SJF scheduling Gantt chart:
(P4 runs 0–3, P1 runs 3–9, P3 runs 9–16, P2 runs 16–24)
 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


 If we were using the FCFS scheduling scheme, the
average waiting time would be 10.25 milliseconds.
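As a sketch (the function name is illustrative), nonpreemptive SJF for this example can be simulated. Note the assumption, which follows the slide's Gantt chart: all four processes are treated as available at time 0, so only burst length decides the order.

```python
# Sketch: nonpreemptive SJF with all processes available at t = 0.
def sjf_waiting_times(bursts):
    """Run the shortest burst first; return waits in input order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])   # P1..P4
print(waits)                               # [3, 16, 9, 0]
print(sum(waits) / len(waits))             # average: 7.0
```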
Shortest-Job-First (SJF) Scheduling
Optimal – gives minimum average waiting time.
But,
the difficulty is knowing the length of the next
CPU request.
Shortest-Job-First (SJF) Scheduling
 Can be preemptive when processes have different
arrival times (Shortest-Remaining-Time-First).
 EX: Process   Arrival Time   Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Shortest-Job-First (SJF) Scheduling
Process   Arrival Time   Burst Time   Remaining time at t = 1, 2, 3, 5
P1        0              8            7  7  7  7
P2        1              4               3  2  0
P3        2              9                  9  9
P4        3              5                     5
(Gantt chart: P1 runs 0–1, P2 runs 1–5, P4 runs 5–10, P1 runs 10–17, P3 runs 17–26)
Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5
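The shortest-remaining-time-first result can be checked with a small tick-by-tick simulation (a sketch, not from the slides; it advances one time unit per step and re-picks the most urgent ready process each tick):

```python
# Sketch: Shortest-Remaining-Time-First. At every time unit the scheduler
# runs the arrived process with the least remaining burst time.
def srtf_waiting_times(arrivals, bursts):
    remaining = list(bursts)
    finish = [0] * len(bursts)
    time = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(len(bursts))
                 if arrivals[i] <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            finish[i] = time
    # waiting time = turnaround time - burst time
    return [finish[i] - arrivals[i] - bursts[i] for i in range(len(bursts))]

waits = srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5])
print(waits)                        # [9, 0, 15, 2]
print(sum(waits) / len(waits))      # average: 6.5
```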
Round Robin (RR)
 Similar to FCFS scheduling, but preemption is added
 Each process gets a small unit of CPU time (time
quantum q). After this time has elapsed, the
process is preempted and added to the end of the
ready queue.
• Timer interrupts every time quantum to schedule
next process.
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3

(Gantt chart: P1 0–4, P2 4–7, P3 7–10, then P1 runs 10–24 in 4 ms slices)
 P1 waits for 6 milliseconds (10 − 4)
 P2 waits for 4 milliseconds
 P3 waits for 7 milliseconds
The average waiting time is 17/3 = 5.66 milliseconds.
Example of RR with Time Quantum = 4
RR gives a higher average waiting time than SJF,
but better response time.
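A minimal round-robin simulation (a sketch, not from the slides; all processes are assumed to arrive at t = 0, in order) reproduces the waiting times of the quantum-4 example:

```python
from collections import deque

# Sketch: round robin with a fixed time quantum, tracking how long each
# process spends sitting in the ready queue.
def rr_waiting_times(bursts, quantum):
    remaining = list(bursts)
    queue = deque(range(len(bursts)))       # all arrive at t = 0, in order
    last_ready = [0] * len(bursts)          # when each process last became ready
    waits = [0] * len(bursts)
    time = 0
    while queue:
        i = queue.popleft()
        waits[i] += time - last_ready[i]    # time spent waiting since last run
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i] > 0:                # unfinished: back to the tail
            last_ready[i] = time
            queue.append(i)
    return waits

waits = rr_waiting_times([24, 3, 3], 4)
print(waits)                                 # [6, 4, 7]
print(sum(waits) / len(waits))               # average ≈ 5.67
```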
Round Robin (RR)
 Performance depends heavily on q size
• q large  FIFO
• q small  context switch overhead is too high
Round Robin (RR)
 Performance depends heavily on q size
• q large  FIFO
• q small  context switch overhead is too high

 q must be large with respect to the context-switch time:
• If the context-switch time is approximately 10
percent of the time quantum, then about 10
percent of the CPU time will be spent in context
switching.
Priority Scheduling
 A priority number (integer) is associated with each
process
 The CPU is allocated to the process with the highest
priority (assume smallest integer  highest priority)
• Preemptive or Nonpreemptive
• SJF is priority scheduling where priority is the
inverse of predicted next CPU burst time
Example of Nonpreemptive Priority Scheduling
Process   Burst Time   Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

 Priority scheduling Gantt Chart:
(P2 runs 0–1, P5 runs 1–6, P1 runs 6–16, P3 runs 16–18, P4 runs 18–19)
 Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
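A sketch (not from the slides) of nonpreemptive priority scheduling for this table, assuming all processes are available at t = 0 and a smaller number means higher priority:

```python
# Sketch: nonpreemptive priority scheduling, smallest integer = highest priority.
def priority_waiting_times(bursts, priorities):
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:                 # run processes from highest to lowest priority
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
print(waits)                          # [6, 0, 16, 18, 1]
print(sum(waits) / len(waits))        # average: 8.2
```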


Example of Preemptive Priority Scheduling
Process   Arrival Time   Burst Time   Priority
P1 0 10 3
P2 1 1 1
P3 2 2 4
P4 3 3 2

Gantt chart: | P1 | P2 | P1 | P4 | P1 | P3 |
             0    1    2    3    6    14   16

 Average waiting time =( (4) + (0) + (14-2) + 0 ) /4 = 4


Priority Scheduling
 Priorities can be defined either internally or externally.
• Internal  use some measurable quantities, like
memory requirements, the ratio of average I/O
burst to average CPU burst, …
• External  set by criteria outside the operating
system, such as the importance of the process, the
type and amount of funds …
 Problem: Starvation – low priority processes may
never execute
• Solution: Aging – as time progresses increase the
priority of the process
Priority Scheduling
 Equal-priority processes:
• Scheduled in FCFS order.
• Or using round-robin:
The system executes the highest-priority
process
Then runs processes with the same priority
using round-robin scheduling.
Priority Scheduling w/ Round-Robin
Process   Burst Time   Priority
P1 4 3
P2 5 2
P3 8 2
P4 7 1
P5 3 3
 Gantt Chart with time quantum = 2:
(P4 runs 0–7; then P2 and P3 alternate in RR: P2 7–9, P3 9–11, P2 11–13,
P3 13–15, P2 15–16, P3 16–20; then P1 and P5: P1 20–22, P5 22–24,
P1 24–26, P5 26–27)
Priority Scheduling
 To determine the highest-priority process, an O(n)
search may be necessary.
 It is easier to have
separate queues for
each distinct priority 
multilevel queue
scheduling.
 Priority may be
based upon process
type. EX: background
processes, system
processes,
interactive,…
Priority Scheduling
Multilevel queue scheduling
Each queue might have its own scheduling
algorithm. Also, it is possible to time-slice CPU
time among the queues.
In the foreground–background queue example:
• The foreground queue can be given 80
percent of the CPU time for RR scheduling
among its processes,
• While the background queue receives 20
percent of the CPU to give to its processes on
an FCFS basis.
Multilevel Feedback Queue
 A process can move between the various queues.
 Multilevel-feedback-queue scheduler defined:
• Scheduling algorithms for each queue
• Method used to determine when to upgrade or
demote a process
 Example (queues Q0, Q1, Q2):
• A process in Q0
gains CPU for 8 ms.
If it does not
finish, it is moved
to queue Q1.
Multilevel Feedback Queue
 A process can move between the various queues.
 Multilevel-feedback-queue scheduler defined:
• Scheduling algorithms for each queue
• Method used to determine when to upgrade or
demote a process
 Example (queues Q0, Q1, Q2):
• At Q1 a process is
served in RR for
16 ms. If it still
does not
complete, it is
moved to queue Q2.
Multilevel Feedback Queue
It is the most general CPU-scheduling algorithm.
But
it is also the most complex algorithm, since it
requires some means to select between
different choices.
Multiple-Processor Scheduling
 CPU scheduling more complex when multiple CPUs
are available
 One approach is to have all scheduling decisions,
I/O processing, and other system activities handled
by a single processor — the master server. The
other processors execute only user code -
asymmetric multiprocessing.
• Simple because only one core accesses the
system data structures, reducing the need for
data sharing.
• The downfall of this approach is the master server
becomes a potential bottleneck where overall
system performance may be reduced.
Multiple-Processor Scheduling
 Symmetric multiprocessing (SMP) is where each
processor is self scheduling.
A. All threads may be in a common ready queue:
• must ensure that two separate processors do
not choose to schedule the same thread
• the shared queue would likely be a performance
bottleneck.
Multiple-Processor Scheduling
 Symmetric multiprocessing (SMP) is where each
processor is self scheduling.
B. Each processor may have its own private queue
of threads
balancing algorithms can be used to equalize
workloads among all processors.
Two general approaches:
• Push migration  a periodic task
checks the load on each processor
and, if imbalance is found, pushes tasks
from overloaded CPUs to other CPUs
• Pull migration  an idle processor
pulls a waiting task from a busy CPU
Multicore Processor
 Recent trend is to place multiple processor cores on
same physical chip - multicore processor.
• Faster and consumes less power
• Each core appears to the operating system to be
a separate logical CPU.
Multithreaded Multicore System
 Each core may have more than one hardware thread.
• Each hardware thread maintains its architectural
state, such as instruction pointer and register set,
and thus appears as a logical CPU.
On a quad-core system with 2 hardware threads
per core, the operating system sees 8 logical
processors.
Multithreaded Multicore System
 Each core may have more than one hardware thread.
• Each hardware thread maintains its architectural
state, such as instruction pointer and register set,
and thus appears as a logical CPU.
 If one thread has a memory stall, switch to another
thread.
Memory stall is the time spent by the CPU
waiting for data to become available from
memory. It can be 50% of CPU time.
Multithreaded Multicore System
 Each core may have more than one hardware thread.
• Each hardware thread maintains its architectural
state, such as instruction pointer and register set,
and thus appears as a logical CPU.
 If one thread has a memory stall, switch to another
thread
Multithreaded Multicore System
 A core can only execute one hardware thread at a
time.
 Two levels of scheduling:
1.The OS deciding
which software
thread to run on a
logical CPU
2.Each core
decides which
hardware thread
to run on the
physical core.
Processor Affinity
 When a thread has been running on one processor,
that processor's cache stores the recent
memory accesses of that thread.
• The thread is said to have affinity for that
processor (“processor affinity”).
 Load balancing may affect processor affinity as a
thread may be moved from one CPU to another
 Soft affinity – the operating system attempts to
keep a thread running on the same processor, but
no guarantees.
 Hard affinity – allows a process to specify a set of
processors it may run on.
NUMA and CPU Scheduling
 If the operating system is NUMA-aware, it will
allocate memory close to the CPU on which the
thread is running.
Heterogeneous Processors
 Some systems are now designed with cores that vary
in their clock speed and power management.
 Better management is achieved by assigning tasks to
certain cores based upon the tasks' demands.
• Scheduler can assign tasks that do not require
high performance, but may need to run for longer
periods, (such as background tasks) to little
cores.
• Additionally, if the mobile device is in a power-
saving mode, energy-intensive big cores can be
disabled and the system can rely solely on
energy-efficient little cores.
Real-Time CPU Scheduling
 A system in which each job has a deadline by which it
must finish (strictly). If a result is delayed, a huge
loss may occur.
• Can present obvious challenges
 Soft real-time systems:
• Operation is degraded if results are not
produced according to the specified timing
requirements.
• Critical real-time tasks have the highest priority,
but no guarantee as to when tasks will be
scheduled
• Ex: Computer game, Multimedia …
Real-Time CPU Scheduling
 A system in which each job has a deadline by which it
must finish (strictly). If a result is delayed, a huge
loss may occur.
• Can present obvious challenges
 Hard real-time systems:
• Operation is incorrect if a result is not
produced according to its time constraint.
• Task must be serviced by its deadline
• Ex: Air Traffic Control , Medical System…
Real-Time CPU Scheduling
 Event latency – the amount of time that elapses
from when an event occurs to when it is serviced.
• Interrupt latency • Dispatch latency
Real-Time CPU Scheduling
 Event latency – the amount of time that elapses
from when an event occurs to when it is serviced.
• Interrupt latency: the time from the arrival of an
interrupt to the start of the routine that
services the interrupt.
• Dispatch latency: the time for the scheduler to take
the current process off the CPU and
switch to another.
Real-Time CPU Scheduling
 Processes have new characteristics:
• Periodic  require the CPU at constant intervals
Each has processing time t, deadline d, period p
0 ≤ t ≤ d ≤ p
The rate of a periodic task is 1/p
Real-Time CPU Scheduling
 Rate monotonic scheduling
• A priority is assigned based on the inverse of its
period - assign a higher priority to tasks that
require the CPU more often.
Shorter periods = higher priority;
Longer periods = lower priority
Rate Monotonic Scheduling
 Example:
We have two processes, P1 and P2. The periods
for P1 and P2 are 50 and 100, respectively. The
processing times are t1 = 20 for P1 and t2 = 35 for
P2.
• First check it is possible to schedule these tasks
so that each meets its deadlines.
If we measure the CPU utilization of a process
Pi as the ratio of its burst to its period—ti ∕ pi
– For P1 is 20 ∕ 50 = 0.40
– For P2 is 35 ∕ 100 = 0.35
– Total CPU utilization of 75 percent.
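The utilization check is a one-line computation (a sketch of the arithmetic above):

```python
# Sketch: CPU utilization of each task is burst / period (t_i / p_i).
tasks = {"P1": (20, 50), "P2": (35, 100)}           # (processing time t, period p)
utilization = sum(t / p for t, p in tasks.values())
print(utilization)                                   # 0.75 -> 75 percent
```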
Rate Monotonic Scheduling
 Example:
We have two processes, P1 and P2. The periods
for P1 and P2 are 50 and 100, respectively. The
processing times are t1 = 20 for P1 and t2 = 35 for
P2.
• First check it is possible to schedule these tasks
so that each meets its deadlines.
If we measure the CPU utilization of a process
Pi as the ratio of its burst to its period—ti ∕ pi
– Therefore, it seems we can schedule these
tasks in such a way that both meet their
deadlines and still leave the CPU with
available cycles.
Rate Monotonic Scheduling
 Example:
We have two processes, P1 and P2. The periods
for P1 and P2 are 50 and 100, respectively. The
processing times are t1 = 20 for P1 and t2 = 35 for
P2.
• P1  p= 50 t=20 Higher priority
• P2  p= 100 t=35 Lower priority
Rate Monotonic Scheduling
 Example:
We have two processes, P1 and P2. The periods for
P1 and P2 are 50 and 80, respectively. The
processing times are 25 for P1 and 35 for P2.
• First check it is possible to schedule these tasks
so that each meets its deadlines.
If we measure the CPU utilization
– For P1 is 25 ∕ 50 = 0.50
– For P2 is 35 ∕ 80 = 0.44
– Total CPU utilization of 94 percent.
– It seems we can schedule these tasks, yet under
rate-monotonic priorities P2 in fact misses its
deadline at time 80.
Rate Monotonic Scheduling
 Example:
We have two processes, P1 and P2. The periods for
P1 and P2 are 50 and 80, respectively. The
processing times are 25 for P1 and 35 for P2.
• P1  p= 50 t=25 Higher priority
• P2  p= 80 t=35 Lower priority

 Rate-monotonic scheduling has a limitation:
• CPU utilization is bounded, and it is not always
possible to maximize CPU resources fully.
Earliest Deadline First Scheduling (EDF)
 Priorities are assigned according to deadlines:
• The earlier the deadline, the higher the priority
• The later the deadline, the lower the priority

 EDF scheduling does not require that
• processes be periodic,
• nor must a process require a constant amount of
CPU time per burst.
 The only requirement is that a process announce its
deadline to the scheduler when it becomes ready.
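As a sketch with a hypothetical job set (not from the slides), an EDF scheduler needs only each job's announced deadline: it always runs the ready job with the earliest deadline, preempting when a more urgent job arrives.

```python
import heapq

# Sketch: EDF over independent jobs given as (arrival, burst, deadline).
def edf_schedule(jobs):
    """Return the finish time of each job under preemptive EDF."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])  # by arrival
    ready = []                                   # min-heap keyed on deadline
    remaining = [b for _, b, _ in jobs]
    finish = [0] * len(jobs)
    time, nxt = 0, 0
    while nxt < len(jobs) or ready:
        # admit every job that has arrived by the current time
        while nxt < len(jobs) and jobs[order[nxt]][0] <= time:
            i = order[nxt]
            heapq.heappush(ready, (jobs[i][2], i))   # earliest deadline on top
            nxt += 1
        if not ready:                    # CPU idle until the next arrival
            time = jobs[order[nxt]][0]
            continue
        _, i = ready[0]                  # run the most urgent job for one tick
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            heapq.heappop(ready)
            finish[i] = time
    return finish

finish = edf_schedule([(0, 3, 7), (1, 2, 4), (3, 1, 5)])
print(finish)    # [6, 3, 4] – every job meets its deadline
```

When the second job arrives at t = 1 with deadline 4, it preempts the first (deadline 7), exactly the earliest-deadline-first rule.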
Earliest Deadline First Scheduling (EDF)
 Priorities are assigned according to deadlines:
• The earlier the deadline, the higher the priority
• The later the deadline, the lower the priority

 EDF scheduling is theoretically optimal


• The CPU utilization can be 100 percent.
 In practice, however, it is impossible to achieve this
level of CPU utilization due to the cost of context
switching between processes and interrupt handling.
Thank You
