
CPU SCHEDULING
CHAPTER 6

By switching the CPU among processes, the operating system can make the computer more productive.

CHAPTER OBJECTIVES

• To introduce CPU scheduling
• CPU-scheduling algorithms
• Evaluation criteria for selecting a CPU-scheduling algorithm
• Scheduling algorithms of several operating systems

BASIC CONCEPTS

• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.

CPU-I/O BURST CYCLE

• Process execution cycles between two states:
◦ CPU execution
◦ I/O wait
• Process execution begins with a CPU burst.
• It is followed by an I/O burst (and the cycle repeats).
• The final CPU burst ends with a system request to terminate execution.
• An I/O-bound program typically has many short CPU bursts.
• A CPU-bound program might have a few long CPU bursts.

CPU SCHEDULER

• An OS that supports threads schedules kernel-level threads – not processes.
• Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed.
• Short-Term Scheduler / CPU Scheduler – carries out this selection.
• It selects from among the processes in memory that are ready to execute.
• The CPU is then allocated to the selected process.
• Ready queue – can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
• The records in the queues are generally the process control blocks (PCBs) of the processes.

PRE-EMPTIVE SCHEDULING

• CPU-scheduling decisions take place under four circumstances:
1. A process switches from the running state to the waiting state
2. A process switches from the running state to the ready state
3. A process switches from the waiting state to the ready state
4. A process terminates
• Under 1 & 4 a new process must be selected; under 2 & 3 there is a choice, and scheduling there is preemptive.
• Non-preemptive/Cooperative Scheduling => scheduling only under 1 & 4; once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• Preemptive Scheduling => can result in race conditions when data are shared among several processes. It also affects the design of the operating-system kernel.
• Interrupts can occur at any time – the sections of code affected by interrupts must therefore be guarded from simultaneous use.
• Interrupts are disabled at entry and re-enabled at exit, so that these sections of code are not accessed concurrently by several processes.

DISPATCHER

• Def: The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
• Functions:
◦ Switching context
◦ Switching to user mode
◦ Jumping to the proper location in the user program to restart that program
• Requirements:
◦ Fast, as it is invoked during every process switch.
• Dispatch Latency – the time it takes for the dispatcher to stop one process and start another running.

SCHEDULING CRITERIA

1. CPU Utilization
◦ Keep the CPU as busy as possible.
2. Throughput
◦ The number of processes that are completed per time unit.
3. Turnaround Time
◦ The interval from the time of submission of a process to the time of completion.
◦ The sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting Time
◦ The sum of the periods spent waiting in the ready queue.
5. Response Time
◦ The time from the submission of a request until the first response is produced.
◦ The time it takes to start responding, not the time it takes to output the response.

• It is more important to minimize the variance in the response time than to minimize the average response time.

SCHEDULING ALGORITHMS

• First-Come, First-Served Scheduling (FCFS)
◦ The process that requests the CPU first is allocated the CPU first.
◦ Managed with a FIFO queue.
◦ Process: on entering the ready queue, a process's PCB is linked onto the tail of the queue. The free CPU is allocated to the process at the head of the queue, and that process is removed from the ready queue.
◦ Negative: the average waiting time under the FCFS policy is often quite long, and it may vary substantially if the processes' CPU burst times vary greatly.
◦ Convoy Effect – all the other processes wait for the one big process to get off the CPU.
◦ Results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
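
To see why burst-time ordering matters, here is a minimal sketch (not from the chapter; the burst times are made-up example values) that computes the FCFS average waiting time:

    #include <stdio.h>

    /* FCFS sketch: processes run in arrival order, so each one waits for
     * the combined burst time of everything ahead of it in the queue. */
    int main(void) {
        int burst[] = {24, 3, 3};            /* hypothetical CPU bursts */
        int n = sizeof burst / sizeof *burst;
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;              /* waiting time of process i */
            wait += burst[i];                /* everyone behind waits this much more */
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }

With bursts 24, 3, 3 this prints 17.00; serving the two short bursts first gives 3.00 instead, which is the convoy effect in miniature.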

• Shortest-Job-First Scheduling (SJF)
◦ Associates with each process the length of the process's next CPU burst.
◦ The available CPU is assigned to the process that has the smallest next CPU burst.
◦ FCFS scheduling is used to break ties.
◦ Pro: gives the minimum average waiting time for a given set of processes – moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process.
◦ Negative: there is no way of knowing the length of the next CPU burst, so we can only try to approximate SJF scheduling.
◦ SJF scheduling is used frequently in long-term scheduling.
◦ The next CPU burst is predicted as an exponential average of the measured lengths of previous CPU bursts.
◦ Can be either pre-emptive or non-pre-emptive.
◦ Pre-emptive SJF scheduling == shortest-remaining-time-first scheduling.
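
The exponential average can be written as tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the measured length of the most recent burst and tau(n) is the previous prediction. A minimal sketch, assuming alpha = 0.5 and an illustrative burst history:

    #include <stdio.h>

    /* Predict the next CPU burst as an exponential average of past bursts. */
    static double predict(double tau, double t, double alpha) {
        return alpha * t + (1.0 - alpha) * tau;
    }

    int main(void) {
        double tau = 10.0;                       /* initial guess tau_0 */
        double bursts[] = {6, 4, 6, 4, 13, 13};  /* measured burst lengths */
        for (int i = 0; i < 6; i++) {
            printf("predicted %5.2f   actual %2.0f\n", tau, bursts[i]);
            tau = predict(tau, bursts[i], 0.5);
        }
        return 0;
    }

Recent bursts dominate the prediction because each older burst is weighted by a successively higher power of (1 - alpha).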

PRIORITY SCHEDULING

• The SJF algorithm is a special case of the general priority-scheduling algorithm.
• Equal-priority processes are scheduled in FCFS order.
• Priorities can be defined either internally or externally. Internal priorities use some measurable quantity or quantities to compute the priority of a process. External priorities are set by criteria outside the operating system.
• Preemptive or non-preemptive:
◦ When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
• A major problem with priority-scheduling algorithms is indefinite blocking, or starvation.
• Starvation – a process that is ready to run but waiting for the CPU can be considered blocked. A steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
• Solution: Aging – gradually increasing the priority of processes that wait in the system for a long time (see the sketch below).
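
A minimal aging sketch, assuming a hypothetical process record in which a lower number means a higher priority; the tick interval and step size are arbitrary example choices:

    #include <stdio.h>

    struct proc { int pid; int priority; int waiting_ticks; };

    /* Raise the priority of every waiting process as clock ticks pass,
     * so no process can starve indefinitely. */
    static void age(struct proc *ready, int n) {
        for (int i = 0; i < n; i++) {
            ready[i].waiting_ticks++;
            /* every 10 ticks spent waiting, move up one priority level */
            if (ready[i].waiting_ticks % 10 == 0 && ready[i].priority > 0)
                ready[i].priority--;
        }
    }

    int main(void) {
        struct proc ready[] = { {1, 127, 0}, {2, 3, 0} };
        for (int tick = 0; tick < 100; tick++)
            age(ready, 2);
        /* prints 117: ten levels higher than where process 1 started */
        printf("pid 1 priority after 100 ticks: %d\n", ready[0].priority);
        return 0;
    }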

ROUND-ROBIN SCHEDULING (RR)

• Designed for time-sharing systems.
• Preemption is added to enable the system to switch between processes.
• Time Quantum – a small unit of time.
• The ready queue is treated as a circular queue.
• How it works:
◦ The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
◦ The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
• Treated like a FIFO queue – new processes are added to the tail of the ready queue.
• The average waiting time is often long.
• Each process must wait no longer than (n − 1) × q time units until its next time quantum.
• Performance depends on the size of the quantum:
◦ Extremely large – the RR policy is the same as FCFS.
◦ Extremely small – a large number of context switches.
◦ We want the time quantum to be large with respect to the context-switch time. General rule: 80 percent of the CPU bursts should be shorter than the time quantum.
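
A minimal RR sketch (made-up burst times, all processes assumed to arrive at time 0, context-switch cost ignored), showing how completion times fall out of granting each process at most one quantum per pass around the queue:

    #include <stdio.h>

    int main(void) {
        int remaining[] = {24, 3, 3};   /* hypothetical CPU bursts */
        int finish[3] = {0};
        int n = 3, q = 4, t = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {   /* one pass around the circular queue */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                t += slice;                 /* run for up to one quantum */
                remaining[i] -= slice;
                if (remaining[i] == 0) { finish[i] = t; left--; }
            }
        }
        for (int i = 0; i < n; i++)
            printf("P%d completes at %d\n", i + 1, finish[i]);
        return 0;
    }

With q = 4 the processes complete at times 30, 7, and 10, for waiting times of 6, 4, and 7 (average about 5.66) – far better than the 17.00 that FCFS gives on the same bursts.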

MULTILEVEL QUEUE SCHEDULING

• Processes are easily classified into different groups.
• A division is made between foreground (interactive) processes and background (batch) processes – they have different response-time requirements.
• How it works:
◦ Partitions the ready queue into several separate queues.
◦ Processes are permanently assigned to one queue, generally based on some property of the process:
▪ Memory size
▪ Process priority
▪ Process type
◦ Each queue has its own scheduling algorithm: RR, FCFS, etc.
◦ Scheduling among the queues: fixed-priority preemptive scheduling.
◦ This setup has the advantage of low scheduling overhead, but it is inflexible.

MULTILEVEL FEEDBACK QUEUE SCHEDULING

• Allows a process to move between queues.
• Separates processes according to the characteristics of their CPU bursts – if a process uses too much CPU time, it will be moved to a lower-priority queue.
• Uses aging: a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
• Defined by the following parameters (a configuration sketch follows this list):
◦ The number of queues
◦ The scheduling algorithm for each queue
◦ The method used to determine when to upgrade a process to a higher-priority queue
◦ The method used to determine when to demote a process to a lower-priority queue
◦ The method used to determine which queue a process will enter when it needs service
• The most general and the most complex CPU-scheduling algorithm.
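
One way to picture those parameters is as a configuration record. The struct below is an illustrative sketch (all names are hypothetical, not a real kernel interface), filled in with a common three-level example: RR with q = 8, then RR with q = 16, then FCFS:

    #include <stdio.h>

    enum policy { RR, FCFS };

    struct mlfq_level {
        enum policy algorithm;   /* scheduling algorithm for this queue */
        int quantum;             /* time quantum, used when algorithm == RR */
    };

    struct mlfq_config {
        int nqueues;                  /* the number of queues                  */
        struct mlfq_level level[3];   /* the algorithm for each queue          */
        int demote_after_quanta;      /* when to demote to a lower queue       */
        int promote_after_wait;       /* when aging upgrades to a higher queue */
        int entry_queue;              /* which queue a new process enters      */
    };

    int main(void) {
        struct mlfq_config cfg = {
            .nqueues = 3,
            .level = { {RR, 8}, {RR, 16}, {FCFS, 0} },
            .demote_after_quanta = 1,
            .promote_after_wait  = 100,
            .entry_queue         = 0,
        };
        printf("%d queues, top-level quantum %d\n", cfg.nqueues, cfg.level[0].quantum);
        return 0;
    }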

THREAD SCHEDULING

• User-level vs kernel-level threads (NB!):
◦ Kernel-Level Threads:
Definition:
All thread operations are implemented in the kernel, and the OS schedules all threads in the system. OS-managed threads are called kernel-level threads or lightweight processes.

Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.

Disadvantages:
Kernel-level threads are slow and inefficient.
A full thread control block (TCB) is required for each thread to maintain information about it. As a result there is significant overhead and increased kernel complexity.

◦ User-Level Threads:
To make threads cheap and fast, they need to be implemented at user level.
Definition:
User-level threads are managed entirely by the run-time system (user-level library). The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
They are small and fast – each thread is represented by a PC, registers, a stack, and a small thread control block. Creating a new thread, switching between threads, and synchronizing threads are done via procedure calls.

Advantages:
A user-level threads package can be implemented on an operating system that does not support threads.
Does not require modification of the operating system.
Simple management.
Simple representation.
Fast and efficient.

Disadvantages:
Not well integrated with the OS.
Lack of coordination between the threads and the operating-system kernel.
User-level threads require non-blocking system calls.

• Contention Scope
◦ The difference between kernel threads and user threads lies in how they are scheduled.
◦ Process-Contention Scope (PCS):
▪ On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP.
◦ System-Contention Scope (SCS):
▪ Used to decide which kernel-level thread to schedule onto a CPU.
◦ The one-to-one model is used by Windows, Linux, and Solaris – these systems schedule threads using only SCS.
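
With POSIX Pthreads, the contention scope of a new thread is selected through pthread_attr_setscope(). A minimal sketch (compile with -pthread; note that Linux supports only PTHREAD_SCOPE_SYSTEM, so requesting PCS fails there):

    #include <pthread.h>
    #include <stdio.h>

    static void *work(void *arg) {
        (void)arg;
        puts("thread running");
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;
        int scope;

        pthread_attr_init(&attr);
        /* request system contention scope: the kernel schedules this thread */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
            fprintf(stderr, "could not set SCS\n");

        pthread_attr_getscope(&attr, &scope);
        printf("scope = %s\n", scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

        pthread_create(&tid, &attr, work, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }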

MULTIPLE-PROCESSOR SCHEDULING

• If multiple CPUs are available, load sharing becomes possible—but the scheduling problems become correspondingly more complex.
• Approaches to multiple-processor scheduling:
◦ All scheduling decisions, I/O processing, and other system activities are handled by a single processor—the master server.
◦ The other processors execute only user code.
◦ Asymmetric Multiprocessing – only one processor accesses the system data structures, reducing the need for data sharing.
◦ Symmetric Multiprocessing (SMP) – each processor is self-scheduling. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.
• Processor Affinity
◦ Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migrating processes from one processor to another and instead attempt to keep a process running on the same processor.
◦ That is, a process has an affinity for the processor on which it is currently running.
◦ Soft Affinity: the operating system has a policy of attempting to keep a process running on the same processor—but does not guarantee that it will do so.
◦ Hard Affinity: a process may specify a subset of processors on which it may run (a Linux sketch follows).
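
On Linux, hard affinity is requested with the sched_setaffinity(2) system call. A minimal sketch that pins the calling process to two illustrative CPUs (Linux-specific, not portable POSIX):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                 /* allow CPU 0 */
        CPU_SET(1, &mask);                 /* allow CPU 1 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof mask, &mask) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPUs 0 and 1\n");
        return 0;
    }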

• Load Balancing
◦ Keep the workload balanced on SMP systems, to utilize the benefits of having more than one processor.
◦ Otherwise, one or more processors may sit idle while other processors have high workloads, along with lists of processes awaiting the CPU.
◦ Purpose: attempts to keep the workload evenly distributed across all processors in an SMP system.
◦ Necessary only on systems where each processor has its own private queue of eligible processes to execute.
◦ Approaches:
▪ Push Migration
A specific task periodically checks the load on each processor and—if it finds an imbalance—evenly distributes the load by moving (or pushing) processes from overloaded to idle or less-busy processors.
▪ Pull Migration
Occurs when an idle processor pulls a waiting task from a busy processor.
◦ The benefit of keeping a process running on the same processor is that the process can take advantage of its data being in that processor's cache memory. Pushing or pulling removes this benefit.
• Multicore Processors
◦ SMP systems have allowed several threads to run concurrently by providing multiple physical processors.
◦ Definition:
Placing multiple processor cores on the same physical chip.
SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip.
◦ Memory Stall – when a processor accesses memory, it spends time waiting for the data to become available.
◦ In general, there are two ways to multithread a processing core:
▪ Coarse-grained multithreading
A thread executes on a processor until a long-latency event such as a memory stall occurs.
▪ Fine-grained multithreading
Switches between threads at a much finer level of granularity—typically at the boundary of an instruction cycle. Includes logic for thread switching => low cost in switching between threads.

REAL-TIME CPU SCHEDULING

• Soft real-time systems and hard real-time systems:
◦ Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled. They guarantee only that the process will be given preference over noncritical processes.
◦ Hard real-time systems have stricter requirements. A task must be serviced by its deadline; service after the deadline has expired is the same as no service at all.
• Minimizing Latency
◦ Event Latency – the amount of time that elapses from when an event occurs to when it is serviced.
◦ Two types of latencies affect the performance of real-time systems:
▪ Interrupt Latency – the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt.
▪ Dispatch Latency – the amount of time required for the scheduling dispatcher to stop one process and start another. To keep dispatch latency low, provide preemptive kernels.
◦ When an interrupt occurs:
▪ The OS must first complete the instruction it is executing and determine the type of interrupt that occurred.
▪ The state of the current process is saved before the interrupt is serviced using the specific interrupt service routine (ISR).
▪ The total time to perform these tasks is the interrupt latency.
◦ The conflict phase of dispatch latency has two components:
▪ Preemption of any process running in the kernel.
▪ Release by low-priority processes of resources needed by a high-priority process.
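
On Linux, a process can request a real-time scheduling class with the sched_setscheduler(2) system call; under SCHED_FIFO it preempts all ordinary SCHED_OTHER tasks. A minimal sketch (priority 50 is an arbitrary example value; requires root or CAP_SYS_NICE):

    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param param = { .sched_priority = 50 };

        /* pid 0 means "the calling process" */
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running under SCHED_FIFO\n");
        return 0;
    }

Note that stock Linux with SCHED_FIFO gives soft real-time behavior only; a hard real-time system additionally needs bounded latencies throughout the kernel.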

PRIORITY-BASED SCHEDULING (pg.285)
