
Chapter Four

Scheduling and dispatch

4.1. Preemptive and non-preemptive scheduling

Basic Concepts

 Almost all programs have some alternating cycle of CPU number crunching and waiting
for I/O of some kind. Even a simple fetch from memory takes a long time relative to CPU
speeds.
 In a simple system running a single process, the time spent waiting for I/O is wasted, and
those CPU cycles are lost forever.
 A scheduling system allows one process to use the CPU while another is waiting for I/O,
thereby making full use of otherwise lost CPU cycles.
 The challenge is to make the overall system as efficient and fair as possible, subject to
varying and often dynamic conditions, and where efficient and fair are somewhat
subjective terms, often subject to shifting priority policies.

CPU-I/O Burst Cycle

 Almost all processes alternate between two states in a continuing cycle:


o A CPU burst of performing calculations, and
o An I/O burst, waiting for data transfer in or out of the system.

Preemptive Scheduling

 CPU scheduling decisions take place under one of four conditions:


1. When a process switches from the running state to the waiting state, such as for an
I/O request or invocation of the wait() system call.
2. When a process switches from the running state to the ready state, for example in
response to an interrupt.

3. When a process switches from the waiting state to the ready state, say at
completion of I/O or a return from wait().
4. When a process terminates.
 For conditions 1 and 4 there is no choice - A new process must be selected.
 For conditions 2 and 3 there is a choice - To either continue running the current process,
or select a different one.
 If scheduling takes place only under conditions 1 and 4, the system is said to be non-
preemptive, or cooperative. Under these conditions, once a process starts running it keeps
running, until it either voluntarily blocks or until it finishes. Otherwise the system is said
to be preemptive.
 Windows used non-preemptive scheduling up to Windows 3.x, and switched to
preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to
OS X, and preemptive scheduling since then. Note that preemptive scheduling is only
possible on hardware that supports a timer interrupt.
 Note that preemptive scheduling can cause problems when two processes share data,
because one process may get interrupted in the middle of updating shared data structures.
Chapter 5 examines this issue in greater detail.
 Preemption can also be a problem if the kernel is busy implementing a system call (e.g.
updating critical kernel data structures) when the preemption occurs. Most modern UNIX
operating systems deal with this problem by making the process wait until the system call
has either completed or blocked before allowing the preemption. Unfortunately this
solution is problematic for real-time systems, as real-time response can no longer be
guaranteed.
 Some critical sections of code protect themselves from concurrency problems by
disabling interrupts before entering the critical section and re-enabling interrupts on
exiting the section. Needless to say, this should only be done in rare situations, and only
on very short pieces of code that will finish quickly (usually just a few machine
instructions).

4.2. Schedulers and policies

CPU Scheduler

 Whenever the CPU becomes idle, it is the job of the CPU scheduler (a.k.a. the short-term
scheduler) to select another process from the ready queue to run next.
 The storage structure for the ready queue and the algorithm used to select the next
process are not necessarily a FIFO queue. There are several alternatives to choose from,
as well as numerous adjustable parameters for each algorithm.

Dispatcher

 The dispatcher is the module that gives control of the CPU to the process selected by the
scheduler. This function involves:
o Switching context.
o Switching to user mode.
o Jumping to the proper location in the newly loaded program.
 The dispatcher needs to be as fast as possible, as it is run on every context switch. The
time consumed by the dispatcher is known as dispatch latency.

Scheduling Criteria

 There are several different criteria to consider when trying to select the "best" scheduling
algorithm for a particular situation and environment, including:
o CPU utilization - Ideally the CPU would be busy 100% of the time, so as to
waste 0 CPU cycles. On a real system CPU usage should range from 40% (lightly
loaded) to 90% (heavily loaded.)
o Throughput - Number of processes completed per unit time. May range from
10 / second to 1 / hour depending on the specific processes.

o Turnaround time - Time required for a particular process to complete, from
submission time to completion (wall-clock time): TAT = completion time - arrival time.
o Waiting time - How much time processes spend in the ready queue waiting their
turn to get on the CPU.
 (Load average - The average number of processes sitting in the ready
queue waiting their turn to get onto the CPU. Reported in 1-minute, 5-minute,
and 15-minute averages by uptime and w.)
o Response time - The time taken in an interactive program from the issuance of a
command to the commencement of a response to that command, i.e. the time from
when a process enters the ready queue until it first gets the CPU.
 In general one wants to optimize the average value of a criterion (maximize CPU
utilization and throughput, and minimize all the others). However, sometimes one wants
to do something different, such as to minimize the maximum response time.
 Sometimes it is more desirable to minimize the variance of a criterion than its average
value, i.e. users are more accepting of a consistent, predictable system than an
inconsistent one, even if it is a little bit slower.
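These criteria can be computed directly from a completed schedule. The following Python sketch (the function name and sample data are illustrative, not from the text) derives turnaround and waiting time from each process's arrival, burst, and completion times:

# Derive turnaround and waiting time for each process from a finished
# schedule: turnaround = completion - arrival; waiting = turnaround - burst.

def metrics(processes):
    """processes: list of (name, arrival, burst, completion) tuples."""
    results = {}
    for name, arrival, burst, completion in processes:
        turnaround = completion - arrival
        results[name] = (turnaround, turnaround - burst)
    return results

# Illustrative data: three processes that ran back-to-back on one CPU.
sample = [("P1", 0, 24, 24), ("P2", 0, 3, 27), ("P3", 0, 3, 30)]
for name, (tat, wait) in metrics(sample).items():
    print(name, "turnaround =", tat, "ms, waiting =", wait, "ms")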

Scheduling Algorithms

The following subsections will explain several common scheduling strategies, looking at only a
single CPU burst each for a small number of processes. Obviously real systems have to deal with
a lot more simultaneous processes executing their CPU-I/O burst cycles.

First-Come First-Served Scheduling, FCFS

 FCFS is very simple; it is just a FIFO queue, like customers waiting in line at the bank or
the post office or at a copying machine.
 Unfortunately, however, FCFS can yield some very long average wait times, particularly
if the first process to get there takes a long time. For example, consider the following
three processes and their CPU burst times:

Process Burst Time
P1 24 ms
P2 3 ms
P3 3 ms

 In the first case the processes arrive in the order P1, P2, P3. The Gantt chart for the
schedule is P1 (0-24), P2 (24-27), P3 (27-30), so the average waiting time for the three
processes is (0 + 24 + 27)/3 = 17.0 ms.
 Now suppose the processes arrive in the order P2, P3, P1. The Gantt chart is P2 (0-3),
P3 (3-6), P1 (6-30), giving waiting times of 6 ms for P1, 0 ms for P2, and 3 ms for P3,
for an average of (6 + 0 + 3)/3 = 3.0 ms, much better than the previous case.
 The total run time for the three bursts is the same in both cases, but with the second
ordering two of the three processes finish much quicker, and the remaining process is
only delayed by a short amount.

This illustrates the convoy effect: short processes get stuck waiting behind one long process.
Consider one CPU-bound process and many I/O-bound processes: while the CPU-bound
process holds the CPU, the I/O-bound processes finish their I/O and pile up in the ready
queue, leaving the I/O devices idle.
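To make the FCFS arithmetic concrete, here is a minimal Python sketch (the function name is my own) that computes the waiting times for the two arrival orders discussed above:

# Under FCFS, each process waits for the total burst time of everything
# ahead of it in the queue.

def fcfs_waits(bursts):
    """bursts: list of burst times in arrival order; returns waits in order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waits(bursts)
    print(order, waits, "average =", sum(waits) / len(waits), "ms")
# Prints average waits of 17.0 and 3.0 ms, matching the two cases above.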

Shortest-Job-First Scheduling, SJF

 The idea behind the SJF algorithm is to pick the quickest little job that needs to be done,
get it out of the way first, and then pick the next smallest job to do next.
 (Technically this algorithm picks a process based on the next shortest CPU burst, not the
overall process time.)
 For example, consider the following CPU burst times, with the assumption that all jobs
arrive at the same time:

Process Burst Time
P1 6 ms
P2 8 ms
P3 7 ms
P4 3 ms

 SJF runs the jobs in the order P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24), so the average
wait time is (0 + 3 + 9 + 16)/4 = 7.0 ms, as opposed to 10.25 ms for FCFS on the same
processes.
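A minimal non-preemptive SJF sketch in Python (assuming, as above, that all jobs arrive together; names are illustrative):

# Non-preemptive SJF with simultaneous arrivals: sort by burst time,
# then waits accumulate just as in FCFS on the sorted order.

def sjf_waits(jobs):
    """jobs: dict of name -> burst time; returns dict of name -> wait."""
    waits, elapsed = {}, 0
    for name, burst in sorted(jobs.items(), key=lambda kv: kv[1]):
        waits[name] = elapsed
        elapsed += burst
    return waits

waits = sjf_waits({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits, "average =", sum(waits.values()) / len(waits), "ms")  # 7.0 ms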
 SJF can be proven to be optimal, in that it gives the minimum average waiting time, but
it suffers from one important problem: how do you know how long the next CPU burst is
going to be?

 For long-term batch jobs this can be done based upon the limits that users set for their
jobs when they submit them, which encourages them to set low limits, but risks their
having to re-submit the job if they set the limit too low. However, that does not work for
short-term CPU scheduling on an interactive system.
 Another option would be to statistically measure the run-time characteristics of jobs,
particularly if the same tasks are run repeatedly and predictably. But once again that
really isn't a viable option for short-term CPU scheduling in the real world; schedulers
instead predict the next burst from the process's own recent history, as sketched below.
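The standard predictor (not detailed in the text above) is an exponential average of the process's measured bursts: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the length of the most recent burst and tau(n) is the previous prediction. A minimal sketch, with an illustrative initial guess and alpha:

# Exponential-average prediction of the next CPU burst.
# alpha = 0.5 weights the latest measurement and past history equally.

def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    tau = tau0  # initial guess, used before any history exists
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))  # prediction drifts toward 13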

 SJF can be either preemptive or non-preemptive. Preemption occurs when a new process
arrives in the ready queue that has a predicted burst time shorter than the time remaining
in the process whose burst is currently on the CPU. Preemptive SJF is sometimes referred
to as shortest remaining time first scheduling.
 For example, consider the following arrival and burst times:

Process Arrival Time Burst Time
P1 0 ms 8 ms
P2 1 ms 4 ms
P3 2 ms 9 ms
P4 3 ms 5 ms

 The preemptive schedule is P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26). The
average wait time in this case is ((10 - 1) + 0 + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 ms, as
opposed to 7.75 ms for non-preemptive SJF or 8.75 ms for FCFS.
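A compact simulation of shortest-remaining-time-first in Python (an illustrative time-stepped sketch using the data above, not production scheduler code):

# Shortest-remaining-time-first: at each time unit, run the arrived
# process with the least remaining time; preemption falls out naturally.

def srtf_waits(procs):
    """procs: dict of name -> (arrival, burst); returns name -> wait."""
    remaining = {name: burst for name, (arrival, burst) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:  # CPU idle until the next arrival
            t += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = t
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

waits = srtf_waits({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits, "average =", sum(waits.values()) / len(waits), "ms")  # 6.5 ms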

Priority Scheduling

 Priority scheduling is a more general case of SJF, in which each job is assigned a priority
and the job with the highest priority gets scheduled first. SJF uses the inverse of the next
expected burst time as its priority - The smaller the expected burst, the higher the priority.
 Note that in practice, priorities are implemented using integers within a fixed range, but
there is no agreed-upon convention as to whether "high" priorities use large numbers or
small numbers. These notes use low numbers for high priorities, with 0 being the highest
possible priority.

 For example, the following process burst times and priorities:

Process Burst Time Priority
P1 10 ms 3
P2 1 ms 1
P3 2 ms 4
P4 1 ms 5
P5 5 ms 2

yield the schedule P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19), and an average
waiting time of (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms.

 Priorities can be assigned either internally or externally. Internal priorities are assigned
by the OS using criteria such as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel. External priorities are assigned by
users, based on the importance of the job, fees paid, politics, etc.
 Priority scheduling can be either preemptive or non-preemptive.
 Priority scheduling can suffer from a major problem known as indefinite blocking, or
starvation, in which a low-priority task can wait forever because there are always some
other jobs around that have higher priority.
o If this problem is allowed to occur, then processes will either run eventually when
the system load lightens (at, say, 2:00 a.m.), or will eventually get lost when the
system is shut down or crashes. (There are rumors of jobs that have been stuck for
years.)

o One common solution to this problem is aging, in which the priorities of jobs
increase the longer they wait. Under this scheme a low-priority job will eventually
get its priority raised high enough that it gets run, as sketched below.
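A minimal Python sketch of aging (the aging step and the sample priorities are illustrative assumptions; lower numbers mean higher priority, as above):

# Priority scheduling with aging: every time a process is passed over,
# its priority number drops (its priority rises), so nothing starves.

def pick_next(ready, age_step=1):
    """ready: dict of name -> priority number; returns the chosen name."""
    chosen = min(ready, key=ready.get)
    for name in ready:
        if name != chosen:
            ready[name] = max(0, ready[name] - age_step)  # age the rest
    return chosen

ready = {"P1": 3, "P2": 1, "P3": 4, "P4": 5, "P5": 2}
while ready:
    name = pick_next(ready)
    del ready[name]  # the chosen process runs its burst and departs
    print(name, "runs; remaining priorities:", ready)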

Round Robin Scheduling

 Round robin scheduling is similar to FCFS scheduling, except that each process is given
the CPU for only a limited time slice, called a time quantum.
 When a process is given the CPU, a timer is set for whatever value has been set for a time
quantum.
o If the process finishes its burst before the time quantum expires, then it releases
the CPU voluntarily, just as in the normal FCFS algorithm.
o If the timer goes off first, then the process is swapped out of the CPU and moved
to the back end of the ready queue.
 The ready queue is maintained as a circular queue, so when all processes have had a turn,
then the scheduler gives the first process another turn, and so on.
 RR scheduling can give the effect of all processes sharing the CPU equally, although the
average wait time can be longer than with other scheduling algorithms. For example,
with the same three processes as before (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms) and a time
quantum of 4 ms, the schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1 alone for the
remainder of its burst (10-30). The waiting times are 6, 4, and 7 ms, for an average of
17/3 = 5.66 ms.

 The performance of RR is sensitive to the time quantum selected. If the quantum is large
enough, then RR reduces to the FCFS algorithm; if it is very small, then each process
gets 1/nth of the processor time and they effectively share the CPU equally.
 BUT, a real system incurs overhead for every context switch, and the smaller the time
quantum the more context switches there are. Most modern systems use time quanta
between 10 and 100 milliseconds and have context switch times on the order of 10
microseconds, so the overhead is small relative to the time quantum.
 The smaller the time quantum, the more context switches are needed to complete the
same CPU burst, and the more time is lost to switching overhead.
 Turnaround time also varies with the quantum in a non-obvious way; in general it does
not improve steadily as the quantum grows, and tends to be best when most processes
finish their next CPU burst within a single quantum.
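A minimal round-robin simulation in Python (the queue discipline and the sample data reproduce the 5.66 ms example above):

from collections import deque

# Round robin: run each process for at most one quantum, then move it
# to the back of the ready queue if its burst is not yet finished.

def rr_waits(bursts, quantum):
    """bursts: dict of name -> burst time (all arriving at t = 0)."""
    queue = deque(bursts.items())
    finish, t = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))
        else:
            finish[name] = t
    return {n: finish[n] - bursts[n] for n in bursts}

waits = rr_waits({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits, "average =", sum(waits.values()) / len(waits))  # about 5.66 ms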

4.3. Processes and threads

A process is fundamentally defined by having a unique virtual memory space. The process
contains a code segment that consists of the executable machine-language instructions, while the
statically allocated global variables are stored in the data segment. The heap and stack segments
contain the dynamically allocated data and local variables for this execution of the program.

A thread is a single sequential stream of execution within a process. Threads are also called
lightweight processes because they possess some of the properties of processes. Each thread
belongs to exactly one process, and in an operating system that supports multithreading a
process can consist of many threads. Threads can execute truly in parallel only when there is
more than one CPU; otherwise the threads must take turns on the single CPU via context
switches. In a process, a thread refers to a single sequential activity being executed; such an
activity is also known as a thread of execution or a thread of control.

Multithreading

Multithreaded processes have multiple threads that perform tasks concurrently. Just like the
thread that runs the code in main(), additional threads each use a function as an entry point. To
maintain the logical flow of these additional threads, each thread is assigned a separate stack.

However, all of the other segments of memory, including the code, global data, heap, and kernel,
are shared.
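A minimal Python illustration of this structure (the worker function and data are my own): each thread starts at its own entry-point function, with its locals on its own stack, while the global lives in memory shared by every thread:

import threading

shared_total = 0         # global data segment, visible to every thread
lock = threading.Lock()  # guards the shared global

def worker(amount):
    """Entry point for an additional thread; `amount` lives on its stack."""
    global shared_total
    with lock:
        shared_total += amount

threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_total)  # 1 + 2 + 3 + 4 = 10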

Another way to consider the relationship between threads and processes is to separate the system
functions of scheduling and resource ownership. Threads are the unit of scheduling: switching
from one thread to another changes which computational goal the system is currently working
toward, and the schedule controls when each thread runs on the CPU.

On the other hand, processes act as containers for resource ownership. As the process runs, it
may request access to files stored on the disk, open network connections, or request the creation
of a new window on the desktop. All of these resources are allocated to the process, not
individual threads.

Using multiple threads offers a number of advantages over creating an application from multiple
processes. First, using multiple threads helps programmers build modularity into their code more
effectively. Complex software, especially the types of applications built with object-oriented
programming, typically involves modular components that interact with each other. Oftentimes,
the program can run more efficiently if these components run concurrently as threads, without
the overhead of full process context switches. Threads provide a foundation for this
programming goal.

One common example of using threads for different purposes is an interactive graphical user
interface. Application programmers build in keyboard or mouse listeners that are responsible for
detecting and responding to key presses, mouse clicks, and other such events. These types of
event listeners are simply concurrent threads within the process. By implementing the listener
behavior in a separate thread, the programmer can simplify the structure of the program.

Threads also require significantly less overhead for switching and for communicating. A process
context switch requires changing the system's current view of virtual memory, which is a time-
consuming operation; switching from one thread to another is much faster. For two processes to
exchange data, they have to initiate interprocess communication (IPC), which requires asking the
kernel for help and generally involves at least one context switch. Since all threads in a process
share the heap, data, and code, there is no need to get the kernel involved: the threads can
communicate directly by reading and writing global variables.

At the same time, the lack of isolation between threads in a single process can also lead to some
disadvantages. If one thread crashes due to a segmentation fault or other error, all other threads
and the entire process are killed. This situation can lead to data and system corruption if the other
threads were in the middle of some important task.

As an example, assume that the application is a server program with one thread responsible for
logging all requests and a separate thread for handling the requests. If the request handler thread
crashes before the logging thread has a chance to write the request to its log file, there would be
no record of the request. The system administrators left to determine what went wrong would
have no information about the request, so they may waste a lot of time validating other requests
that were all good.

The lack of isolation between threads also creates a type of programming bug called a race
condition, in which the behavior of the program depends on the timing of the thread scheduling.
One example of a race condition is when two threads both try to change the value of a global
variable: the final value of the global variable depends on which thread set the value last, as the
sketch below illustrates.
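A minimal Python demonstration of such a race (the deliberately unsynchronized read-modify-write on the global counter is the bug; the exact result varies from run to run):

import threading

counter = 0  # shared global; the two threads race on it

def increment(n):
    global counter
    for _ in range(n):
        temp = counter  # read
        temp += 1       # modify
        counter = temp  # write; a concurrent update made between the
                        # read and this write is silently lost

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # frequently less than 200000 because of lost updates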

4.4. Deadlines and real-time issues


Real time is the actual time during which something takes place. A real-time embedded system
is a system that responds to real-world events, with the help of its embedded software and
hardware, within a specified time constraint. Equivalently, it is a system composed of hardware,
application software, and a real-time operating system.
Characteristics of real-time embedded systems:
A. Consistent response: A real-time embedded system always responds in the same manner to a
given situation; it is not allowed to deviate from its designated output. An air conditioner is
not allowed to blow hot air in summer.
B. Deadline: The system must respond to an event or request within a strictly defined time.
This is crucial to the working of an embedded system; a missed deadline can cost lives and
money.
C. Accuracy: The system should perform exact and accurate tasks, since any malfunction or
failure can cause destruction. What would happen if a heartbeat regulator could not maintain
the patient's heartbeat? The patient would eventually die.
D. Quick response: This is the most important characteristic of all; the real-time embedded
system must be swift enough to respond to a changing external environment with immediate
effect.
