
PROCESSES AND PROCESS MANAGEMENT

SCHOOL OF COMPUTING   RIEL A. GOMEZ


INTRODUCTION TO PROCESSES

• A program is an executable file containing the set of instructions written to perform a specific job on your computer.

  It is just a file stored on secondary storage, so it is simply a passive or inanimate entity.

  For example, winword.exe is an executable file containing the set of instructions which allows users to create and edit documents.

• A process is an executing instance of a program. For example, when a user double-clicks the Microsoft Word icon, a process is started that will run the word processing program.

  A process is an active entity: it resides in primary memory and leaves the memory if the system is rebooted.

  Several processes may be related to the same program. For example, a user can run multiple instances of the Word program (editing different files).

• A process has three components:

  1. The executable program itself.

  2. The address space of the process, which is the range of addresses of main memory locations assigned to it by the operating system.

     It is divided into three parts: the code region, the data region, and the stack region. The code region contains the program code that is being executed, the data region contains the data and variables used by the process, and the stack region contains the last-in, first-out stack of the process.

  3. The execution context of the process, which consists of:

     a. the values of the internal CPU registers being used,
     b. the status of the process (whether it is executing or waiting for the completion of an I/O operation),
     c. the priority of the process,
     d. and other information needed by the operating system to execute the process.

• Generally, a process has only one thread of control.

• A thread is the sequence of instructions within a process.

• So having a single thread of control means that the process has only a single path of execution, or a single set of instructions being executed.

• In short, a process can only do one task at a time. Consequently, a single process cannot maximize CPU utilization, since the CPU will be idle once the process requests an I/O operation.

• So in multiprogramming/multitasking, the operating system interleaves the execution of several processes to maximize the utilization of the CPU.

• It is therefore expected that there will be several processes present within any computer system.

• Processes may either be operating system processes (processes executing operating system programs) or user processes (processes executing user programs).


CPU AND I/O BURSTS

• A process alternately uses the CPU and I/O devices in a repetitive fashion.

• There will be an interval of time when a process is using the CPU to execute instructions, followed by an interval of time when it is using an I/O device (or, more accurately, waiting for an I/O operation to finish, such as waiting for a file to be printed), followed by another interval of time when it is using the CPU again, and so on and so forth.

• A CPU burst is the interval of time when the process is using the CPU, while an I/O burst is the interval when the process is waiting for an I/O operation to finish.

• The execution of a process will start with a CPU burst (the process is using the CPU to execute machine instructions such as inc A, load B, add A, B, copy C, B), followed by an I/O burst (the process is waiting for its I/O operation, such as writing to the disk, to finish), then another CPU burst (store C, shift A, sub A, B), another I/O burst (for example, sending output to the printer and waiting for I/O completion), and so on.

• This continues until a final CPU burst signifying process termination is encountered.

• Processes with a few, long CPU bursts are called CPU-bound processes, while those with several, short CPU bursts are called I/O-bound processes.

• CPU-bound processes spend more time using the CPU than doing I/O, while I/O-bound processes spend more time doing I/O than using the CPU.

• In I/O-bound processes, most of the time is spent waiting for the completion of I/O operations.
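
A tiny sketch of the distinction above, with a process modeled simply as a list of its CPU-burst lengths. The threshold and the classify() helper are assumptions for illustration only; they are not from the slides.

    # Classify a process from its observed CPU-burst lengths (illustrative only).
    def classify(cpu_bursts, threshold_ms=10):
        average = sum(cpu_bursts) / len(cpu_bursts)
        return "CPU-bound" if average >= threshold_ms else "I/O-bound"

    print(classify([40, 35, 50]))     # few, long CPU bursts  -> CPU-bound
    print(classify([2, 3, 1, 2, 4]))  # many short CPU bursts -> I/O-bound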


PROCESS STATES

• A process can be in one of several states.

• The state of a process is determined by what the process is currently doing.

• Certain events may cause a process to go from one state to another. The following are the possible states a process can be in:

  1. New State. A process is said to be in the New state if it is being created by the operating system.

     When the computer system is ready to take on this additional process, it will be admitted into the system (loaded into main memory) and it becomes ready to be executed by the CPU.

  2. Ready State. A process is in the Ready state if it is ready to be executed by the CPU.

     Take note that it is not yet executing. It is simply waiting for the operating system to execute it in the CPU. In this state, the process is said to be "runnable." Once the operating system assigns a CPU to it, it starts executing.

  3. Running State. A process is in the Running state when it is being executed by the CPU.

  4. Blocked State. A process is in the Blocked state when it stops executing because it is waiting for some event to happen, such as the completion of an I/O operation.

     For example, a certain process cannot execute while its data are being read from the hard disk or while its output is being printed.

  5. Terminated State. A process is in the Terminated state if it has finished executing or has been aborted for some reason, such as a fatal error.

[State diagram of a process: new -> ready -> running -> terminated, with running -> blocked, blocked -> ready, and running -> ready.]

Sample events causing each state transition:

• New -> Ready: when the newly created process is admitted into the system.

• Ready -> Running: when the operating system assigns the CPU to the waiting process.

• Running -> Blocked: when the running process requests an I/O operation and therefore has to wait for the completion of its request.

• Running -> Ready: when the running process is temporarily halted because of a timeout event (recall that in multitasking, each process is given a certain time limit to use the CPU) or because an interrupt occurred, forcing the CPU to stop the currently running process and execute a higher-priority process.

• Running -> Terminated: when the running process completes its execution or is intentionally halted by the operating system for some reason.

• Blocked -> Ready: when the I/O operation the process is waiting for finishes, or the event it is waiting for occurs.
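
A minimal sketch of the five-state model and the transitions listed above. The state names and the transition() helper are illustrative assumptions, not an API of any real operating system.

    # Each state maps to the set of states it may legally move to.
    TRANSITIONS = {
        "new":        {"ready"},                            # admitted into main memory
        "ready":      {"running"},                          # CPU assigned by the scheduler
        "running":    {"ready", "blocked", "terminated"},   # timeout/interrupt, I/O request, exit
        "blocked":    {"ready"},                            # awaited I/O or event completed
        "terminated": set(),
    }

    def transition(state, new_state):
        """Return the new state, or raise if the move is not in the diagram."""
        if new_state not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition {state} -> {new_state}")
        return new_state

    state = transition("new", "ready")      # admitted
    state = transition(state, "running")    # dispatched by the CPU scheduler
    state = transition(state, "blocked")    # requested an I/O operation
    state = transition(state, "ready")      # I/O operation completed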


PROCESS CONTROL

• Perhaps the most important function of the operating system is managing all the processes that exist within any computer system.

• Since there is only one CPU, the operating system must schedule the processes for execution.

• It is also responsible for allocating the resources needed by each process, such as memory or address space, I/O devices, files, etc.

• The operating system needs certain information in order to efficiently control and manage processes.

• Process Descriptors

  Information about a process is stored in a data structure called the process control block (PCB). It is also called the process descriptor or task control block (TCB). Each process has a PCB associated with it.

  Typical information about a process that is stored in its PCB includes:

  1. The Process ID Number (PID) of the process.

     This is a unique number assigned to each process as it is created by the operating system. The PID allows the operating system to identify each process.

  2. The address space of the process.

     The address space is the range of main memory addresses assigned to the process.

  3. The execution context of the process, which is composed of:

     a. The values of the internal CPU registers used by the process, such as the general-purpose registers containing frequently used data and special registers such as the program counter and the stack pointer.

     b. The current state of the process: whether the process is new, ready, running, blocked, or terminated.

     c. The scheduling priority of the process. This priority is used by the operating system in determining the level of importance of a process when scheduling it for CPU execution.

  4. A list of open files being used by the process.

  5. Some accounting information, such as how much CPU time it has used already, when the process was last executed, etc.

  Whenever a process changes state, the operating system must update all the information stored in the PCB of the process.

  A process table is used by the operating system to help keep track of where the PCB of a particular process can be located.
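
A rough sketch of the PCB fields listed above and of a process table that maps a PID to its PCB. The field names, the create_process() helper, and the sample addresses are illustrative assumptions, not taken from any real kernel.

    from dataclasses import dataclass, field

    @dataclass
    class PCB:
        pid: int                                        # unique Process ID Number
        address_space: tuple                            # (start, end) of assigned main memory
        registers: dict = field(default_factory=dict)   # saved CPU registers (PC, SP, ...)
        state: str = "new"                              # new / ready / running / blocked / terminated
        priority: int = 0                               # scheduling priority
        open_files: list = field(default_factory=list)  # list of open files
        cpu_time_used: int = 0                          # accounting information (ms)

    process_table = {}                                  # PID -> PCB lookup used by the OS

    def create_process(pid, address_space):
        process_table[pid] = PCB(pid, address_space)
        return process_table[pid]

    pcb = create_process(42, (0x4000, 0x8000))
    pcb.state = "ready"        # the OS updates the PCB on every state change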


• Process Operations. The following are the major operations an operating system can perform on processes:

  1. Process Creation. When the operating system creates a process, it creates a PCB for it and adds an entry for it in the process table. It then allocates main memory space for the new process.

     A new process is created for any of the following reasons:

     - The operating system creates a new process to respond to a request of a user.

     - In order to accomplish several tasks, a process may cause the creation of new processes (spawning). The creating process is called the parent process and the created process is called the child process (see the sketch below).
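
A small illustration of spawning: a parent process creates a child process to carry out a task and waits for it to finish. Python's multiprocessing module is used here only to make the parent/child relationship visible; the slides do not prescribe any particular API.

    from multiprocessing import Process
    import os

    def child_task():
        # Runs inside the child process created by the parent.
        print(f"child  PID={os.getpid()}, parent PID={os.getppid()}")

    if __name__ == "__main__":
        print(f"parent PID={os.getpid()}")
        child = Process(target=child_task)   # the parent spawns a child process
        child.start()
        child.join()                         # the parent waits for the child to terminate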


  2. Process Termination. When a process is terminated, the operating system de-allocates its memory space and all other resources that were assigned to it.

     Its PCB is erased, and so is its entry in the process table.

     When the terminated process has spawned other processes, the operating system will usually terminate all its child processes also.

     The following are some of the reasons why a process may be terminated:

     - A process may be terminated because of its normal completion. This means that the process has executed the exit system call, informing the operating system that it has completed executing and may now be terminated.

     - A child process is terminated if its parent has been terminated or if the parent has requested the termination of the child process (usually done if the child process has completed its function and is no longer needed).

     - The process has encountered a fatal error such as:

       * accessing the memory space of other processes,
       * requesting more memory space than what is available,
       * an arithmetic error such as a divide-by-zero operation, and
       * performing an invalid I/O operation, like writing data to an input device or reading data from an output device.


  3. Suspending a Process. There will be situations wherein it is necessary for a process in main memory to be temporarily moved out of its memory space and stored in secondary storage.

     This is typically done to provide memory space for another process (usually one with a higher priority).

     The process that was swapped out of memory is said to be suspended. Take note that suspension is different from being aborted. The process can resume its execution later when it is swapped back into main memory.

  4. Resuming a Process. Resuming a process means that a suspended process is brought back into main memory in order for it to resume execution.


CONTEXT-SWITCHING

• When the operating system switches the CPU from one process to another, it is called a context switch.

• Before the operating system can perform a context switch, it must do two things first:

  1. It must first save the execution context and other pertinent information of the currently executing process into its PCB. Saving these values allows the CPU to restore them later when it goes back to resume execution of the process.

  2. It must then load the execution context of the next process. This allows the CPU to resume that process's execution where it was left off previously. (A simplified sketch of these two steps appears at the end of this section.)

• Context switching is costly to the system in terms of CPU time, because it takes some time for the CPU to save the execution context of the current process and restore the execution context of the next.

• This time period is called the context switch time. Context switch time is the time spent by the CPU to switch from one process to another.

• And while the CPU is performing context switching, it cannot do productive work (executing processes). That is why context switching is considered pure overhead.

[Figure: Process 1 finishes a CPU burst and starts an I/O burst; the OS saves its execution context in its PCB, restores the execution context of Process 2 from its PCB, and starts executing it. The gap between the two CPU bursts is the context switch time.]

• Therefore, design issues of modern operating systems include how to minimize unnecessary context switches and how to improve the efficiency of context switching.

• Some computer architectures have even included hardware support to make context switching faster.

• For example, there are some computer systems that have multiple sets of registers, enough for several processes. So if a context switch occurs, it may no longer be necessary to save the contents of the registers used by a process, since each process has its own dedicated set of registers.
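
A simplified sketch of the two steps described above: save the running process's context into its PCB, then restore the next process's context from its PCB. This is pure illustration under assumed names (CPU, register dictionaries); real context switching happens in kernel mode with hardware support.

    class CPU:
        def __init__(self):
            self.registers = {"pc": 0, "sp": 0}   # program counter, stack pointer

    def context_switch(cpu, current_pcb, next_pcb):
        # 1. Save the execution context of the running process into its PCB.
        current_pcb["registers"] = dict(cpu.registers)
        current_pcb["state"] = "ready"            # or "blocked" if it requested I/O
        # 2. Restore the execution context of the next process from its PCB.
        cpu.registers = dict(next_pcb["registers"])
        next_pcb["state"] = "running"

    cpu = CPU()
    p1 = {"registers": {"pc": 100, "sp": 500}, "state": "running"}
    p2 = {"registers": {"pc": 200, "sp": 900}, "state": "ready"}
    cpu.registers = dict(p1["registers"])         # p1 is currently executing
    context_switch(cpu, p1, p2)                   # p1 starts an I/O burst, p2 runs next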


PRINCIPLES OF SCHEDULING

• Recall that once a process in the Running state requests an I/O operation, it enters the Blocked state, wherein it stops executing and waits for the completion of its I/O operation.

• Since the CPU is idle at this point, the operating system selects one of the processes that are in the Ready state and schedules the CPU to execute that process in order to keep the CPU busy and hence keep its utilization high.

• This selection process is called scheduling.

• However, multiprogramming will only be possible if there is a pool of Ready processes in main memory.

• The more processes available in main memory, the higher the chances of keeping the CPU busy at all times, subject, of course, to the availability of main memory space.

• The number of Ready processes in main memory is called the degree of multiprogramming.


THE JOB AND CPU SCHEDULERS

• As requests for program execution from the user or users come in, the programs that were requested to be executed will be loaded into main memory.

• The processes on the hard disk that are waiting to be loaded into main memory are put into a queue called the job queue.

• The operating system must now perform job scheduling. Job scheduling is the act of selecting which processes in the job queue will be loaded into main memory.

• Specifically, the job scheduler is the part of the operating system that is responsible for job scheduling. It is also called the long-term scheduler.

• The job scheduler is the one that determines and controls the degree of multiprogramming.

• Once loaded into main memory, the processes that are in the Ready state are put in another queue called the ready queue.

• The ready queue is simply composed of processes that are in main memory and waiting to be executed by the CPU.

• The operating system must now perform CPU scheduling, which is the act of selecting which process in the ready queue will be executed by the CPU.

• Specifically, the CPU scheduler, or the short-term scheduler, is the component of the operating system that is responsible for CPU scheduling.

• Once the CPU starts executing a process, it will only be a very short time (a few milliseconds or nanoseconds) before that process requests an I/O operation.

• That particular process will enter the Blocked state, and the CPU scheduler must now select a new process from the ready queue to execute. The same is probably true for the next process. In other words, the CPU scheduler will be executing frequently.

• Therefore, the CPU scheduler must be very fast in selecting a new process to execute because of the very short time between process executions. This is the main reason why it is also called the short-term scheduler.

• On the other hand, the job scheduler does not have to execute frequently, because it takes some time before there is a need to load a new process from the hard disk to main memory. Hence, it is also called the long-term scheduler.

• Queues are an essential part of scheduling. There are other queues the operating system uses aside from the job and ready queues.

• Processes that are waiting to use a particular I/O device are kept in the device queue of that I/O device. Each device has its own device queue.

• For example, processes that are waiting to use the printer are kept in the printer queue, while processes waiting to access the hard disk are kept in the disk queue. (A small sketch of these queues follows below.)
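
A quick sketch of the queues named above, using ordinary double-ended queues. The structure names and the FCFS pick are assumptions for illustration; real schedulers keep richer per-process records (PCBs) in these queues.

    from collections import deque

    job_queue = deque()        # processes on disk waiting to be loaded into memory
    ready_queue = deque()      # processes in memory waiting for the CPU
    device_queues = {"printer": deque(), "disk": deque()}   # one queue per I/O device

    job_queue.extend(["P1", "P2", "P3"])

    # Long-term (job) scheduler: admit a job into main memory.
    ready_queue.append(job_queue.popleft())

    # Short-term (CPU) scheduler: pick the next process to run (FCFS here).
    running = ready_queue.popleft()

    # The running process requests to print, so it joins the printer's device queue.
    device_queues["printer"].append(running)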


NON-PREEMPTIVE AND PREEMPTIVE CPU SCHEDULING

• Non-preemptive Scheduling

  Once the CPU has been assigned to a process and the process starts executing, the CPU cannot be taken away from that process. The process is allowed to complete its current CPU burst. The only time the CPU can be assigned to another process is when:

  - The currently executing process terminates, so the CPU scheduler is allowed to select another process in the Ready state to execute.

  - The currently executing process enters the Blocked state because it requested an I/O operation. In this case, the CPU scheduler is again allowed to select another process to execute.

  Therefore, in non-preemptive scheduling, the CPU scheduler can only assign the CPU to another process if the process currently using the CPU voluntarily gives it up.

• Preemptive Scheduling

  Even though the CPU has been assigned to a process, the CPU scheduler may decide to assign the CPU to another process in the ready queue. The currently executing process has no choice but to change from the Running state to the Ready state and go back to the ready queue. This will occur if:

  - A process with a higher priority than the currently executing process enters the ready queue, so the CPU scheduler will have to immediately assign the CPU to the higher-priority process.

  - Recall that in multitasking, a process is given a time limit (the time slice or time quantum) to use the CPU. So if the executing process has exceeded its time limit, the CPU scheduler will assign the CPU to another process even though the running process has not yet completed its current CPU burst.

CPU SCHEDULING ALGORITHMS

• There are several techniques or algorithms the operating system, specifically its CPU scheduler, may use in selecting which process in the ready queue will be executed next by the CPU.

• A good scheduler should optimize the following performance criteria:

  1. CPU Utilization. The CPU scheduler should optimize CPU efficiency by maximizing CPU utilization. This means that it must keep the CPU as busy as possible.

  2. Throughput. The CPU scheduler should optimize throughput by maximizing the amount of work done by the CPU.

     Throughput may be measured in units such as the number of processes completed per minute or the number of instructions executed per second.

  3. Turnaround Time. The CPU scheduler should optimize turnaround time by minimizing the time it takes to execute a process until completion.

     It measures the time between the point a process is submitted and the time it finishes executing.

     It can be computed by getting the sum of: (1) the time the process spent waiting in the job queue, (2) the time spent waiting in the ready queue, (3) the time spent executing in the CPU, and (4) the time waiting for I/O operations. (A small worked example follows below.)
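
A small worked example of the sum just described. The figures are made up purely for illustration; only the formula comes from the slides.

    time_in_job_queue   = 2   # ms waiting to be loaded into main memory
    time_in_ready_queue = 5   # ms waiting for the CPU
    cpu_time            = 4   # ms executing on the CPU
    io_time             = 6   # ms waiting for I/O operations to complete

    turnaround_time = time_in_job_queue + time_in_ready_queue + cpu_time + io_time
    print(turnaround_time)    # 17 ms from submission to completion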


  4. Response Time. The CPU scheduler should optimize response time by minimizing the time between the submission of a request and the start of the system's first response (not the final output).

     Response time is important in interactive systems.

  5. Waiting Time. The CPU scheduler should optimize waiting time by minimizing the total time a process has to spend inside the ready queue waiting to be executed by the CPU.

• Many consider waiting time a very good measurement of the performance of a scheduling algorithm. Since CPU scheduling determines the order of process execution, it has a direct impact on how long a process has to wait before the CPU executes it.

• Scheduling algorithms do not affect how fast the CPU can execute the instructions of a process or how long it takes for an I/O device to complete a requested operation.


FIRST-COME, FIRST-SERVED ALGORITHM

• In the First-Come, First-Served (FCFS) Algorithm, the process that enters the ready queue first gets to execute at the CPU first.

• In other words, the ready queue is treated as a first-in, first-out (FIFO) queue.

• The FCFS algorithm is also called the First-In, First-Out (FIFO) Algorithm.

• Processes enter the ready queue at the rear or tail, and the CPU scheduler selects the process at the front or head of the queue as the next one to be executed by the CPU.

• Take note that FCFS is a non-preemptive scheduling algorithm.

• Given the following set of processes with their respective arrival times at the ready queue and the lengths of their next CPU burst (all arrival times and CPU burst times are in milliseconds):

    Process ID   Arrival Time   CPU Burst
    A            0              8
    B            3              4
    C            4              5
    D            6              3
    E            10             2

  In solving scheduling problems, draw a Gantt chart showing the times each process starts and ends executing.


• At time t = 0, process A arrives at the ready queue. Since the CPU is idle and process A is the only one inside the queue, the CPU scheduler will select this process to execute first. So process A starts executing at t = 0 and, with a CPU burst of 8, ends its execution at t = 8.

• By the time process A finishes executing at t = 8, the processes inside the ready queue are B, C, and D, in that order. Take note that process E is not yet in the queue, since it arrives at t = 10. So the CPU scheduler selects process B, which executes from t = 8 to t = 12.

• At t = 12, the processes inside the ready queue are C, D, and E, in that order. The CPU scheduler selects process C (with a CPU burst of 5), which executes from t = 12 to t = 17.

• At t = 17, the processes remaining inside the ready queue are D and E, in that order. The CPU scheduler selects process D (with a CPU burst of 3), which executes from t = 17 to t = 20.

• At t = 20, the only process inside the ready queue is E. The CPU scheduler selects process E (with a CPU burst of 2), which executes from t = 20 to t = 22.

• The final Gantt chart is:

    A [0-8] | B [8-12] | C [12-17] | D [17-20] | E [20-22]

• Compute the time spent by each process in the ready queue (waiting time). The waiting time for each process is computed as the time the process left the ready queue to execute minus the time the process entered the queue:

    WTA = 0 - 0 = 0 ms
    WTB = 8 - 3 = 5 ms
    WTC = 12 - 4 = 8 ms
    WTD = 17 - 6 = 11 ms
    WTE = 20 - 10 = 10 ms

  The average waiting time is (0 + 5 + 8 + 11 + 10)/5 = 34/5 = 6.8 ms.
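
A minimal sketch that replays the FCFS example above: each process runs to completion in order of arrival, and waiting time = start time minus arrival time. The variable names are assumptions; the process data and results come from the slides.

    processes = [            # (pid, arrival time, CPU burst) in ms
        ("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2),
    ]

    time = 0
    waiting = {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)       # CPU may sit idle until the process arrives
        waiting[pid] = start - arrival   # time spent in the ready queue
        time = start + burst             # non-preemptive: the whole burst runs
        print(f"{pid}: runs {start}-{time}, waiting time = {waiting[pid]} ms")

    print("Average waiting time =", sum(waiting.values()) / len(waiting), "ms")  # 6.8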


• Take note that in the FCFS algorithm, if processes with longer CPU bursts arrive at the ready queue ahead of the shorter processes, then the waiting times of the shorter processes will be large. This will increase the average waiting time for the system.

• The main advantage of the FCFS scheduling algorithm is its simplicity. Because of this, the CPU scheduler can execute very fast, since selecting the next process to allocate the CPU to is straightforward.

• The main disadvantage of FCFS is that it generally yields a high average waiting time, since it favors CPU-bound processes (processes with long CPU bursts).

  CPU-bound processes tend to enter the ready queue often, since they rarely do any I/O operation, while I/O-bound processes spend most of their time in the Blocked state.

  So when I/O-bound processes enter the ready queue, most of the CPU-bound processes are already there. And since CPU-bound processes have long CPU burst times, the I/O-bound processes will have to wait a relatively long period of time in the ready queue.

• This is known as the convoy effect, where several CPU-bound processes in the ready queue can cause I/O-bound processes to stack up at the rear of the queue.

  Since many I/O-bound processes are interactive processes, the resulting response time will therefore be unacceptable. This, plus the fact that FCFS is non-preemptive, makes it unsuitable for time-sharing and interactive computer systems.


SHORTEST PROCESS FIRST

• The Shortest Process First Algorithm (SPF) is also called Shortest Job First (SJF).

• In SPF, the process with the shortest CPU burst time is the one that will be executed first.

• In FCFS, long waiting times are to be expected if processes with large CPU burst times are executed ahead of the shorter processes. SPF offers a solution to this.

• SPF actually implements a priority scheme wherein I/O-bound processes, or processes with short CPU bursts, are given a higher priority than CPU-bound processes.

• In cases where there are two or more processes with the same CPU burst time, the FCFS algorithm may serve as the tie-breaker. In other words, the process that entered the queue first is selected by the CPU scheduler.

• Like FCFS, SPF is a non-preemptive scheduling algorithm.


• Using the same set of processes as in the FCFS example:

• At time t = 0, process A arrives at the ready queue. Since the CPU is idle and process A is the only one inside the queue, the CPU scheduler selects it to execute first. So process A executes from t = 0 to t = 8.

• At t = 8, the processes inside the ready queue are B, C, and D, in that order. Take note that process E has not yet arrived. The CPU scheduler selects process D (with the shortest CPU burst, 3), which executes from t = 8 to t = 11.

• At t = 11, the processes inside the ready queue are B, C, and E, in that order. The CPU scheduler selects process E (with a CPU burst of 2), which executes from t = 11 to t = 13.

• At t = 13, the processes remaining inside the ready queue are B and C, in that order. The CPU scheduler selects process B (with a CPU burst of 4), which executes from t = 13 to t = 17.

• At t = 17, the only process inside the ready queue is C. The CPU scheduler selects process C (with a CPU burst of 5), which executes from t = 17 to t = 22.

• The final Gantt chart is:

    A [0-8] | D [8-11] | E [11-13] | B [13-17] | C [17-22]

• As in FCFS, the waiting time for each process is computed as the time the process left the ready queue to execute minus the time the process entered the queue:

    WTA = 0 - 0 = 0 ms
    WTB = 13 - 3 = 10 ms
    WTC = 17 - 4 = 13 ms
    WTD = 8 - 6 = 2 ms
    WTE = 11 - 10 = 1 ms

  The average waiting time is (0 + 10 + 13 + 2 + 1)/5 = 26/5 = 5.2 ms.
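
A minimal non-preemptive SPF/SJF sketch for the example above: at every scheduling decision it picks, among the processes already in the ready queue, the one with the shortest next CPU burst, with FCFS (earlier arrival) breaking ties. Variable names are assumptions; data and results come from the slides.

    processes = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]

    time, waiting, pending = 0, {}, list(processes)
    while pending:
        # Processes that have already arrived; if none, wait for the earliest arrival.
        ready = [p for p in pending if p[1] <= time] or [min(pending, key=lambda p: p[1])]
        pid, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))   # shortest burst, then FCFS
        pending.remove((pid, arrival, burst))
        start = max(time, arrival)
        waiting[pid] = start - arrival
        time = start + burst
        print(f"{pid}: runs {start}-{time}, waiting time = {waiting[pid]} ms")

    print("Average waiting time =", sum(waiting.values()) / len(waiting), "ms")  # 5.2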


• The SPF scheduling algorithm is more optimal than the FCFS scheduling algorithm in terms of waiting time.

  This is because shorter processes are executed ahead of the longer processes, so their waiting times are relatively shorter compared to FCFS.

  So, unlike FCFS, SPF favors I/O-bound processes. Since shorter processes tend to finish execution right away, there will be fewer processes inside the ready queue. All of these contribute to making the average waiting time smaller than that of the FCFS algorithm.

• Its disadvantage is that it is practically impossible to determine the exact CPU burst of each process.

  The only solution is to try to predict the next CPU burst of a process based on its past CPU bursts. This can be done by using exponential averaging (a sketch follows below).

  However, there may be a problem in cases where there are no historical data to determine the behavior pattern of a process.
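
A common formulation of exponential averaging, sketched here because the slides mention the technique but not a formula: the predicted next burst is a weighted mix of the last actual burst and the previous prediction. The alpha value, initial guess, and sample bursts are assumptions for illustration.

    def predict_next_burst(actual_last_burst, previous_prediction, alpha=0.5):
        """tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)"""
        return alpha * actual_last_burst + (1 - alpha) * previous_prediction

    prediction = 10                      # initial guess when there is no history yet
    for burst in [6, 4, 6, 4]:           # observed CPU bursts of the process
        prediction = predict_next_burst(burst, prediction)
        print(round(prediction, 2))      # 8.0, 6.0, 6.0, 5.0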


• Another disadvantage arises if there is a steady stream of incoming I/O-bound processes.

• Because of this, it may take a long time for CPU-bound processes to be executed because of their lower priority.

  And when a process with a very large CPU burst finally gets to execute, it will monopolize the CPU, since SPF is non-preemptive. So in cases like this, the response time of the system will suffer.

  So SPF may not be suitable for time-sharing or interactive systems, since there is no guarantee of always having fast response times.


SHORTEST REMAINING TIME FIRST ALGORITHM

• The Shortest Remaining Time First Algorithm (SRTF) is the preemptive version of SPF.

• If a new process arrives in the ready queue and it has a shorter CPU burst time than the remaining CPU burst time of the currently executing process, the CPU scheduler will preempt the running process in favor of the newly arrived process.

• The preempted process will have to go back to the ready queue (entering at the rear or tail of the queue).


• Using the same set of processes as in the previous examples:

• At time t = 0, process A arrives at the ready queue. Since the CPU is idle and process A is the only one inside the queue, the CPU scheduler selects it and A starts executing at t = 0. However, it cannot be assumed that it will end executing at t = 8, since a process with a shorter CPU burst might arrive and preempt it.

• At t = 3, process B arrives at the ready queue with a CPU burst of 4. At this point, process A still has a remaining CPU burst of 5. Since B is the shorter of the two processes, process B preempts A. Process A goes back to the ready queue and B starts executing at the CPU. Like before, it cannot be assumed that B will execute until t = 7, since a shorter process might arrive later.

• At t = 4, process C arrives with a CPU burst of 5. At this point, process B still has a remaining CPU burst of 3. Since B is still the shorter of the two, B continues to execute and C remains inside the ready queue.

• At t = 6, process D arrives with a CPU burst of 3. At this time, process B still has a remaining CPU burst of 1, so B continues to execute and D remains inside the ready queue. Process B executes until completion at t = 7.

• At t = 7, the processes inside the ready queue are A (remaining CPU burst of 5), C (CPU burst of 5), and D (CPU burst of 3), in that order. The CPU scheduler selects process D, which starts executing at t = 7. Again, it cannot be assumed that D will execute until t = 10, since a shorter process might arrive later.

• At t = 10, process E arrives at the ready queue. However, it will not have any effect, since process D also finishes executing at that time.

• At t = 10, the processes inside the ready queue are A (5), C (5), and E (CPU burst of 2), in that order. The CPU scheduler selects process E, which executes from t = 10 to t = 12.

• At t = 12, the processes inside the ready queue are A (5) and C (5), in that order. Both have the same remaining CPU burst. Since process A is ahead of process C in the queue, the CPU scheduler selects process A, which executes from t = 12 to t = 17.

• At t = 17, the only process inside the ready queue is C (CPU burst of 5). It executes from t = 17 to t = 22.

• The final Gantt chart is:

    A [0-3] | B [3-7] | D [7-10] | E [10-12] | A [12-17] | C [17-22]

• Take note that in a preemptive scheduling algorithm, some processes may enter the ready queue more than once. In this example, process A entered the ready queue twice: the first upon arrival and the second when it was preempted by process B. This should be considered in waiting time computations.

• The waiting times are:

    WTA = (0 - 0) + (12 - 3) = 9 ms
    WTB = 3 - 3 = 0 ms
    WTC = 17 - 4 = 13 ms
    WTD = 7 - 6 = 1 ms
    WTE = 10 - 10 = 0 ms

  The average waiting time is (9 + 0 + 13 + 1 + 0)/5 = 23/5 = 4.6 ms.
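
A minimal SRTF sketch for the example above: it advances the clock one millisecond at a time and always runs the arrived process with the least remaining CPU burst, preempting whenever a shorter one shows up (ties go to the earlier arrival, which matches the walkthrough). Names are assumptions; data and results come from the slides.

    processes = {"A": (0, 8), "B": (3, 4), "C": (4, 5), "D": (6, 3), "E": (10, 2)}

    remaining = {pid: burst for pid, (arrival, burst) in processes.items()}
    finish, time = {}, 0
    while remaining:
        ready = [p for p in remaining if processes[p][0] <= time]
        if not ready:                       # CPU idle until the next arrival
            time += 1
            continue
        pid = min(ready, key=lambda p: (remaining[p], processes[p][0]))
        remaining[pid] -= 1                 # run the chosen process for 1 ms
        time += 1
        if remaining[pid] == 0:
            finish[pid] = time
            del remaining[pid]

    for pid, (arrival, burst) in processes.items():
        wt = finish[pid] - arrival - burst  # waiting = turnaround - CPU time
        print(f"{pid}: finishes at {finish[pid]}, waiting time = {wt} ms")
    # Average waiting time = (9 + 0 + 13 + 1 + 0) / 5 = 4.6 ms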


• The main advantage of SRTF is that it is the most optimal of these CPU scheduling algorithms in terms of average waiting time.

• Processes with very large CPU burst times may no longer monopolize the CPU, since they will be preempted once a short process enters the ready queue.

• This makes this scheduling algorithm ideal for interactive systems.

• Aside from the problem of determining the exact CPU burst time of a process, there is now the additional burden of having to track the remaining CPU burst time of running processes. This imposes additional overhead on the CPU scheduler.

  And since it is a preemptive scheduling algorithm, it is expected that SRTF will experience more context switches compared to the two previous algorithms.


ROUND ROBIN

• The Round Robin Algorithm (RR) is the preemptive version of the FCFS algorithm.

• In RR, processes are also selected on a first-come, first-served basis. However, each process is given a time limit to execute at the CPU.

• This time limit is called the time slice or time quantum and is usually a value between 10 and 100 milliseconds.

• Once the time slice of a running process expires, the process goes back to the ready queue, entering at the tail or rear of the queue.

• The CPU scheduler then selects the process at the front of the queue to execute next.


• Using the same set of processes as in the previous examples, and assuming a time quantum of 3 ms:

• At time t = 0, process A arrives at the ready queue. Since the CPU is idle and process A is the only one inside the queue, the CPU scheduler selects it to execute first, so A starts executing at t = 0. At t = 3, process B arrives at the ready queue; at the same time, A has consumed its first time slice, so it stops executing and goes back to the ready queue.

• At t = 3, the processes inside the ready queue are B (CPU burst of 4) and A (remaining CPU burst of 5), in that order, so the CPU scheduler selects process B, which starts executing at t = 3. At t = 4, process C arrives at the ready queue. At t = 6, process D arrives; at the same time, B has consumed its first time slice, so it stops executing and goes back to the ready queue.

• At t = 6, the processes inside the ready queue are A (5), C (5), D (3), and B (1), in that order. The CPU scheduler selects process A, which executes from t = 6 to t = 9 (when its second time slice expires) and then goes back to the ready queue.

• At t = 9, the processes inside the ready queue are C (5), D (3), B (1), and A (2), in that order. The CPU scheduler selects process C, which starts executing at t = 9. At t = 10, process E arrives at the ready queue. At t = 12, C has consumed its first time slice, so it stops executing and goes back to the ready queue.

• At t = 12, the processes inside the ready queue are D (3), B (1), A (2), E (2), and C (2), in that order. The CPU scheduler selects process D, which starts executing at t = 12 and terminates at t = 15. Take note that process D terminates at the exact time its time slice expires, so it no longer goes back to the ready queue.

• At t = 15, the processes inside the ready queue are B (1), A (2), E (2), and C (2), in that order. The CPU scheduler selects process B, which starts executing at t = 15 and terminates at t = 16. Take note that process B terminates even before it has consumed its last time slice.

• At t = 16, the processes inside the ready queue are A (2), E (2), and C (2), in that order. The CPU scheduler selects process A, which executes from t = 16 and terminates at t = 18.

• At t = 18, the processes inside the ready queue are E (2) and C (2), in that order. The CPU scheduler selects process E, which executes from t = 18 and terminates at t = 20.

• At t = 20, the only process inside the ready queue is C (2). It executes from t = 20 and terminates at t = 22.

• The final Gantt chart is:

    A [0-3] | B [3-6] | A [6-9] | C [9-12] | D [12-15] | B [15-16] | A [16-18] | E [18-20] | C [20-22]

• The following are the waiting times for each of the five processes:

    WTA = (0 - 0) + (6 - 3) + (16 - 9) = 10 ms
    WTB = (3 - 3) + (15 - 6) = 9 ms
    WTC = (9 - 4) + (20 - 12) = 13 ms
    WTD = 12 - 6 = 6 ms
    WTE = 18 - 10 = 8 ms

  The average waiting time is (10 + 9 + 13 + 6 + 8)/5 = 46/5 = 9.2 ms.
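
A minimal Round Robin sketch for the example above (quantum = 3 ms). As in the walkthrough, a process that arrives at the same instant another process's slice expires is queued ahead of the preempted process. Names and the admit() helper are assumptions; data and results come from the slides.

    from collections import deque

    processes = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
    quantum = 3

    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    remaining = {pid: burst for pid, _, burst in processes}
    ready, time, finish = deque(), 0, {}

    def admit(now):
        """Move every process that has arrived by `now` into the ready queue."""
        while arrivals and arrivals[0][1] <= now:
            ready.append(arrivals.popleft()[0])

    admit(0)
    while ready or arrivals:
        if not ready:                       # CPU idle: jump to the next arrival
            time = arrivals[0][1]
            admit(time)
        pid = ready.popleft()
        run = min(quantum, remaining[pid])  # run one time slice (or less, if it finishes)
        time += run
        remaining[pid] -= run
        admit(time)                         # new arrivals first, then the preempted process
        if remaining[pid] == 0:
            finish[pid] = time
        else:
            ready.append(pid)

    for pid, arrival, burst in processes:
        print(f"{pid}: finishes at {finish[pid]}, waiting = {finish[pid] - arrival - burst} ms")
    # Average waiting time = (10 + 9 + 13 + 6 + 8) / 5 = 9.2 ms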


• The main advantage of RR is that, of the algorithms discussed here, it is the only one that guarantees each process its fair share of the CPU.

• RR was primarily designed for multitasking systems, so response time is its priority. Waiting time is considered less important.

• Its main disadvantage is that if the length of the time quantum is chosen poorly, the performance of RR is greatly affected.

  If the time slice is too small, there will be too many context switches. If it is too large, its performance becomes similar to that of the FCFS algorithm.

• In general, waiting time improves if most processes can finish their next CPU burst within one time slice.

• So, as a general rule, the time slice or time quantum should be greater than the typical CPU burst time.

• Specifically, the chosen time quantum should be longer than about 80% of the CPU burst times; in other words, roughly 80% of CPU bursts should complete within a single time slice.


PRIORITY SCHEDULING

• In priority scheduling, each process is assigned a priority, and the CPU scheduler selects the process in the ready queue with the highest priority to execute next.

• Generally, the priority of a process is expressed as an integer. The lower the value of the integer, the higher the priority of the process.

• In FCFS, the priority of a process is based on the time of its entry into the ready queue, while in SPF, the priority of a process is based on the length of its next CPU burst.

• Priority scheduling may be non-preemptive or preemptive.

• In preemptive priority scheduling, a running process may be preempted and sent back to the ready queue if a higher-priority process becomes ready to execute.

• In cases where there are two processes with the same priority, FCFS can be used as the tie-breaker.


PRIORITY SCHEDULING (NON-PREEMPTIVE)

• Given the same set of processes, now with an assigned priority (lower value = higher priority):

    Process ID   Arrival Time   CPU Burst   Priority
    A            0              8           4
    B            3              4           3
    C            4              5           1
    D            6              3           2
    E            10             2           2

• At time t = 0, process A arrives at the ready queue. Since the CPU is idle and process A is the only one inside the queue, the CPU scheduler selects it to execute first. So process A executes from t = 0 to t = 8.

• At t = 8, the processes inside the ready queue are B, C, and D, in that order. Process C has the highest priority among the three, so the CPU scheduler selects it. It executes from t = 8 to t = 13.

• At t = 13, the processes inside the ready queue are B, D, and E, in that order. Processes D and E both have a priority of 2, while B has a priority of 3, so it is a choice between D and E. Since D entered the ready queue first, the CPU scheduler selects it. It executes from t = 13 to t = 16.

• At t = 16, the processes inside the ready queue are B and E, in that order. Since E has the higher priority, the CPU scheduler selects it. It executes from t = 16 to t = 18.

• At t = 18, the only process inside the ready queue is B. The CPU scheduler selects it, and it executes from t = 18 to t = 22.

• The final Gantt chart is:

    A [0-8] | C [8-13] | D [13-16] | E [16-18] | B [18-22]

• The following are the waiting times for each of the five processes:

    WTA = 0 - 0 = 0 ms
    WTB = 18 - 3 = 15 ms
    WTC = 8 - 4 = 4 ms
    WTD = 13 - 6 = 7 ms
    WTE = 16 - 10 = 6 ms

  The average waiting time is (0 + 15 + 4 + 7 + 6)/5 = 32/5 = 6.4 ms.
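
A minimal non-preemptive priority scheduling sketch for the example above: lower number = higher priority, and FCFS (arrival time) breaks ties. Names are assumptions; data and results come from the slides.

    processes = [  # (pid, arrival, burst, priority)
        ("A", 0, 8, 4), ("B", 3, 4, 3), ("C", 4, 5, 1), ("D", 6, 3, 2), ("E", 10, 2, 2),
    ]

    time, waiting, pending = 0, {}, list(processes)
    while pending:
        # Processes that have already arrived; if none, wait for the earliest arrival.
        ready = [p for p in pending if p[1] <= time] or [min(pending, key=lambda p: p[1])]
        pid, arrival, burst, prio = min(ready, key=lambda p: (p[3], p[1]))  # priority, then FCFS
        pending.remove((pid, arrival, burst, prio))
        start = max(time, arrival)
        waiting[pid] = start - arrival
        time = start + burst
        print(f"{pid}: runs {start}-{time}, waiting time = {waiting[pid]} ms")

    print("Average waiting time =", sum(waiting.values()) / len(waiting), "ms")  # 6.4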


PRIORITY SCHEDULING (PREEMPTIVE)

Process Arrival CPU • At time t = 0, process A arrives at the ready queue.


Priority
ID Time Burst
Since the CPU is idle and process A is the only one
A 0 8 4 inside the queue, the CPU scheduler will select this
process to execute first. So process A will start
B 3 4 3 executing at t = 0.
C 4 5 1
However, it cannot be assumed it will end executing at t
D 6 3 2 = 8 since a process with a higher priority might arrive
and will preempt it. The Gantt chart should show
E 10 2 2 process A starting at t = 0 but should not indicate when
it will end.

The Gantt chart will now be:

Processes and Process Management


PRIORITY SCHEDULING (PREEMPTIVE)

Process Arrival CPU


Priority
ID Time Burst • At t = 3, process B arrives at the ready queue.
Since B has a higher priority than A, process B
A 0 5 4 will preempt A. Process A goes back to the
B 3 4 3 ready queue and B starts executing at the CPU.
C 4 5 1
Like before, it cannot be assumed that B will
D 6 3 2 execute until t = 7 since a higher-priority
E 10 2 2 process might arrive later.

The Gantt chart will now be:

A B

0 3

Processes and Process Management


PRIORITY SCHEDULING (PREEMPTIVE)

Process ID   Arrival Time   CPU Burst (remaining)   Priority
A            0              5                       4
B            3              4                       3
C            4              5                       1
D            6              3                       2
E            10             2                       2

• At t = 4, process C arrives at the ready queue. Since C has a higher priority than B, process C will preempt B. Process B goes back to the ready queue and C starts executing on the CPU.

Process C has the highest possible priority (1), so it is safe to assume that no other process can preempt it later. Therefore, it will finish its execution at t = 9.

The Gantt chart will now be:

| A | B | C |
0   3   4   9

Processes and Process Management


PRIORITY SCHEDULING (PREEMPTIVE)

Process ID   Arrival Time   CPU Burst (remaining)   Priority
A            0              5                       4
B            3              3                       3
C            4              0                       1
D            6              3                       2
E            10             2                       2

• At t = 9, the processes inside the ready queue are A (CPU burst of 5), B (CPU burst of 3), and D (CPU burst of 3), which arrived at t = 6 while process C was executing.

D has the highest priority among the three, so the CPU scheduler will select it to execute next. It will start executing at t = 9.

It cannot be assumed that it will finish executing at t = 12 since a higher-priority process might arrive later.

The Gantt chart will now be:

| A | B | C | D
0   3   4   9

Processes and Process Management


PRIORITY SCHEDULING (PREEMPTIVE)

Process ID   Arrival Time   CPU Burst (remaining)   Priority
A            0              5                       4
B            3              3                       3
C            4              0                       1
D            6              3                       2
E            10             2                       2

• At t = 10, process E arrives at the ready queue.

Since it has the same priority as process D, it cannot preempt the currently executing process.

So D continues to execute until it finishes at t = 12.

The Gantt chart will now be:

| A | B | C | D |
0   3   4   9   12

Processes and Process Management


PRIORITY SCHEDULING (PREEMPTIVE)

Process ID   Arrival Time   CPU Burst (remaining)   Priority
A            0              5                       4
B            3              3                       3
C            4              0                       1
D            6              0                       2
E            10             2                       2

• The remaining processes, which are A (CPU burst of 5), B (CPU burst of 3), and E (CPU burst of 2), will be executed according to their priorities.

So process E executes from t = 12 to t = 14, process B from t = 14 to t = 17, and last is process A from t = 17 to t = 22.

The final Gantt chart will be:

| A | B | C | D | E | B | A |
0   3   4   9   12  14  17  22

Processes and Process Management


PRIORITY SCHEDULING (PREEMPTIVE)

| A | B | C | D | E | B | A |
0   3   4   9   12  14  17  22

Process ID   Arrival Time   CPU Burst   Priority
A            0              8           4
B            3              4           3
C            4              5           1
D            6              3           2
E            10             2           2

• The following are the waiting times for each of the five processes:

WTA = (0 – 0) + (17 – 3) = 14 ms
WTB = (3 – 3) + (14 – 4) = 10 ms
WTC = (4 – 4) = 0 ms
WTD = (9 – 6) = 3 ms
WTE = (12 – 10) = 2 ms

The average waiting time is (14 + 10 + 0 + 3 + 2)/5 = 29/5 = 5.8 ms

Processes and Process Management
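The preemptive variant can be sketched with a millisecond-by-millisecond simulation (again illustrative, not part of the original material): at every tick the highest-priority arrived process runs, so a newly arrived higher-priority process preempts the current one, while a tie leaves the running process in place.

```python
# Minimal time-stepped sketch of preemptive priority scheduling for the
# example process set. Lower priority number = higher priority; the
# currently running process keeps the CPU when priorities are equal.
def priority_preemptive(processes):
    """processes: list of (pid, arrival, burst, priority) tuples."""
    arrival = {p[0]: p[1] for p in processes}
    burst   = {p[0]: p[2] for p in processes}
    prio    = {p[0]: p[3] for p in processes}
    remaining = dict(burst)
    time, current, completion, timeline = 0, None, {}, []
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= time]
        if not ready:                          # CPU idle until the next arrival
            time += 1
            continue
        # Highest priority wins; the running process wins ties.
        current = min(ready, key=lambda pid: (prio[pid], pid != current))
        timeline.append(current)               # who holds the CPU in [time, time+1)
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
            current = None
    waiting = {pid: completion[pid] - arrival[pid] - burst[pid] for pid in burst}
    return timeline, waiting

procs = [("A", 0, 8, 4), ("B", 3, 4, 3), ("C", 4, 5, 1),
         ("D", 6, 3, 2), ("E", 10, 2, 2)]
timeline, waiting = priority_preemptive(procs)
print("".join(timeline))                       # AAABCCCCCDDDEEBBBAAAAA
print(waiting)                                 # {'A': 14, 'B': 10, 'C': 0, 'D': 3, 'E': 2}
print(sum(waiting.values()) / len(waiting))    # 5.8
```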


PRIORITY SCHEDULING

• The main advantage of priority scheduling is that the waiting times of high-priority processes are minimized.

This can be seen in the example given, where the higher-priority processes (specifically processes C, D, and E) have significantly smaller waiting times compared to the lower-priority processes.

Processes and Process Management


PRIORITY SCHEDULING

• The main disadvantage of this scheduling algorithm is that processes with very low priorities may be denied the use of the CPU, especially if high-priority processes keep entering the ready queue.

• This is called starvation, where a low-priority process may wait indefinitely in the ready queue.

Processes and Process Management


PRIORITY SCHEDULING

• To solve this problem, the operating system can use aging, where the priority of a process gradually increases the longer it stays in the ready queue.

• Most modern operating systems use some form of priority-based scheduling wherein real-time processes are assigned the highest priority.

Processes and Process Management
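One way to picture aging is a periodic scan of the ready queue that boosts any process that has been waiting too long. The sketch below is purely illustrative; the threshold value, the dictionary layout, and the function name are assumptions rather than the behavior of any particular operating system.

```python
# Hypothetical aging sketch: any ready process that has waited longer than
# WAIT_THRESHOLD ms gets its priority number decreased by one (i.e., its
# priority is raised), down to a floor of 1 (the highest priority).
WAIT_THRESHOLD = 500   # illustrative tuning constant

def age_ready_queue(ready_queue, now):
    """ready_queue: list of dicts with 'priority' and 'enqueued_at' (ms) keys."""
    for proc in ready_queue:
        if now - proc["enqueued_at"] > WAIT_THRESHOLD and proc["priority"] > 1:
            proc["priority"] -= 1          # gradually raise the priority
```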


MULTILEVEL FEEDBACK QUEUE

• It has been mentioned in previous discussions that, in order to optimize response time and utilization, processes with short CPU burst times should be given priority.

• This is the main objective of the SPF and SRTF scheduling algorithms.

• The operating system may use exponential averaging to predict the CPU burst time of each process.

• However, exponential averaging requires the presence of historical data to be able to do this. In the absence of such data, multilevel feedback queues may be used by the operating system in order to determine the behavior pattern of a process and thus schedule its execution appropriately.

Processes and Process Management


MULTILEVEL FEEDBACK QUEUE
[Figure: multilevel feedback queue network. All processes enter the queuing network at the rear of the highest-priority queue Q0 (FCFS, time slice = 10 ms). A process that finishes executing or requests an I/O operation within the given time slice leaves the queuing network; a process that does not finish within the time slice moves to the next lower-priority queue: Q1 (FCFS, time slice = 20 ms), Q2 (FCFS, time slice = 30 ms), ..., down to the lowest queue Qn (RR).]

• This scheduling algorithm utilizes several ready queues wherein each queue has a different priority.

• Queue Q0 has the highest priority while Qn has the lowest.

• Q0 up to Qn-1 follow the FCFS algorithm. However, a time quantum is assigned to each queue (take note that higher-priority queues have a smaller time quantum).

• If a process does not finish executing within the given time quantum of a queue, it is moved to the next lower-priority queue.

• The lowest-priority queue follows the RR scheduling algorithm.

Processes and Process Management


MULTILEVEL FEEDBACK QUEUE
• The following describes how the multilevel feedback queue algorithm works:

a. All processes enter the queuing network at the rear of the highest-priority queue, which is Q0.

b. If a process completes its execution or requests an I/O operation within the given time quantum of Q0 (10 ms), it leaves the network.

If not, it is moved to the next lower-priority queue (Q1). In other words, processes with longer CPU bursts are now given lower priorities.

A process in a lower-priority queue cannot execute unless all higher-priority queues are empty. And a running process may be preempted by a process arriving at a higher-priority queue.

Processes and Process Management


MULTILEVEL FEEDBACK QUEUE
c. Once Q0 is empty, the operating system will start scheduling processes inside Q1.

The time quantum has been increased to 20 ms to give the processes in Q1 a better chance to complete their execution.

So if a process finishes its execution or performs an I/O operation within 20 ms, it leaves the queuing network.

If not, it is again moved to a lower-priority queue (Q2).

Processes and Process Management


MULTILEVEL FEEDBACK QUEUE
• This repeats until a process reaches the lowest-priority queue (Qn).

This will only happen to processes with extremely long CPU burst times (the CPU-bound processes).

These processes now compete for the use of the CPU using the RR algorithm.

Processes and Process Management


MULTILEVEL FEEDBACK QUEUE
• Multilevel feedback queues give priority to I/O-bound processes.

If the time quantum of Q0 is chosen such that the majority of short processes finish their next CPU burst right away, then they leave the queuing network immediately.

CPU-bound processes tend to stay in the queuing network and move down to the lower-priority queues.

So this scheduling algorithm has the capability of segregating the CPU-bound and I/O-bound processes without knowing anything about the behavior of the processes.
Processes and Process Management
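The behavior described above can be sketched with a small simulation. The queue count, time slices, and process model below are assumptions for illustration only: every process is treated as CPU-only and already in Q0 at time 0, and preemption by newly arriving processes is not modeled.

```python
from collections import deque

# Illustrative multilevel feedback queue: three FCFS levels with growing time
# slices and a round-robin tail queue. A process that uses up its whole slice
# is demoted; one that finishes within the slice leaves the network.
TIME_SLICES = [10, 20, 30]    # quanta for Q0, Q1, Q2 (ms)
RR_QUANTUM = 40               # quantum for the lowest queue Qn (RR)

def mlfq(bursts):
    """bursts: dict of pid -> total CPU burst (ms). Prints the execution trace."""
    levels = [deque() for _ in TIME_SLICES] + [deque()]           # last deque is Qn
    for pid, burst in bursts.items():
        levels[0].append((pid, burst))                            # everything enters Q0
    time = 0
    while any(levels):
        q = next(i for i, level in enumerate(levels) if level)    # highest non-empty queue
        pid, remaining = levels[q].popleft()
        quantum = TIME_SLICES[q] if q < len(TIME_SLICES) else RR_QUANTUM
        run = min(quantum, remaining)
        print(f"t={time:3}: {pid} runs {run:2} ms in Q{q}")
        time += run
        remaining -= run
        if remaining > 0:                          # used the whole slice: demote
            levels[min(q + 1, len(levels) - 1)].append((pid, remaining))

mlfq({"P1": 5, "P2": 45, "P3": 120})   # P1 leaves from Q0, P2 from Q2, P3 reaches Qn
```
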
CONCURRENT PROCESSING

• Concurrency is defined as the execution of two or more independent processes within the same time period.

• This can be achieved either by interleaving their execution (multiprogramming or multitasking on a single CPU) or by simultaneous execution if there are several processing units.

Processes and Process Management


CONCURRENT PROCESSING
• The operating system may be executing several processes concurrently.

• These processes may also create their own processes by using the create-process system call.

• The requesting process is called the parent process and the newly created process is called the child process.

• Child processes may also create their own child processes.
Processes and Process Management
CONCURRENT PROCESSING
[Figure: process tree. A is the root; B and C are children of A; D is a child of B; E and F are children of C; G, H, and I are children of E.]

• Processes B and C are the children of process A.

• E and F are the children of process C.

• And processes G, H, and I are the children of E.

• A process may have only one parent (except the very first process, which has no parent) but it may have several children.

Processes and Process Management


CONCURRENT PROCESSING

• To gain a better understanding of parent-child processes, consider how UNIX implements process spawning:

1. A process can create a child process by executing the fork system call.

2. After invoking the fork system call, main memory space is allocated for the newly created child process. The contents of the memory space of the child process will be an exact copy of the memory space of the parent.

Both parent and child processes execute independently of each other. The parent process may opt to stop executing and wait for the child process to complete its execution. This is done by using the wait system call.

Processes and Process Management


CONCURRENT PROCESSING

3. At this point, both parent and child processes are executing the same program.

The child process can then use the exec system call to replace itself with a new program.

This is called overlaying (a process replacing itself with a new process).

After using the exec system call, the child process has supplanted itself with the code of the new program. It can then start executing this new program.

Processes and Process Management
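A minimal sketch of this fork, wait, and exec sequence, written in Python for a UNIX-like system (the program being overlaid, ls, is just an illustrative choice):

```python
import os
import sys

# Sketch of the UNIX fork/exec/wait pattern described above.
pid = os.fork()                        # create a child: an exact copy of the parent

if pid == 0:
    # Child process: overlay itself with a new program using exec.
    os.execvp("ls", ["ls", "-l"])      # does not return if the exec succeeds
    sys.exit(1)                        # reached only if exec failed
else:
    # Parent process: stop and wait for the child to finish, then continue.
    _, status = os.waitpid(pid, 0)
    print("child", pid, "finished with exit status", os.WEXITSTATUS(status))
```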


THREADS

• Processes normally have a single thread of execution, thereby limiting them to doing only one task at a time.

• So operating systems allow a process to cause the creation of other processes in order for it to concurrently perform several tasks. So a single running application is actually a set of several cooperating processes.

• However, having too many processes has its disadvantages. Creating processes requires a lot of resources, like main memory space.

Processes and Process Management


THREADS

• Multithreaded Processes

To allow a single process to do multiple tasks at the same time, modern operating systems allow a process to have multiple threads of execution.

Each thread runs a portion of the program and can run concurrently and independently of other threads running other portions of the same program.

For example, a web browser can have three threads executing concurrently: one thread is responsible for downloading data from a certain web site, another thread displays the graphics or text on the monitor, and another thread takes care of sound reproduction.

Processes and Process Management
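A minimal sketch of a multithreaded process using Python's threading module; the three task functions loosely mirror the browser example above and are purely illustrative.

```python
import threading
import time

# Three threads of one process doing different tasks concurrently.
def download():
    for i in range(3):
        time.sleep(0.1)
        print("downloading chunk", i)

def render():
    for i in range(3):
        time.sleep(0.1)
        print("rendering element", i)

def play_sound():
    for i in range(3):
        time.sleep(0.1)
        print("playing audio buffer", i)

# The main (primary) thread creates the other threads, then waits for them.
threads = [threading.Thread(target=task) for task in (download, render, play_sound)]
for t in threads:
    t.start()          # each thread runs its own code independently
for t in threads:
    t.join()           # the main thread waits for all of them to finish
print("all threads of this process have finished")
```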


THREADS

A process starts off with a single thread called the main thread or the primary thread.

As the process executes, it may create new threads to perform other tasks as required by the program.

Each thread will execute its own sequence of instructions, and each executes its own code independently of the others.

Since the process has several threads of execution, it is now called a multithreaded process.

Processes and Process Management


THREADS

Processes and threads are similar in the following ways:

1. Like processes, each thread has its own ID number (a unique number that helps the operating system distinguish one thread from another).

2. Like processes, each thread has its own state (whether new,
ready, running, blocked, or terminated).

3. Like processes, each thread has its own set of internal CPU
register values.

4. Like processes, each thread has its own scheduling priority.

Processes and Process Management


THREADS

5. Like processes, each thread has its own control block that contains information needed by the operating system about the thread. For threads, however, this is called a thread control block (TCB).

6. Like processes, threads compete with each other for CPU time, and the operating system schedules each thread for execution on the CPU. So the concept of context switching is also applicable to threads.

7. Child processes execute independently of each other and of the parent process that created them. Threads also execute independently of each other and of the main thread that created them.

Processes and Process Management


THREADS

Processes and threads are different in the following ways:

1. All processes within a computer system have their own main memory space. Threads within a certain process share the same main memory space. Specifically, the threads of a process share the code and data regions of the process to which they belong. Each thread, however, has its own stack.

2. There is less information associated with each thread. This is because most of the information associated with a process is automatically inherited by its threads (such as information about memory space), so there is no need to duplicate such information. As a consequence, a thread control block is much simpler than a process control block.

3. Threads belonging to the same process also share other resources allocated to the process, such as open files.

Processes and Process Management


THREADS

5. A process cannot simply control another process unless there is a parent-child relationship between them. Threads of the same process are considered peers or equals and can therefore exercise control over each other. The main thread can even be controlled by any thread it creates.

6. Changes made to the parent process will not affect its children. Changes to the main thread of a process, however, may affect how the other threads execute. This is because threads share the code and data regions of the memory space of the process. For example, terminating the main thread will automatically terminate the process, which, in turn, terminates all other threads belonging to that process.

Processes and Process Management


THREADS

Using threads has several advantages over using several processes in the execution of programs. Some of these are:

1. It is easier to create a thread than to create a process. This is because threads of the same process share main memory space, while a newly created process has to be allocated its own memory space. This means that a thread has less overhead compared to a process.

2. Context switching between threads is easier and faster than context switching between processes. This is because a TCB is simpler than a PCB, so there is less information to be stored and retrieved during context switches between threads.

3. Since threads share memory space, it is easier for them to communicate and share data with one another. Data written by a thread can easily be accessed by other threads of the same process since they use the same data region of the process' memory space.

4. Threads are easier to implement in terms of programming. A thread may be assigned to each routine or subtask in a program.

Processes and Process Management


THREADS

But threads also have their disadvantages. Worth mentioning are the following:

1. Since they share the same memory space, there may be problems when two or more threads try to update the same variable at the same time, or when a thread is updating data while another is currently reading it. This is called a race condition and may cause data inconsistency. Race conditions may be avoided provided there is proper synchronization among threads accessing the same data.

2. A malfunctioning thread can inadvertently terminate the process and all its threads (including the malfunctioning one). This is because threads of the same process are considered equals, so any thread can easily terminate the main thread of the process, causing the termination of the process itself.

Processes and Process Management
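A minimal sketch of disadvantage 1: two Python threads increment a shared counter, first without and then with a lock. The iteration count is an arbitrary illustration value, and whether lost updates actually show up in the unsynchronized run depends on the interpreter and on timing.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    """Repeatedly update the shared variable, optionally under a lock."""
    global counter
    for _ in range(times):
        if use_lock:
            with lock:            # proper synchronization: one thread at a time
                counter += 1
        else:
            counter += 1          # unsynchronized read-modify-write

def run(use_lock):
    global counter
    counter = 0
    workers = [threading.Thread(target=increment, args=(100_000, use_lock))
               for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter

print("without lock:", run(False))  # may be less than 200000 (lost updates)
print("with lock:   ", run(True))   # always 200000
```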
