Processes and Process Management
PROCESS MANAGEMENT
• So in multiprogramming/multitasking, the
operating system interleaves the execution of
several processes to maximize the utilization of the
CPU.
• The execution of a process will start first with a CPU burst, followed by an I/O burst, and then followed by another CPU burst, etc.

• During a CPU burst, the process is using the CPU to execute machine instructions. During an I/O burst, the process is waiting for its I/O operation (such as writing to the disk) to finish.

• This continues until a final CPU burst signifying process termination is encountered.

[Figure: an example execution trace]
    inc A, load B, add A, B, copy C, B          (CPU burst)
    write to disk {wait for I/O completion}     (I/O burst)
    store C, shift A, sub A, B                  (CPU burst)
    send to printer {wait for I/O completion}   (I/O burst)
    ...
CPU AND I/O BURSTS
[Figure: a process repeatedly enters the blocked state while waiting for each of its I/O bursts to complete]
• Process Descriptors
This means that the process has executed the exit system
call informing the operating system that it has completed
executing and may now be terminated.
3. Suspending a Process
4. Resuming a Process
Resuming a process
means that a
suspended process is
brought back into
main memory in
order for it to resume
execution.
[Figure: context switching between Process 1 (CPU burst, then I/O burst, ...) and Process 2 (CPU burst, ...)]

Since Process 1 started an I/O operation, the OS will now save its execution context in its PCB. It can then restore the context of Process 2 so that Process 2 can use the CPU. The time this takes is the context switch time.
• As requests for program execution from the user or users come in, the programs that were requested to be executed will be loaded into the job queue.

• The operating system must now perform job scheduling.
Therefore, in non-preemptive
scheduling, the CPU scheduler can
only assign the CPU to another process
if the process currently using the CPU
voluntarily gives it up.
Throughput may be
measured in units such as
number of processes
completed per minute or
number of instructions
executed per second.
Scheduling algorithms do not affect how fast the CPU can execute the instructions of a process or how long it takes for an I/O device to complete a requested operation.
Gantt chart (final):
    | A | B | C | D | E |
    0   8   12  17  20  22
And since CPU-bound processes have long CPU burst times, the
I/O-bound processes will have to wait a relatively long period of
time in the ready queue.
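FCFS can be sketched as a small simulation. This is an illustrative sketch, not code from the course; it assumes the five processes A to E with the arrival times and CPU bursts used in this section's examples.

```python
def fcfs(processes):
    """First-Come First-Served: run each process to completion in
    order of arrival; the CPU idles if no process has arrived yet."""
    chart = []
    t = 0
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        t = max(t, arrival)          # wait for the process to arrive
        chart.append((pid, t, t + burst))
        t += burst
    return chart

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
print(fcfs(procs))
# [('A', 0, 8), ('B', 8, 12), ('C', 12, 17), ('D', 17, 20), ('E', 20, 22)]
```

Each tuple is (process, start time, end time); the boundaries 0, 8, 12, 17, 20, 22 match the FCFS Gantt chart.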
Gantt chart:
    | A | D |
    0   8   11

    Process ID   Arrival Time   CPU Burst
        A              0            8
        B              3            4
        C              4            5
        D              6            3
        E             10            2

• At t = 11, the processes inside the ready queue are B, C, and E, in that order. So the CPU scheduler will select process E (with a CPU burst of 2) to execute next. It will start executing at t = 11 and will end at t = 13.

The Gantt chart will now be:
    | A | D | E |
    0   8   11  13

Process B then executes from t = 13 to t = 17 and process C from t = 17 to t = 22, giving the final Gantt chart:
    | A | D | E | B | C |
    0   8   11  13  17  22
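The example above always picks the ready process with the shortest CPU burst (shortest job first, SJF). A minimal non-preemptive sketch under the same assumptions (function name illustrative), breaking ties by arrival order:

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF: when the CPU becomes free, pick the ready
    process with the shortest CPU burst (ties broken by arrival)."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    ready, chart, t, i = [], [], 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= t:
            pid, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, arrival, pid))
            i += 1
        if not ready:                 # CPU idles until the next arrival
            t = pending[i][1]
            continue
        burst, _, pid = heapq.heappop(ready)
        chart.append((pid, t, t + burst))
        t += burst
    return chart

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
print(sjf(procs))
# [('A', 0, 8), ('D', 8, 11), ('E', 11, 13), ('B', 13, 17), ('C', 17, 22)]
```

The output reproduces the example's final Gantt chart: A, D, E, B, C with boundaries 0, 8, 11, 13, 17, 22.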
• Another disadvantage is that if there is a steady stream of incoming I/O-bound (short-burst) processes, a process with a long CPU burst may be postponed indefinitely. This is called starvation.
Gantt chart:
    | A | B | D | E |
    0   3   7   10  12

    Process ID   Arrival Time   Remaining CPU Burst (at t = 12)
        A              0                5
        B              3                0
        C              4                5
        D              6                0
        E             10                0

• At t = 12, the processes inside the ready queue are A (CPU burst of 5) and C (CPU burst of 5), in that order. Both have the same remaining CPU burst times. Since process A is ahead of process C in the queue, the CPU scheduler will select process A to execute next. It will start executing at t = 12 and will end at t = 17.

Process C then executes from t = 17 to t = 22, giving the final Gantt chart:
    | A | B | D | E | A | C |
    0   3   7   10  12  17  22
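The example above corresponds to the preemptive variant, shortest remaining time first (SRTF). A sketch under the same assumptions: simulating one time unit at a time lets a newly arrived process preempt the running one, with ties broken by arrival order as in the example.

```python
import heapq

def srtf(processes):
    """Preemptive SJF (shortest remaining time first), simulated one
    time unit at a time so new arrivals can preempt the running process."""
    arrivals = sorted(processes, key=lambda p: p[1])
    ready, chart, t, i = [], [], 0, 0
    while i < len(arrivals) or ready:
        while i < len(arrivals) and arrivals[i][1] <= t:
            pid, arr, burst = arrivals[i]
            heapq.heappush(ready, (burst, arr, pid))
            i += 1
        if not ready:                     # CPU idles until next arrival
            t = arrivals[i][1]
            continue
        rem, arr, pid = heapq.heappop(ready)
        if chart and chart[-1][0] == pid and chart[-1][2] == t:
            chart[-1] = (pid, chart[-1][1], t + 1)   # extend current run
        else:
            chart.append((pid, t, t + 1))            # start a new segment
        t += 1
        if rem > 1:
            heapq.heappush(ready, (rem - 1, arr, pid))
    return chart

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
print(srtf(procs))
# [('A', 0, 3), ('B', 3, 7), ('D', 7, 10), ('E', 10, 12), ('A', 12, 17), ('C', 17, 22)]
```

Note how process A is preempted at t = 3 when B arrives with a shorter burst, and resumes at t = 12.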
Gantt chart (time quantum = 3):
    | A |
    0   3

    Process ID   Arrival Time   Remaining CPU Burst (at t = 3)
        A              0                5
        B              3                4
        C              4                5

• At t = 3, the processes inside the ready queue are B (CPU burst of 4) and A (CPU burst of 5), in that order. So the CPU scheduler will select process B to execute next. It will start executing at t = 3.

• At t = 4, process C arrives at the ready queue.

Process B consumes its time slice at t = 6, and process A then executes from t = 6 to t = 9:
    | A | B | A |
    0   3   6   9
    Process ID   Arrival Time   Remaining CPU Burst (at t = 9)
        A              0                2
        B              3                1
        C              4                5
        D              6                3
        E             10                2

• At t = 9, the processes inside the ready queue are C (CPU burst of 5), D (CPU burst of 3), B (CPU burst of 1), and A (CPU burst of 2), in that order. So the CPU scheduler will select process C to execute next. It will start executing at t = 9.

• At t = 10, process E arrives at the ready queue.

• At t = 12, C has consumed its first time slice, so it will stop executing and go back to the ready queue.

The Gantt chart will now be:
    | A | B | A | C |
    0   3   6   9   12
    Process ID   Arrival Time   Remaining CPU Burst (at t = 12)
        A              0                2
        B              3                1
        C              4                2
        D              6                3
        E             10                2

• At t = 12, the processes inside the ready queue are D (CPU burst of 3), B (CPU burst of 1), A (CPU burst of 2), E (CPU burst of 2), and C (CPU burst of 2), in that order. So the CPU scheduler will select process D to execute next. It will start executing at t = 12 and will terminate at t = 15.

Take note that process D will terminate at the exact time its time slice expires. Therefore, process D will no longer go back to the ready queue.

The Gantt chart will now be:
    | A | B | A | C | D |
    0   3   6   9   12  15
    Process ID   Arrival Time   Remaining CPU Burst (at t = 15)
        A              0                2
        B              3                1
        C              4                2
        D              6                0
        E             10                2

• At t = 15, the processes inside the ready queue are B (CPU burst of 1), A (CPU burst of 2), E (CPU burst of 2), and C (CPU burst of 2), in that order. So the CPU scheduler will select process B to execute next. It will start executing at t = 15 and will terminate at t = 16.

Take note that process B will terminate even before it has consumed its entire time slice.

Process A then executes from t = 16 to t = 18, process E from t = 18 to t = 20, and process C from t = 20 to t = 22, giving the final Gantt chart:
    | A | B | A | C | D | B | A | E | C |
    0   3   6   9   12  15  16  18  20  22
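The Round Robin walk-through above can be reproduced with a small illustrative simulation (time quantum = 3). Note the queueing detail the example relies on: processes that arrive during a time slice join the ready queue before the preempted process rejoins it.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin: a process runs for at most one time quantum, then
    rejoins the tail of the ready queue if it still has work left."""
    arrivals = sorted(processes, key=lambda p: p[1])
    ready, chart, t, i = deque(), [], 0, 0
    while i < len(arrivals) or ready:
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append([arrivals[i][0], arrivals[i][2]])  # [pid, remaining]
            i += 1
        if not ready:                     # CPU idles until next arrival
            t = arrivals[i][1]
            continue
        pid, rem = ready.popleft()
        run = min(quantum, rem)
        chart.append((pid, t, t + run))
        t += run
        # processes that arrived during this slice enter the queue first,
        # then the preempted process rejoins at the tail
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append([arrivals[i][0], arrivals[i][2]])
            i += 1
        if rem > run:
            ready.append([pid, rem - run])
    return chart

procs = [("A", 0, 8), ("B", 3, 4), ("C", 4, 5), ("D", 6, 3), ("E", 10, 2)]
print(round_robin(procs, 3))
# [('A', 0, 3), ('B', 3, 6), ('A', 6, 9), ('C', 9, 12), ('D', 12, 15),
#  ('B', 15, 16), ('A', 16, 18), ('E', 18, 20), ('C', 20, 22)]
```

The output matches the example's final Gantt chart, including D and B terminating without rejoining the queue.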
Gantt chart (final):
    | A | C | D | E | B |
    0   8   13  16  18  22
Gantt chart (final):
    | A | B | C | D | E | B | A |
    0   3   4   9   12  14  17  22
• This scheduling algorithm (the multilevel feedback queue) utilizes several ready queues wherein each queue will have a different priority.

• Queue Q0 has the highest priority while Qn has the lowest.

• Q0 up to Qn-1 follow the FCFS algorithm. However, a time quantum is assigned to each queue (take note that higher-priority queues have a smaller time quantum; in the figure, Q0 = 10 ms, Q1 = 20 ms, Q2 = 30 ms).

• If a process does not finish executing within the given time quantum of its queue, it is moved to the next lower-priority queue (for example, from Q1 to Q2).

• Processes that finish executing or request an I/O operation within the given time slice leave the queuing network.

• The lowest-priority queue Qn follows the RR scheduling algorithm. The processes that reach Qn compete for the use of the CPU using the RR algorithm.

[Figure: the queuing network. Q0: FCFS (time slice = 10 ms), Q1: FCFS (time slice = 20 ms), Q2: FCFS (time slice = 30 ms), ..., Qn: RR]

• CPU-bound processes tend to stay in the queuing network and move down to the lower-priority queues.

• So this scheduling algorithm has the capability of segregating the CPU-bound and I/O-bound processes without knowing anything about the behavior of the processes.
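The demotion mechanism can be sketched in a few lines. This is a simplified, illustrative model: it assumes the three quanta from the figure (10, 20, 30 ms), that all processes arrive at t = 0, and that a lower queue is never preempted by new arrivals; the workload (X needs 15 ms, Y needs 8 ms) is hypothetical.

```python
from collections import deque

QUANTA = [10, 20, 30]    # time quanta from the figure: Q0, Q1, Q2 (Q2 is the RR queue)

def mlfq(processes):
    """processes: list of (pid, total CPU time needed). All arrive at t = 0."""
    queues = [deque() for _ in QUANTA]
    for pid, burst in processes:
        queues[0].append([pid, burst])        # every process starts in Q0
    t, log = 0, []
    while any(queues):
        level = next(q for q in range(len(queues)) if queues[q])
        pid, rem = queues[level].popleft()
        run = min(QUANTA[level], rem)
        log.append((pid, level, t, t + run))
        t += run
        if rem > run:
            # time slice expired: demote, or stay in the bottom (RR) queue
            queues[min(level + 1, len(queues) - 1)].append([pid, rem - run])
        # otherwise the process finished and leaves the queuing network
    return log

print(mlfq([("X", 15), ("Y", 8)]))
# [('X', 0, 0, 10), ('Y', 0, 10, 18), ('X', 1, 18, 23)]
```

Each tuple is (process, queue level, start, end): the CPU-bound X exhausts its Q0 quantum and is demoted to Q1, while the shorter Y finishes inside its first slice and leaves the network.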
CONCURRENT PROCESSING
The child process can then use the exec system call to replace itself with a new program.

After the exec system call succeeds, the child process has replaced its own code with that of the new program. It can then start executing this new program.
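On a POSIX system, the fork-then-exec pattern looks like the following sketch. Python's os module wraps the underlying system calls; the echoed message is just an illustrative example.

```python
import os

pid = os.fork()                  # create a child process (POSIX only)
if pid == 0:
    # Child: replace this process image with a new program.
    # If exec succeeds, nothing after this line runs in the child.
    os.execvp("echo", ["echo", "hello from the child"])
else:
    # Parent: wait for the child to terminate and collect its status.
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))
```

The parent keeps running the original program, while the child's memory image is overwritten by the new one.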
• Multithreaded Processes
2. Like processes, each thread has its own state (whether new,
ready, running, blocked, or terminated).
3. Like processes, each thread has its own set of internal CPU
register values.
1. All processes within a computer system have their own main memory space.
Threads within a certain process share the same main memory space.
Specifically, the threads of a process share the code and data regions of the
process to which they belong. Each thread, however, has its own stack.
2. There is less information associated with each thread. This is because most
of the information associated with a process is automatically inherited by its
threads (such as information about memory space), so there is no need to
duplicate such information. As a consequence, a thread control block (TCB) is
much simpler than a process control block (PCB).
3. Threads belonging to the same process also share other resources allocated
to the process such as open files.
6. Changes made to the parent process will not affect its children.
Changes to the main thread of a process, however, may affect
how the other threads execute. This is because threads share the
code and data sections of the memory space of the process. For
example, terminating the main thread will automatically terminate
the process, which in turn terminates all other threads belonging
to that process.
Using threads has several advantages over using several processes in the execution of
programs. Some of these are:
1. It is easier to create a thread than to create a process. This is because threads of the same
process share main memory space. A newly created process has to be allocated its own
memory space. This means that a thread has less overhead as compared to a process.
2. Context switching between threads is easier and faster than context switching between
processes. This is because a TCB is simpler than a PCB so there is less information to be
stored and retrieved during context switches between threads.
3. Since threads share memory space, it is easier for them to communicate and share data
with one another. Data written by a thread can easily be accessed by other threads of the
same process since they use the same data region of the process’ memory space.
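This shared-data advantage can be seen in a short illustrative sketch: several threads of one process update the same variable directly, something separate processes could not do without an inter-process communication mechanism. A lock is still needed so concurrent updates are not lost.

```python
import threading

# Threads of one process share its data region: all workers
# update the same `counter` variable in place.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # prevent lost updates
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all threads to finish

print(counter)   # 40000
```

No explicit data transfer is needed: every thread reads and writes the process's single copy of `counter`.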