Unit II
• A process is
–A program under execution.
–A dispatchable unit.
–An instance of a program.
–A thread of control.
Process VS Program
• Process: a program under execution; an active entity.
• Program: a set of instructions stored on disk; a passive entity.
Process (Cont…)
• A process is more than the program code.
• It includes the current activity, as represented by the value of the program counter and the contents of the processor registers.
• It also includes a process stack, which contains temporary data such as function parameters, return addresses, and local variables.
• It has a data section, which contains global variables.
• It may also include a heap, which handles run-time memory management (dynamic memory allocation).
[Figure: process memory layout. The stack holds temporary data: function parameters, return addresses, and local variables.]
Operations on a Process
• Creation
• Scheduling
• Execution
• Killing or Termination
States of Process
• A process has the following states
– New.
– Running.
– Waiting.
– Ready.
– Terminated
Process state diagram
Uni-Programming OS
[Diagram: NEW →(admitted) RUNNING →(exit) TERMINATED; a running process may also wait for I/O or an event and resumes when it completes]
Process state diagram
Multi-programming OS
States of Process
• A process has the following states
– New.
– Running.
– Block & wait.
– Ready.
– Terminated.
– Suspend ready.
– Suspend wait (suspend block).
State transition diagram
Schedulers in OS
• Schedulers in OS are special system
software.
• They help in scheduling the processes in
various ways.
• They are mainly responsible for selecting
the jobs to be submitted into the system
and deciding which process to run.
Types
• Long-term scheduler
• Short-term scheduler
• Medium-term scheduler
Long-term Scheduler
• Long-term scheduler is also known as Job
Scheduler.
• It selects a balanced mix of I/O bound and CPU
bound processes from the secondary memory
(new state).
• Then, it loads the selected processes into the
main memory (ready state) for execution.
• The primary objective of long-term scheduler
is to maintain a good degree of
multiprogramming.
Degree of Multiprogramming
In multiprogramming systems,
• Multiple processes may be present in the ready
state which are all ready for execution.
• Degree of multiprogramming is the maximum
number of processes that can be present in the
ready state.
• Long-term scheduler controls the degree of
multiprogramming.
• Medium-term scheduler reduces the degree of
multiprogramming.
Short-term Scheduler
• Short-term scheduler is also known as
CPU Scheduler.
• It decides which process to execute next
from the ready queue.
• After short-term scheduler decides the
process, Dispatcher assigns the decided
process to the CPU for execution.
• The primary objective of short-term
scheduler is to increase the system
performance.
Medium-term Scheduler
• Medium-term scheduler swaps-out the processes
from main memory to secondary memory to free up
the main memory when required.
• Thus, medium-term scheduler reduces the degree of
multiprogramming.
• After some time when main memory becomes
available, medium-term scheduler swaps-in the
swapped-out process to the main memory and its
execution is resumed from where it left off.
• Swapping may also be required to improve the
process mix.
Attributes of process
• A process has following attributes
– Process id
– Program counter
– Process state
– Priority
– General purpose register
– List of open files
– List of open devices
– Protection
Process control block (PCB)
• Each process is represented in the OS by a process control block, which stores the attributes listed above along with memory-protection information.
CPU Switch From Process to Process
Process Scheduling Queues
• Job queue – set of all processes in the
system.
• Ready queue – set of all processes
residing in main memory,
ready and waiting to execute.
• Device queues – set of processes waiting
for an I/O device.
• Process migration between the various
queues.
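The queue migration described above can be sketched with Python deques; the PIDs and the single shared device queue are illustrative assumptions, not part of any real OS API:

```python
from collections import deque

# Toy model of the three scheduling queues (PIDs are illustrative).
job_queue = deque(["P1", "P2", "P3", "P4"])   # all processes in the system
ready_queue = deque()                          # in main memory, waiting for the CPU
device_queue = deque()                         # waiting for one I/O device

# Long-term scheduler: admit jobs from the job queue into the ready queue.
while job_queue:
    ready_queue.append(job_queue.popleft())

# Short-term scheduler: dispatch the process at the head of the ready queue.
running = ready_queue.popleft()

# The running process requests I/O, so it migrates to the device queue.
device_queue.append(running)
print(running, list(ready_queue), list(device_queue))
# → P1 ['P2', 'P3', 'P4'] ['P1']
```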
Ready Queue And Various I/O Device
Queues
Representation of Process
Scheduling
CPU Scheduling
• Maximum CPU utilization is obtained with multiprogramming.
• Maximize throughput.

FCFS Scheduling
• Case #1: Suppose processes P1, P2, P3 arrive in that order, with CPU burst times 24, 3, and 3.

| P1 | P2 | P3 |
0         24   27   30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
• Average turn-around time: (24 + 27 + 30)/3 = 27
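The Case #1 figures can be checked with a short FCFS sketch (all processes assumed to arrive at t = 0 and be served in list order):

```python
# FCFS waiting/turnaround times for bursts 24, 3, 3, all arriving at t = 0.
def fcfs(bursts):
    """Return (waiting_times, turnaround_times) for processes served in list order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent before first getting the CPU
        clock += burst
        turnaround.append(clock)     # completion time (= turnaround, since arrival is 0)
    return waiting, turnaround

w, t = fcfs([24, 3, 3])              # order P1, P2, P3
print(w, sum(w) / 3)                 # → [0, 24, 27] 17.0
print(t, sum(t) / 3)                 # → [24, 27, 30] 27.0
```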
FCFS Scheduling (Cont.)
• Case #2: Suppose that the processes arrive in the order: P2, P3, P1

| P2 | P3 | P1 |
0    3    6         30

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3

Example: FCFS with arrival times

PID  AT  BT
P1   3   4
P2   5   3
P3   0   2
P4   5   1
P5   4   3

PID  AT  BT  CT  TAT = CT − AT  WT = TAT − BT
P1   3   4   7   7 − 3 = 4      4 − 4 = 0
P2   5   3   13  13 − 5 = 8     8 − 3 = 5
P3   0   2   2   2 − 0 = 2      2 − 2 = 0
P4   5   1   14  14 − 5 = 9     9 − 1 = 8
P5   4   3   10  10 − 4 = 6     6 − 3 = 3
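The completion, turnaround, and waiting times above can be reproduced with an FCFS sketch that accounts for arrival times; tie-breaking by PID and idling until the next arrival are assumptions consistent with the figures shown:

```python
# FCFS with arrival times. Ties in arrival time are broken by PID, and the
# CPU idles until the next arrival when the ready queue is empty.
def fcfs(processes):
    """processes: list of (pid, arrival, burst). Returns {pid: (CT, TAT, WT)}."""
    result, clock = {}, 0
    for pid, at, bt in sorted(processes, key=lambda p: (p[1], p[0])):
        clock = max(clock, at) + bt            # wait for arrival, then run to completion
        tat = clock - at                       # turnaround = completion - arrival
        result[pid] = (clock, tat, tat - bt)   # waiting = turnaround - burst
    return result

table = fcfs([("P1", 3, 4), ("P2", 5, 3), ("P3", 0, 2), ("P4", 5, 1), ("P5", 4, 3)])
print(table["P3"], table["P1"], table["P5"], table["P2"], table["P4"])
# → (2, 2, 0) (7, 4, 0) (10, 6, 3) (13, 8, 5) (14, 9, 8)
```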
FCFS Scheduling (Cont.)
• Case #1 is an example of the convoy effect; all the
other processes wait for one long-running process to
finish using the CPU
– This problem results in lower CPU and device
utilization; Case #2 shows that higher utilization might
be possible if the short processes were allowed to run
first.
• The FCFS scheduling algorithm is non-preemptive
– Once the CPU has been allocated to a process, that
process keeps the CPU until it releases it either by
terminating or by requesting I/O.
– It is a troublesome algorithm for time-sharing systems
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU
burst. Use these lengths to schedule the process with the
shortest time.
• Two schemes:
– Nonpreemptive – once CPU given to the process it
cannot be preempted until completes its CPU burst.
– Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives minimum average waiting time for a
given set of processes.
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)

| P1 | P3 | P2 | P4 |
0         7    8    12   16

• Average waiting time: (0 + 6 + 3 + 7)/4 = 4

Example of Preemptive SJF (SRTF)

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

• Average waiting time: (9 + 1 + 0 + 2)/4 = 3
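The preemptive (SRTF) Gantt chart can be verified with a unit-step simulation; the process tuples mirror the arrival/burst table above:

```python
# One-unit-step SRTF (preemptive SJF) simulation.
def srtf(processes):
    """processes: list of (pid, arrival, burst). Returns Gantt segments [pid, start, end]."""
    remaining = {pid: bt for pid, at, bt in processes}
    arrival = {pid: at for pid, at, bt in processes}
    gantt, clock = [], 0
    while any(remaining.values()):
        ready = [p for p in remaining if arrival[p] <= clock and remaining[p] > 0]
        if not ready:
            clock += 1                 # CPU idles until something arrives
            continue
        pid = min(ready, key=lambda p: (remaining[p], arrival[p]))  # shortest remaining time
        if gantt and gantt[-1][0] == pid:
            gantt[-1][2] = clock + 1   # extend the current segment
        else:
            gantt.append([pid, clock, clock + 1])
        remaining[pid] -= 1
        clock += 1
    return gantt

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(srtf(procs))
# → [['P1', 0, 2], ['P2', 2, 4], ['P3', 4, 5], ['P2', 5, 7], ['P4', 7, 11], ['P1', 11, 16]]
```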
Process P1 10 2 7 1
Process P2 20 4 14 2
Process P3 30 6 21 3
HW

Process No.  Arrival Time  CPU Burst  I/O Burst  CPU Burst
P1           0             3          2          2
P2           0             2          4          1
P3           2             1          3          2
P4           5             2          2          1
Determining Length of Next CPU Burst
• The length of the next CPU burst can be estimated by simple averaging of previous bursts.
• A better estimate uses the lengths of previous CPU bursts with exponential averaging:
τ(n+1) = α · t(n) + (1 − α) · τ(n),  where 0 ≤ α ≤ 1.
Examples of Exponential Averaging
• α = 0
– τ(n+1) = τ(n)
– Recent history does not count.
• α = 1
– τ(n+1) = t(n)
– Only the actual last CPU burst counts.
• If we expand the formula, we get:
τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
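A minimal sketch of the estimator; α = 0.5 and τ(0) = 10 are illustrative choices, not values fixed by the formula:

```python
# Exponential averaging of CPU-burst lengths: τ(n+1) = α·t(n) + (1 − α)·τ(n).
def predict(bursts, tau0, alpha=0.5):
    """Return successive predictions τ1, τ2, ... given observed bursts t0, t1, ..."""
    tau, history = tau0, []
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend newest burst with old estimate
        history.append(tau)
    return history

# With α = 0.5 and τ0 = 10, each prediction is the average of the last
# observed burst and the previous prediction.
print(predict([6, 4, 6, 4, 13, 13, 13], tau0=10))
# → [8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```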
Priority Scheduling
• The SJF algorithm is a special case of the general priority scheduling
algorithm
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest
integer = highest priority)
• Priority scheduling can be either preemptive or non-preemptive
– A preemptive approach will preempt the CPU if the priority of the
newly-arrived process is higher than the priority of the currently
running process
– A non-preemptive approach will simply put the new process (with
the highest priority) at the head of the ready queue
• SJF is a priority scheduling algorithm where priority is the predicted next
CPU burst time
• The main problem with priority scheduling is starvation, that is, low
priority processes may never execute
• A solution is aging; as time progresses, the priority of a process in the
ready queue is increased
Priority Scheduling
• Consider the set of 3 processes whose arrival time, priority, and burst times are given below-

Process No.  Arrival Time  Priority  CPU Burst  I/O Burst  CPU Burst
P1           0             2         1          5          3
P2           2             3         3          3          1
P3           3             1         2          3          1

• Average turn-around time = (10 + 13 + 6) / 3 = 29 / 3 = 9.67 units
• Average waiting time = (6 + 9 + 3) / 3 = 18 / 3 = 6 units
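A non-preemptive priority sketch (smaller number = higher priority). It models single CPU bursts only; the I/O bursts in the worked example above are deliberately left out, and the process data here is illustrative:

```python
# Non-preemptive priority scheduling (smaller number = higher priority).
# Each process is (pid, arrival, burst, priority); no I/O bursts modelled.
def priority_np(processes):
    done, clock, remaining = {}, 0, list(processes)
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in remaining)       # idle until next arrival
            continue
        pid, at, bt, prio = min(ready, key=lambda p: (p[3], p[1]))
        clock += bt                                    # run to completion (no preemption)
        done[pid] = (clock, clock - at, clock - at - bt)  # CT, TAT, WT
        remaining = [p for p in remaining if p[0] != pid]
    return done

print(priority_np([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 1, 3)]))
# → {'P1': (4, 4, 0), 'P2': (7, 6, 3), 'P3': (8, 6, 5)}
```

Note that P2, despite its higher priority, must wait for P1 to finish: that is the non-preemptive behaviour described above.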
Round Robin (RR)
• Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the
end of the ready queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU
time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
• Performance
– q large ⇒ RR behaves the same as FIFO.
– q small ⇒ q must still be large with respect to the context-switch time; otherwise, the overhead is too high.
Example: RR with Time Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0   20   37   57   77   97  117  121  134  154  162

Example: RR with Time Quantum = 2

PID  AT  BT
P1   0   5
P2   1   3
P3   2   1
P4   3   2
P5   4   3

PID  CT  TAT = CT − AT  WT = TAT − BT
P1   13  13 − 0 = 13    13 − 5 = 8
P2   12  12 − 1 = 11    11 − 3 = 8
P3   5   5 − 2 = 3      3 − 1 = 2
P4   9   9 − 3 = 6      6 − 2 = 4
P5   14  14 − 4 = 10    10 − 3 = 7
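The quantum-2 completion times can be reproduced with a small RR simulation; the rule that a newly arriving process enters the ready queue ahead of the process whose slice just expired is an assumption consistent with the table:

```python
from collections import deque

# Round Robin with time quantum q. Arrivals during a slice are queued before
# the preempted process rejoins the ready queue.
def rr(processes, q=2):
    """processes: list of (pid, arrival, burst). Returns {pid: completion time}."""
    remaining = {pid: bt for pid, at, bt in processes}
    pending = deque(sorted(processes, key=lambda p: p[1]))
    ready, clock, ct = deque(), 0, {}
    while remaining:
        while pending and pending[0][1] <= clock:
            ready.append(pending.popleft()[0])
        if not ready:
            clock = pending[0][1]                  # idle until next arrival
            continue
        pid = ready.popleft()
        run = min(q, remaining[pid])
        clock += run
        remaining[pid] -= run
        while pending and pending[0][1] <= clock:  # arrivals during this slice
            ready.append(pending.popleft()[0])
        if remaining[pid]:
            ready.append(pid)                      # preempted: back of the queue
        else:
            del remaining[pid]
            ct[pid] = clock
    return ct

print(rr([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 3)]))
```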
Multiple-Processor Scheduling
• If multiple CPUs are available, load sharing
among them becomes possible; the
scheduling problem becomes more
complex.
• We concentrate in this discussion on systems
in which the processors are identical
(homogeneous) in terms of their functionality.
– We can use any available processor to run
any process in the queue.
• Two approaches: Asymmetric processing
and symmetric processing (see next slide)
Multiple-Processor Scheduling
• Asymmetric Multiprocessing (ASMP)
– One processor handles all scheduling decisions,
I/O processing, and other system activities
– The other processors execute only user code
– Because only one processor accesses the
system data, the need for data sharing is
reduced.
• Symmetric Multiprocessing (SMP)
– Each processor schedules itself
– All processes may be in a common ready
queue or each processor may have its own
ready queue
– Either way, each processor examines the
ready queue and selects a process to execute
Multiple Processor Scheduling
➢Efficient use of the CPUs requires load balancing to keep
the workload evenly distributed
❖In a Push migration approach, a specific task regularly
checks the processor loads and redistributes the
waiting processes as needed
❖In a Pull migration approach, an idle processor pulls a
waiting job from the queue of a busy processor
➢Virtually all modern operating systems support SMP, including Windows XP, Solaris, Linux, and Mac OS X
Real-Time Scheduling
• Hard real-time systems – required to complete a
critical task within a guaranteed amount of time.
• Soft real-time computing – requires that critical
processes receive priority over less fortunate ones.
Algorithm Evaluation
Criteria may include several measures, such as:
❑Maximize CPU utilization under the constraint that
the maximum response time is 1 second.
❑Maximize throughput such that turnaround time is linearly proportional to total execution time.
Technique for Algorithm Evaluation
• Deterministic modelling – takes a
particular predetermined workload and
defines the performance of each algorithm
for that workload.
Deterministic Modelling: Using FCFS scheduling
Multithreading Models
• Many-to-One
• One-to-One
• Many-to-Many

Many-to-One
Many user-level threads are mapped to a single kernel thread.
One-to-one
Each user-level thread maps to a kernel thread.
Many-to-Many
Allows many user level threads to be mapped to many kernel
threads
Allows the operating system to create a sufficient number of kernel
threads.
Thank you