Unit 4
• A CPU burst is the time during which the process is being executed on the CPU. An I/O burst is the time during which the process waits for I/O to complete before it can continue. After an I/O burst, the process returns to the ready queue for its next CPU burst.
• Process execution begins with a CPU burst. That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution, rather than with another I/O burst.
• Types of CPU Scheduling:
• There are two types of CPU scheduling: pre-emptive and non-pre-emptive.
1) Pre-emptive Scheduling:
• In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a task with a higher priority before another task, even though that other task is currently running.
• The running task is therefore interrupted for some time and resumed later, once the higher-priority task has finished its execution.
• This type of scheduling takes effect when a process switches either from the running state to the ready state or from the waiting state to the ready state.
• The resources (that is, CPU cycles) are allocated to a process for a limited amount of time and are then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining. It stays in the ready queue until it gets its next chance to execute.
∙ Some algorithms based on pre-emptive scheduling are:
• Round Robin Scheduling (RR) and Shortest Remaining Time Next (SRTN).
∙ Response Time: The response time (RT) of a process is the time from when the process enters the ready queue until it is first allocated the CPU.
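As a small illustration of this definition (the arrival and first-allocation times below are hypothetical example values), response time is simply the difference between the time a process first gets the CPU and its arrival time:

# Response time = time at which the process first gets the CPU - its arrival time.
# The times below are made-up example values, not taken from the slides.
arrival = {"P1": 0, "P2": 1}
first_cpu = {"P1": 0, "P2": 24}
for p in arrival:
    print(p, "response time =", first_cpu[p] - arrival[p])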
• Types of Scheduling Algorithms:
• CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
• The algorithm used by the scheduler to carry out the selection of a process for execution is known as a scheduling algorithm.
a. First Come First Serve (FCFS)
b. Shortest Job First (SJF)
c. Shortest Remaining Time Next (SRTN)
d. Round Robin Scheduling
e. Priority Scheduling
f. Multilevel Queue Scheduling
First Come First Serve (FCFS):
•By far the simplest CPU-scheduling algorithm is the first-come,
first-served (FCFS) scheduling algorithm.
•With this scheme, the process that requests the CPU first is allocated the
CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO
queue.
•When a process enters the ready queue, its PCB is linked onto the tail of
the queue.
• When the CPU is free, it is allocated to the process at the head of the
queue.
•The running process is then removed from the queue. The code for FCFS
scheduling is simple to write and understand.
•On the negative side, the average waiting time under the FCFS policy is
often quite long.
• Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

  Process   Burst Time (ms)
  P1        24
  P2        3
  P3        3

• If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes:

  | P1 (0-24) | P2 (24-27) | P3 (27-30) |

• The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
• If the processes arrive in the order P2, P3, P1, however, the result is the following Gantt chart:

  | P2 (0-3) | P3 (3-6) | P1 (6-30) |

• The waiting time is now 6 milliseconds for P1, 0 milliseconds for P2, and 3 milliseconds for P3, so the average waiting time drops to (6 + 0 + 3)/3 = 3 milliseconds.
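The average-waiting-time calculation above can be reproduced with a short sketch (a minimal illustration; the burst times come from the example, everything else is an assumption):

# FCFS: processes run in arrival order; the waiting time of a process is the
# sum of the burst times of all processes scheduled before it.
def fcfs_waiting_times(bursts):
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)    # this process waits until all earlier bursts finish
        elapsed += burst
    return waits

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waiting_times(bursts)
    print(order, "waiting times:", waits, "average:", sum(waits) / len(waits))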
• Round Robin (RR) Scheduling allocates the CPU to each process in the ready queue for a fixed time quantum, in circular order.
• Advantages of RR:
1. It is practical to implement because it does not depend on knowing the burst time in advance.
2. It does not suffer from the problem of starvation.
3. All jobs get a fair allocation of the CPU.
• Disadvantages of RR:
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding on an ideal time quantum is a difficult task.
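A minimal Round Robin sketch (the burst times reuse the FCFS example; the quantum values are illustrative assumptions) shows how the choice of time quantum drives the number of preemptions, and therefore the context-switching overhead:

from collections import deque

def round_robin(bursts, quantum):
    # Simulate RR; return the completion order and the number of preemptions
    # (each preemption forces a context switch back to the ready queue).
    queue = deque(bursts)
    order, preemptions = [], 0
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # preempted, back to the ready queue
            preemptions += 1
        else:
            order.append(name)                          # burst finishes within the quantum
    return order, preemptions

bursts = [("P1", 24), ("P2", 3), ("P3", 3)]
for q in (2, 8):
    print("quantum", q, "->", round_robin(bursts, q))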
• Priority Scheduling Algorithm:
• Priority Scheduling is a process scheduling algorithm in which the scheduler selects tasks according to their priority. Processes with higher priority execute first, followed by processes with lower priority.
• If two jobs have the same priority, the process that executes first is chosen on a round-robin or FCFS basis. Which priority a process should have depends on its memory requirements, time requirements, the ratio of I/O burst to CPU burst, and so on.
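A minimal non-preemptive priority sketch (the process names, priority values, and the convention that a smaller number means higher priority are all assumptions made for illustration):

# Non-preemptive priority scheduling: always pick the ready process with the
# highest priority (smallest priority number); ties fall back to FCFS order.
ready_queue = [
    ("P1", 3),   # (process name, priority number)
    ("P2", 1),
    ("P3", 2),
]
schedule = sorted(enumerate(ready_queue), key=lambda e: (e[1][1], e[0]))
print([name for _, (name, _) in schedule])   # ['P2', 'P3', 'P1']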
• Types of Priority Scheduling:
• The two main types of priority scheduling are pre-emptive priority scheduling and non-pre-emptive priority scheduling.
• Necessary Conditions for Deadlock:
• A deadlock can arise only if the following four conditions hold simultaneously:
1) Mutual Exclusion
2) Hold and Wait
3) No Pre-emption
4) Circular Wait
1) Mutual Exclusion:
• At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource.
• If another process requests that resource, the requesting process must be delayed until the resource has been released.
• Each resource is either currently assigned to exactly one process or is available.
2) Hold and Wait:
•A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by
other processes.
• A process currently holding resources granted earlier can request new resources.
3) No Pre-emption:
• Resources cannot be pre-empted; i.e., a resource can only be released voluntarily by the process holding it, after the process has completed its task.
• Resources previously granted cannot be forcibly taken away from a process.
• They must be explicitly released by the process holding them.
• 4) Circular Wait: A set of processes is waiting for each other in a circular fashion.
• There exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
• Thus there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
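Circular wait can be pictured as a cycle in a wait-for graph. The sketch below (the graph edges are hypothetical) checks for such a cycle with a simple depth-first search:

# Wait-for graph: an edge P -> Q means process P is waiting for a resource held by Q.
waits_for = {"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]}

def has_cycle(graph):
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True            # back edge found: a circular wait exists
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph)

print(has_cycle(waits_for))        # True: P0 -> P1 -> P2 -> P0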
• Deadlock Handling:
• A deadlock in an OS can be handled in the following four different ways:
• Adopt methods for avoiding the deadlock.
• Prevent the deadlock from occurring (use a protocol).
• Ignore the deadlock.
• Allow the deadlock to occur, detect it, and recover from it.
• Methods of Handling Deadlocks in Operating System / Methods of Deadlock Prevention:
• A deadlock occurs when all four of the conditions (Mutual Exclusion, Hold and Wait, No Pre-emption, and Circular Wait) are satisfied at any point in time.
• The deadlock can be prevented by not allowing all four conditions to be satisfied simultaneously, i.e., by making sure that at least one of the four conditions does not hold.
1) Eliminating the Mutual Exclusion Condition:
• The mutual exclusion condition must hold for non-sharable resources.
• E.g., several processes cannot simultaneously share a printer.
• Sharable resources do not require mutually exclusive access and thus cannot be involved in a deadlock.
• Read-only files are a good example of a sharable resource: if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file.
• A process never needs to wait for a sharable resource. In general, however, it is not possible to prevent deadlock by denying the mutual exclusion condition, because some resources are intrinsically non-sharable.
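As a small illustration of a sharable resource (the file name and thread count are arbitrary), several threads can read the same file concurrently without any lock, so no waiting and hence no deadlock can arise from this access:

import threading

# Create a file once, then let several readers access it at the same time.
with open("shared.txt", "w") as f:
    f.write("read-only data")

def read_file(path):
    # Read-only access is sharable: readers never block each other, so no lock is used.
    with open(path) as f:
        f.read()

threads = [threading.Thread(target=read_file, args=("shared.txt",)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()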
• Eliminating Hold and Wait:
•To ensure that the hold-and-wait condition never occurs in the system,
we must guarantee that, whenever a process requests a resource, it
does not hold any other resources.
•One protocol that we can use requires each process to request and be
allocated all its resources before it begins execution. We can
implement this provision by requiring that system calls requesting
resources for a process precede all other system calls.
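One hedged sketch of this protocol (the two locks and the job body are hypothetical): the job requests every resource it will need up front, before doing any work, so it never holds one resource while waiting in the middle of its work for another:

import threading

printer_lock = threading.Lock()
disk_lock = threading.Lock()

def copy_job():
    # Request all resources before the job begins execution (always in the same order),
    # approximating the "allocate everything before starting" protocol.
    with printer_lock, disk_lock:
        pass  # use the printer and the disk here; both are released together afterwards

threading.Thread(target=copy_job).start()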
• Eliminating No Pre-emption:
• The third necessary condition for deadlock is that there be no pre-emption of resources that have already been allocated.
• To ensure that this condition does not hold, we can use the following protocol.
• If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all resources the process is currently holding are pre-empted. In other words, these resources are implicitly released. The pre-empted resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
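A minimal sketch of this idea with two hypothetical locks: if the second resource cannot be acquired immediately, the process releases what it already holds and retries later, instead of waiting while still holding it:

import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_both():
    while True:
        lock_a.acquire()
        if lock_b.acquire(blocking=False):   # try to get the second resource without waiting
            return                           # got both resources, proceed with the task
        lock_a.release()                     # release what we hold (pre-empt ourselves)
        time.sleep(0.01)                     # back off, then try to regain both resources

acquire_both()
# ... use both resources here ...
lock_b.release()
lock_a.release()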
• Eliminating the Circular Wait Condition:
• The circular wait condition of deadlock can be eliminated by assigning a priority number to each available resource, so that a process can request resources only in increasing order of priority number.
• Whenever a process requests a resource, the priority number of the required resource is compared with the priority numbers of the resources it already holds.
• 1. If the priority number of the requested resource is greater than that of all the currently held resources, the request is granted.
• 2. If the priority number of the requested resource is less than that of the currently held resources, all the resources with a greater priority number must be released first, before acquiring the new resource.
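A minimal lock-ordering sketch (the resource numbering and the two jobs are assumptions for illustration): every process acquires resources in increasing order of their assigned numbers, so no circular chain of waits can form:

import threading

# Each resource gets a fixed priority number; all processes must acquire in increasing order.
resources = {1: threading.Lock(),   # e.g. disk
             2: threading.Lock(),   # e.g. printer
             3: threading.Lock()}   # e.g. tape drive

def run_with(needed_ids):
    # Sort the request set so acquisition always follows the global numbering.
    ordered = sorted(needed_ids)
    for rid in ordered:
        resources[rid].acquire()
    try:
        pass  # use the acquired resources here
    finally:
        for rid in reversed(ordered):
            resources[rid].release()

threading.Thread(target=run_with, args=([3, 1],)).start()
threading.Thread(target=run_with, args=([1, 2, 3],)).start()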