OS Unit-2-Notes
PROCESS MANAGEMENT
PROCESS CONCEPT
The process concept includes the following:
1. Process
2. Process state
3. Process Control Block
4. Threads
Process
A process can be thought of as a program in execution (or) A process is the unit of work in a
modern time-sharing system.
A process will need certain resources such as CPU time, memory, files and I/O devices to
accomplish its task.
These resources are allocated to the process either when it is created or while it is executing.
The below figure shows the structure of a process in memory.
The process contains several sections: Text, Data, Heap and Stack.
● The Text Section contains the program code. The process also includes the current
activity, as represented by the value of the program counter and the contents of the
processor’s registers.
● Process stack contains temporary data such as function parameters, return addresses and
local variables.
● Data section contains global variables.
● Heap is memory that is dynamically allocated during process run time.
Difference between Program and Process:
● A program is a passive entity, such as a file containing a list of instructions stored on disk
often called an executable file.
● A process is an active entity with a program counter specifying the next instruction to
execute and a set of associated resources.
● A program becomes a process when an executable file is loaded into memory.
Two common techniques for loading executable files are double-clicking an icon
representing the executable file and entering the name of the executable file on the command
line as in prog.exe or a.out.
Although two processes may be associated with the same program, they are considered as
two separate execution sequences. For instance, several users may be running different copies
of the mail program or the same user may invoke many copies of the web browser program.
Each of these is considered as a separate process.
Process State
As a process executes, it changes state. The process state defines the current activity of that
process.
A process may be in one of the following states:
● New: The process is being created.
● Ready: The process is waiting to be assigned to a processor.
● Running: Instructions are being executed.
● Waiting: The process is waiting for some event to occur such as an I/O completion or
reception of a signal.
● Terminated: The process has finished execution.
Note: Only one process can be running on any processor at any instant of time.
● Two types of queues are present: the Ready Queue and a set of Device Queues. CPU and
I/O are the resources that serve the queues.
● A new process is initially put in the ready queue. It waits there until it is selected for
execution or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
● The process could issue an I/O request and then be placed in an I/O queue.
● The process could create a new child process and wait for the child’s termination.
● The process could be removed forcibly from the CPU, as a result of an interrupt and be
put back in the ready queue.
Schedulers
A process migrates among the various scheduling queues throughout its lifetime. For
scheduling purposes, the operating system must select processes from these queues. The
selection process is carried out by the Scheduler.
Three types of schedulers are used:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler (New to ready state)
● Initially processes are spooled to a mass-storage device (i.e Hard disk), where they are
kept for later execution.
● Long-term scheduler or job scheduler selects processes from this pool and loads them into
main memory for execution. (i.e. from Hard disk to Main memory).
● The long-term scheduler executes much less frequently; minutes may elapse between
the creation of one new process and the next.
● The long-term scheduler controls the degree of multiprogramming (the number of
processes in memory).
Short Term Scheduler (Ready to Running)
● Short-term scheduler or CPU scheduler selects from among the processes that are ready to
execute and allocates the CPU to one of them. (i.e. a process that resides in main memory
will be taken by CPU for execution).
● The short-term scheduler must select a new process for the CPU frequently.
● The short term scheduler must be very fast because of the short time between executions
of processes.
Medium Term Scheduler
Medium Term Scheduler does two tasks:
1. Swapping: Medium-term scheduler removes a process from main memory and stores it
into the secondary storage. After some time, the process can be reintroduced into main
memory and its execution can be continued where it left off. This procedure is called
Swapping.
2. The Medium Term Scheduler moves a process from the CPU to an I/O waiting queue and
from the I/O queue back to the ready queue.
Process Creation
● The init process always has a pid of 1. The init process serves as the root parent process
for all user processes.
● Once the system has booted, the init process can also create various user processes, such
as a web or print server, an ssh server etc.
● kthreadd and sshd are child processes of init.
● The kthreadd process is responsible for creating additional processes that perform tasks
on behalf of the kernel.
● The sshd process is responsible for managing clients that connect to the system by using a
secure shell (ssh).
The ps command is used to obtain a list of processes:
ps –el
The command will list complete information for all processes currently active in the system.
● When a process creates a child process, that child process will need certain resources such
as CPU time, memory, files, I/O devices to accomplish its task.
● A child process may be able to obtain its resources directly from the operating system or
it may be constrained to a subset of the resources of the parent process.
● The parent may have to partition its resources among its children or it may be able to
share some resources such as memory or files among several of its children.
● Restricting a child process to a subset of the parent’s resources prevents any process from
overloading the system by creating too many child processes.
When a process creates a new process, there exist two possibilities for execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (i.e) it has the same program and
data as the parent.
2. The child process has a new program loaded into it.
Process System calls in Unix/ Linux: fork( ), exec( ), wait( ), exit( )
● fork( ): In UNIX OS a new process is created by the fork( ) system call.
● The new process consists of a copy of the address space of the original process. This
mechanism allows the parent process to communicate easily with its child process.
● Both the parent and the child processes continue execution at the instruction after the
fork( ).
● For the new (child) process, the return code of fork( ) is zero.
● The nonzero process identifier of the child is returned to the parent.
● exec( ): After a fork( ) system call, one of the two processes typically uses the exec( )
system call to replace the process’s memory space with a new program.
● The exec( ) system call loads a binary file into memory and starts its execution.
● In this way, the two processes are able to communicate and then go their separate ways.
● wait( ): The parent can either create more children or, if it has nothing else to do while
the child process is running, issue a wait( ) system call to move itself out of the Ready
Queue until the child process terminates.
● The call to exec( ) overlays the process’s address space with a new program; exec( )
does not return control unless an error occurs.
Program for Creating a separate process using the UNIX fork( ) system call
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>
int main( )
{
pid_t pid;
/* fork a child process */
pid = fork( );
if (pid < 0) { /* error occurred */
fprintf(stderr, "Fork Failed");
return 1;
}
else if (pid == 0) { /* child process */
execlp("/bin/ls", "ls", NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait(NULL);
printf("Child Complete");
}
return 0;
}
The above C program shows the UNIX system calls fork, exec, wait. Two different processes
are running copies of the same program.
● The only difference is that the value of pid for the child process is zero, while the value of
pid for the parent is an integer value greater than zero (i.e. the actual pid of the child
process).
● The child process inherits privileges and scheduling attributes from the parent as well as
certain resources such as open files.
● The child process then overlays its address space with the UNIX command /bin/ls (used
to get a directory listing) using the execlp( ) system call (execlp( ) is a version of the
exec( ) system call).
● The parent waits for the child process to complete with the wait( ) system call.
● When the child process completes by either implicitly or explicitly invoking exit( ), the
parent process resumes from the call to wait( ), where it completes using the exit( )
system call.
Process Termination: exit( )
● A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit( ) system call.
● The process may return a status value to its parent process via the wait( ) system call.
● All the resources of the process including physical and virtual memory, open files and I/O
buffers are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons such as:
1. The child has exceeded its usage of some of the resources that it has been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting and the operating system does not allow a child to continue if its
parent terminates.
Cascading Termination
If a parent process terminates, either normally or abnormally, then all its children must also
be terminated. This is referred to as Cascading Termination. It is normally initiated by the
operating system.
In Linux and UNIX systems, a process can be terminated by using the exit( ) system call,
providing an exit status as a parameter:
/* exit with status 1 */
exit(1);
Under normal termination, exit( ) may be called either directly (i.e. exit(1)) or indirectly (i.e.
by a return statement in main( ) ).
A parent process may wait for the termination of a child process by using the wait( ) system
call. The wait( ) system call is passed a parameter that allows the parent to obtain the exit
status of the child. This system call also returns the process identifier of the terminated child
so that the parent can tell which of its children has terminated:
pid_t pid;
int status;
pid = wait(&status);
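As a sketch of how the parent decodes the status value filled in by wait( ), using the standard POSIX macros WIFEXITED and WEXITSTATUS (the child's exit value 7 here is arbitrary):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        exit(7);                        /* child terminates with status 7 */
    } else if (pid > 0) {
        int status;
        pid_t done = wait(&status);     /* blocks until the child terminates */
        if (WIFEXITED(status))          /* true for normal termination */
            printf("child %d exited with status %d\n",
                   (int)done, WEXITSTATUS(status));
    }
    return 0;
}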
Zombie process
A zombie process or defunct process is a process that has completed execution (via the exit
system call) but still has an entry in the process table. This occurs for the child processes, where
the entry is still needed to allow the parent process to read its child's exit status. Once the exit
status is read via the wait system call, the zombie's entry is removed from the process table
and the process is said to be "reaped". A child process always first becomes a zombie before
being removed from the process table.
In most cases, zombies are immediately waited on by their parents and reaped by the system
under normal operation. Processes that stay zombies for a long time generally indicate an
error and cause a resource leak, although they occupy only a process table entry.
In the term's metaphor, the child process has died but has not yet been reaped. Also, unlike
normal processes, the kill command does not affect a zombie process.
Zombie processes should not be confused with orphan processes. An orphan process is a
process that is still executing, but whose parent has died. When the parent dies, the orphaned
child process is adopted by init (process ID 1). When orphan processes die, they do not remain
as zombie processes; instead, they are waited on by init. The result is that a process that is both
a zombie and an orphan will be reaped automatically.
o However, the process's entry in the process table remains. The parent can read the
child's exit status by executing the wait system call, at which point the zombie is
removed. The wait call may be executed in sequential code, but it is commonly executed
in a handler for the SIGCHLD signal, which the parent receives whenever a child has died.
o After the zombie is removed, its process identifier (PID) and entry in the process table
can be reused. However, if a parent fails to call wait, the zombie will be left in the
process table, causing a resource leak. In some situations this may even be desirable:
as long as the parent holds the zombie's entry, if the parent creates another child
process, that child will not be allocated the same PID.
o The following special case applies to modern UNIX-like systems. If the parent explicitly
ignores SIGCHLD by setting its handler to SIG_IGN (rather than simply ignoring the
signal by default) or has the SA_NOCLDWAIT flag set, all child exit status information
will be discarded, and no zombie processes will be left.
o Zombies can be identified in the output from the UNIX "ps" command by the presence
of a "Z" in the "STAT" column. Zombies that exist for more than a short time typically
indicate a bug in the parent program or just an uncommon decision to not reap children.
o If the parent program is no longer running, zombie processes typically indicate a bug in
the operating system. As with other resource leaks, the presence of a few zombies is not
worrying in itself but may indicate a problem that would grow serious under heavier loads.
Since no memory is allocated to zombie processes, the only system memory usage
is for the process table entry itself. The primary concern with many zombies is not
running out of memory but rather running out of process table entries and, concretely,
of process ID numbers.
o Using the kill command, the SIGCHLD signal can be sent to the parent
manually to prompt it to remove zombies from the system. If the parent process still
refuses to reap the zombie, and if it would be acceptable to terminate the parent process,
the next step can be to remove the parent process. When a process loses its parent, init
becomes its new parent. init periodically executes the wait system call to reap any
zombies whose parent is init.
/* The child exits at once; the sleeping parent leaves it a zombie. */
#include <unistd.h>
int main(void)
{
    pid_t child_pid = fork();
    if (child_pid > 0)   /* parent process */
        sleep(50);       /* child is a zombie until the parent waits or exits */
    return 0;
}
Orphan Processes
If a parent terminates without invoking wait( ), its child processes are left behind as
Orphan processes.
● Linux and UNIX address this scenario by assigning the init process as the new parent to
orphan processes.
● The init process periodically invokes wait( ), thereby allowing the exit status of any
orphaned process to be collected and releasing the orphan’s process identifier and
process-table entry.
// A child that outlives its parent becomes an orphan.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
    pid_t pid = fork();
    if (pid == 0)
    {
        sleep(30);   /* parent exits first; init adopts this child */
    }
    return 0;
}
Process Data structures
Process Table
● It is maintained by the OS and is stored in RAM; every process currently executing has
an entry in the process table.
It contains:
● Identifier: Process ID, Parent Process ID, Child Process ID and User ID.
PID: When a process is started, it is given a unique number called the process ID (PID).
Init: There is one important process called init, which is the grandfather of all other processes.
It has a PID of 1, and the kernel itself has a PID of 0.
PPID: In addition to a unique process ID, each process is assigned a parent process ID that
tells which process started it.
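As a small illustration on a POSIX system, a process can print its own PID and its parent's PID with getpid( ) and getppid( ):
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    /* getpid( ) returns the PID of the calling process,
       getppid( ) the PID of its parent */
    printf("PID  = %d\n", (int)getpid());
    printf("PPID = %d\n", (int)getppid());
    return 0;
}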
Threads in OS
A thread is a path of execution within a process. A process can contain multiple threads.
Why Multithreading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc.
Process vs Thread
The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result threads share with
other threads their code section, data section, and OS resources (like open files and signals).
But, like a process, a thread has its own program counter (PC), register set, and stack space.
Advantages of Threads
1. Responsiveness: If a process is divided into multiple threads, then as soon as one thread
completes its execution, its output can be returned immediately.
2. Faster context switch: Context switch time between threads is lower compared to process
context switch. Process context switching requires more overhead from the CPU.
3. Effective utilization of a multiprocessor system: If we have multiple threads in a single
process, then we can schedule multiple threads on multiple processors. This will make process
execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among all threads
within a process.
Note: stack and registers can’t be shared among the threads. Each thread has its own stack and
registers.
5. Communication: Communication between multiple threads is easier, as the threads share
a common address space, whereas processes must follow specific inter-process
communication techniques to communicate with one another.
6. Enhanced throughput of the system: If a process is divided into multiple threads, and each
thread function is considered as one job, then the number of jobs completed per unit of time is
increased, thus increasing the throughput of the system
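As a minimal sketch using POSIX threads (the worker function and the results array are illustrative names), two threads share the data section of one process while each keeps its own stack; compile with cc file.c -lpthread:
#include <stdio.h>
#include <pthread.h>

int results[2];                     /* data section: shared by all threads */

void *worker(void *arg)
{
    int idx = *(int *)arg;          /* each thread reads its argument into
                                       a local variable on its own stack */
    results[idx] = (idx + 1) * 10;  /* write to a distinct slot: no race */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int i0 = 0, i1 = 1;
    pthread_create(&t1, NULL, worker, &i0);
    pthread_create(&t2, NULL, worker, &i1);
    pthread_join(t1, NULL);         /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("results: %d %d\n", results[0], results[1]);   /* 10 20 */
    return 0;
}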
Types of Threads
A thread is a single sequence stream within a process. Threads have many of the properties of
processes, so they are called lightweight processes. Threads execute one after another but
give the illusion that they are executing in parallel. Each thread has different states. Each
thread has
1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other as they share the code, data, OS resources etc.
Types of Threads:
1. User Level Thread (ULT) – User-level threads are implemented in a user-level library;
they are not created using system calls, and thread switching does not need to call the
OS or to cause an interrupt to the kernel. The kernel doesn’t know about the user
level threads and manages them as if they were single-threaded processes.
· Advantages of ULT –
· Can be implemented on an OS that doesn’t support multithreading.
· Simple representation since thread has only program counter, register set,
stack space.
· Simple to create since no intervention of the kernel.
· Thread switching is fast since no OS calls need to be made.
· Limitations of ULT –
· There is little or no coordination between the threads and the kernel.
· If one thread causes a page fault, the entire process blocks.
2. Kernel Level Thread (KLT) – Kernel knows and manages the threads. Instead of a thread
table in each process, the kernel itself has a thread table (a master one) that keeps track of
all the threads in the system. In addition, the kernel also maintains the traditional process
table to keep track of the processes. The OS kernel provides system calls to create and
manage threads.
· Advantages of KLT –
· Since the kernel has full knowledge about the threads in the system, the
scheduler may decide to give more time to processes having a large number
of threads.
· Good for applications that frequently block.
· Limitations of KLT –
· Slow and inefficient.
· It requires a thread control block so it is an overhead.
Threading issues in OS
1. Thread cancellation
Termination of a thread in the middle of its execution is termed ‘thread cancellation’. Let
us understand this with the help of an example. Consider a multithreaded program whose
several threads are searching through a database for some information. If one of the threads
returns with the desired result, the remaining threads can be canceled.
The thread that we want to cancel is termed the target thread.
Thread cancellation can be performed in two ways:
Asynchronous Cancellation: In asynchronous cancellation, one thread immediately
terminates the target thread.
Deferred Cancellation: In deferred cancellation, the target thread checks itself at regular
intervals to determine whether it can terminate itself safely.
The issues related to the target thread are listed below:
What if resources had been allotted to the target thread being canceled?
What if the target thread is terminated while it was updating data it was sharing with some
other thread?
Here asynchronous cancellation is troublesome: a thread immediately cancels the target
thread without checking whether it is holding any resources.
In deferred cancellation, by contrast, one thread indicates to the target thread that it is to be
canceled, and the target thread checks its flag to confirm whether it can be canceled safely.
The points at which a thread can be canceled safely are termed cancellation points by
Pthreads.
2. Signal Handling
Signal handling is more convenient in a single-threaded program, as the signal is
directly forwarded to the process. But when it comes to a multithreaded program, the issue
arises as to which thread of the program the signal should be delivered.
The signal could be delivered to:
● All the threads of the process.
● Some specific threads of the process.
● The thread to which it applies.
● One designated thread assigned to receive all the signals.
How the signal is delivered to a thread depends upon the type of signal generated. Generated
signals are classified into two types: synchronous signals and asynchronous signals.
Synchronous signals are forwarded to the same process that performed the operation causing
the signal. Asynchronous signals are generated by an event external to the running process;
the running process thus receives the signals asynchronously.
So, if the signal is synchronous, it is delivered to the specific thread that caused the
generation of the signal. If the signal is asynchronous, it cannot be determined in advance to
which thread of the multithreaded program it should be delivered. If an asynchronous signal
notifies the process to terminate, the signal is delivered to all the threads of the process.
The issue of asynchronous signals is resolved to some extent in most multithreaded
UNIX systems: a thread is allowed to specify which signals it will accept and which it
will block. However, the Windows operating system does not support the concept of signals;
instead it uses asynchronous procedure calls (APCs), which are similar to the asynchronous
signals of UNIX systems.
UNIX allows a thread to specify which signals it will accept and which it will not, whereas an
APC is delivered to a specific thread rather than to a process.
3. Thread Pool
When a user requests a web page from the server, the server creates a separate thread to
service the request. However, this approach has some potential issues. If there were no bound
on the number of active threads in the system and a new thread were created for every new
request, the result would eventually be exhaustion of system resources.
We are also concerned about the time it takes to create a new thread. The time required to
create a thread must not exceed the time the thread spends servicing the request before being
discarded, as that would result in wastage of CPU time.
The solution to this issue is the thread pool. The idea is to create a finite number of threads
when the process starts. This collection of threads is referred to as the thread pool. The threads
stay in the thread pool and wait till they are assigned any request to be serviced.
Whenever the request arrives at the server, it invokes a thread from the pool and assigns it the
request to be serviced. The thread completes its service and returns to the pool and waits for the
next request.
If the server receives a request and does not find any free thread in the thread pool, it waits
for some thread to become free and return to the pool. This is much better than creating
a new thread each time a request arrives, and it is convenient for systems that cannot handle
many concurrent threads. See the sketch below.
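As a minimal sketch of the idea using POSIX threads (queue size, thread count and the integer job ids are all illustrative), a fixed set of pooled workers service requests from a shared queue:
#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4
#define NJOBS    8
#define QSIZE    16

/* a circular queue of job ids, protected by a mutex */
static int queue[QSIZE], head = 0, tail = 0, count = 0, done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&notempty, &lock);  /* idle threads wait in the pool */
        if (count == 0 && done) {                 /* no more requests will arrive */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int job = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&lock);
        printf("thread %lu servicing request %d\n",
               (unsigned long)pthread_self(), job);
    }
}

int main(void)
{
    pthread_t pool[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&pool[i], NULL, worker, NULL);

    for (int job = 0; job < NJOBS; job++) {       /* submit incoming requests */
        pthread_mutex_lock(&lock);
        queue[tail] = job;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&notempty);           /* wake one pooled thread */
        pthread_mutex_unlock(&lock);
    }

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&notempty);            /* let idle workers exit */
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}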
Process vs Thread
Process:
Processes are the programs that are dispatched from the ready state and scheduled on
the CPU for execution. A process is represented by a PCB (Process Control Block). A process
can create other processes, which are known as Child Processes. A process takes more time to
terminate, and it is isolated: it does not share memory with any other process.
A process can be in the following states: new, ready, running, waiting, terminated, and
suspended.
Thread:
A thread is a segment of a process: a process can have multiple threads, and these
threads are contained within the process. A thread has three states: Running, Ready, and
Blocked.
A thread takes less time to terminate than a process, but unlike processes, threads are not
isolated from one another.
Process vs Thread (selected rows):
3. A process takes more time for creation. | A thread takes less time for creation.
11. A process has its own Process Control Block, Stack, and Address Space. | A thread has
the parent’s PCB, its own Thread Control Block and Stack, and a common Address Space.
12. Changes to the parent process do not affect child processes. | Since all threads of the
same process share the address space and other resources, any changes to the main thread
may affect the behavior of the other threads of the process.
Multi-Threading Models
Multithreading is the execution of multiple threads at the same time.
Many operating systems support kernel threads and user threads in a combined way; an
example of such a system is Solaris. Multithreading models are of three types: Many-to-One,
One-to-One and Many-to-Many.
In the Many-to-Many model, multiple user threads are multiplexed onto the same or a smaller
number of kernel-level threads. The number of kernel-level threads is specific to the machine.
The advantage of this model is that if a user thread is blocked, other user threads can be
scheduled onto another kernel thread; thus the system does not block if a particular thread is
blocked.
In the One-to-One model, as each user thread is connected to a different kernel thread, if any
user thread makes a blocking system call, the other user threads will not be blocked.
CPU SCHEDULING
CPU scheduling is the basis of Multiprogrammed operating systems. By switching the CPU
among processes, the operating system can make the computer more productive.
● In a single-processor system, only one process can run at a time. Others must wait until
the CPU is free and can be rescheduled.
● Otherwise the CPU sits idle while a process waits for an I/O operation to complete;
only when the I/O operation completes does the CPU resume executing that process. A
lot of CPU time is wasted this way.
● The objective of multiprogramming is to have some process running at all times to
maximize CPU utilization.
● When several processes are in main memory, if one process is waiting for I/O then the
operating system takes the CPU away from that process and gives the CPU to another
process. Hence there will be no wastage of CPU time.
Concepts of CPU Scheduling
1. CPU–I/O Burst Cycle
2. CPU Scheduler
3. Preemptive Scheduling
4. Dispatcher
CPU–I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait.
● Process execution begins with a CPU burst. That is followed by an I/O burst. Processes
alternate between these two states.
● The final CPU burst ends with a system request to terminate execution.
● Hence the First cycle and Last cycle of execution must be CPU burst.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the Short-Term
Scheduler or CPU scheduler.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four cases:
1. When a process switches from the running state to the waiting state.
Example: as the result of an I/O request or an invocation of wait( ) for the termination of a
child process.
2. When a process switches from the running state to the ready state.
Example: when an interrupt occurs
3. When a process switches from the waiting state to the ready state.
Example: at completion of I/O.
4. When a process terminates.
When scheduling takes place only under situations 1 and 4, the scheduling scheme is
nonpreemptive (cooperative); under situations 2 and 3 it is preemptive. Mac OS X,
Windows 95 and all subsequent versions of Windows use preemptive scheduling.
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. Dispatcher function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, since it is invoked during every process switch.
The time it takes for the dispatcher to stop one process and start another process running is
known as the Dispatch Latency.
SCHEDULING CRITERIA
Different CPU-scheduling algorithms have different properties and the choice of a particular
algorithm may favor one class of processes over another.
Many criteria have been suggested for comparing CPU-scheduling algorithms:
● CPU utilization: CPU must be kept as busy as possible. CPU utilization can range from
0 to 100 percent. In a real system, it should range from 40 to 90 percent.
● Throughput: The number of processes that are completed per time unit.
● Turn-Around Time: It is the interval from the time of submission of a process to the
time of completion. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU and doing I/O.
● Waiting time: It is the amount of time that a process spends waiting in the ready queue.
● Response time: It is the time from the submission of a request until the first response is
produced. Interactive systems use response time as its measure.
Note: It is desirable to maximize CPU utilization and Throughput and to minimize Turn-
Around Time, Waiting time and Response time.
CPU SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue
is to be allocated to the CPU. Different CPU-scheduling algorithms are:
1. First-Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First Scheduling (SJF)
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
Gantt Chart is a bar chart that is used to illustrate a particular schedule including the start and
finish times of each of the participating processes.
First-Come, First-Served Scheduling (FCFS)
In FCFS, the process that requests the CPU first is allocated the CPU first.
● FCFS scheduling algorithm is Non-preemptive.
● Once the CPU has been allocated to a process, it keeps the CPU until it releases the CPU.
● FCFS can be implemented by using FIFO queues.
● When a process enters the ready queue, its PCB is linked onto the tail of the queue.
● When the CPU is free, it is allocated to the process at the head of the queue.
● The running process is then removed from the queue.
Example 1: Consider the following set of processes that arrive at time 0. The processes
arrive in the order P1, P2, P3, with the length of the CPU burst given in milliseconds.

Process   Burst Time
P1        24
P2        3
P3        3

Gantt Chart for FCFS:
| P1 | P2 | P3 |
0    24   27   30
The average waiting time under the FCFS policy is often quite long.
● The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2 and 27
milliseconds for process P3.
● Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
Convoy Effect in FCFS
Convoy effect means, when a big process is executing in CPU, all the smaller processes must
have to wait until the big process execution completes. This will affect the performance of the
system.
Example 2: Let us consider the same processes as above but arriving in the
order P2, P3, P1.

Gantt Chart for FCFS:
| P2 | P3 | P1 |
0    3    6    30

With the processes arriving in the order P2, P3, P1, the average waiting time is
(6 + 0 + 3)/3 = 3 milliseconds, whereas with the order P1, P2, P3 the average waiting time
is 17 milliseconds.
Disadvantage of FCFS:
FCFS scheduling algorithm is Non-preemptive, it allows one process to keep CPU for a long
time. Hence it is not suitable for time sharing systems.
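As a quick check of the arithmetic in Example 1, a small sketch that computes the FCFS average waiting time from burst times listed in arrival order:
#include <stdio.h>

int main(void)
{
    /* burst times in arrival order P1, P2, P3 (Example 1) */
    int burst[] = {24, 3, 3};
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        total += wait;       /* each process waits for all earlier bursts */
        wait  += burst[i];
    }
    printf("average waiting time = %.1f ms\n", (double)total / n);  /* 17.0 */
    return 0;
}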
Determining the length of the next CPU burst (SJF)
The next CPU burst is predicted as an exponential average of the measured lengths of
previous CPU bursts. Let tn be the length of the nth CPU burst (i.e. the most recent
information), and let τn be the predicted value for the nth CPU burst (which stores the past
history). Then the predicted value for the next CPU burst is:
τn+1 = α·tn + (1 − α)·τn
α controls the relative weight of recent and past history in our prediction (0 ≤ α ≤ 1).
● If α = 0, then τn+1 = τn; recent history has no effect.
● If α = 1, then τn+1 = tn; only the most recent CPU burst matters.
● If α = 1/2, recent history and past history are equally weighted.
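As a small sketch of the formula, using the classic illustration values (α = 1/2, initial guess τ0 = 10, measured bursts 6, 4, 6, 4, 13):
#include <stdio.h>

int main(void)
{
    double alpha = 0.5, tau = 10.0;          /* initial guess for the first burst */
    double t[] = {6.0, 4.0, 6.0, 4.0, 13.0}; /* measured burst lengths */

    for (int n = 0; n < 5; n++) {
        printf("predicted %.2f, actual %.2f\n", tau, t[n]);
        tau = alpha * t[n] + (1.0 - alpha) * tau;   /* exponential average */
    }
    printf("next prediction: %.2f\n", tau);
    return 0;
}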
Shortest Remaining Time First Scheduling (SRTF)
SRTF is the preemptive SJF algorithm.
● A new process arrives at the ready queue, while a previous process is still executing.
● The next CPU burst of the newly arrived process may be shorter than the currently
executing process.
● SRTF will preempt the currently executing process and execute the shortest job.
Consider the following four processes, with arrival times and burst times in milliseconds:

Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Gantt Chart for SRTF:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
P1 is preempted at time 1 because P2’s burst (4 ms) is shorter than P1’s remaining time
(7 ms). The average waiting time is [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 ms.
Priority Scheduling
A priority is associated with each process and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order.
● An SJF algorithm is a special kind of priority scheduling algorithm where small CPU
bursts will have higher priority.
● Priorities can be defined based on time limits, memory requirements, the number of open
files etc.
Example: Consider the following processes with CPU bursts and priorities. All the processes
arrive at time t = 0 in the order given. Low numbers denote higher priority.

Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2
Gantt chart for Priority Scheduling:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms.
Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into separate queues; for
example, five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues.
● No process in a lower-level queue starts executing unless all the higher-level queues
are empty.
Example: The interactive processes start executing only when all the processes in the
system queue are empty.
● If a lower priority process is executing and a higher priority process enters into the queue
then a lower priority process will be preempted and starts executing a higher priority
process.
Example: If a system process entered the ready queue while an interactive process was
running, the interactive process would be preempted.
Disadvantage: Starvation of Lower level queue
The multilevel queue scheduling algorithm is inflexible.
● The processes are permanently assigned to a queue when they enter the system. Processes
are not allowed to move from one queue to another queue.
● There is a chance that lower level queues will be in starvation because unless the higher
level queues are empty no lower level queues will be executing.
● If at any instant of time there is a process in a higher-priority queue, then the
lower-level processes may never get a chance to execute.
Multilevel Feedback Queue Scheduling is used to overcome the problem of Multi-level queue
scheduling.
Multilevel Feedback Queue Scheduling (MLFQ)
Multilevel feedback queue scheduling algorithm allows a process to move between queues.
● Processes are separated according to the characteristics of their CPU bursts.
● If a process uses too much CPU time, it will be moved to a lower-priority queue.
● A process that waits too long in a lower-priority queue moved to a higher-priority queue.
● This form of aging prevents starvation.
Consider a multilevel feedback queue scheduler with three queues: queue0, queue1, queue2.
● The scheduler first executes all processes in queue0 then queue1 and then queue2.
● Only when queue0 and queue1 are empty will the scheduler execute processes in queue2.
● A process that arrives for queue1 will preempt a process in queue2. A process in queue1
will in turn be preempted by a process arriving for queue0.
● A process entering the ready queue is put in queue0. A process in queue 0 is given a time
quantum of 8ms. If it does not finish within this time, it is moved to the tail of queue 1.
● If queue 0 is empty, the process at the head of queue1 is given a quantum of 16ms. If it
does not complete, it is preempted and is put into queue2.
● Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are
empty.
● This scheduling algorithm gives highest priority to any process with a CPU burst of 8ms
or less. Such a process will quickly get the CPU and finish its CPU burst and go off to its
next I/O burst.
● Processes that need more than 8ms but less than 24ms are also served quickly, although
with lower priority than shorter processes.
● Long processes automatically sink to queue2 and are served in FCFS order with any CPU
cycles left over from queue0 and queue1.
A Multi-Level Feedback queue scheduler is defined by the following parameters:
● The number of queues.
● The scheduling algorithm for each queue.
● The method used to determine when to upgrade a process to a higher priority queue.
● The method used to determine when to demote a process to a lower priority queue.
● The method used to determine which queue a process will enter when that process needs
service.
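As a sketch, the parameters above might be captured in a small configuration structure; the struct and field names are illustrative, and the values are the ones from the three-queue example:
#include <stdio.h>

#define MAXQ 3

/* illustrative MLFQ configuration: one entry per queue */
struct mlfq_level {
    const char *policy;   /* scheduling algorithm for this queue */
    int quantum_ms;       /* time quantum; 0 means FCFS (run to completion) */
};

int main(void)
{
    struct mlfq_level cfg[MAXQ] = {
        {"RR", 8},    /* queue0: round robin, 8 ms quantum  */
        {"RR", 16},   /* queue1: round robin, 16 ms quantum */
        {"FCFS", 0},  /* queue2: first-come, first-served   */
    };
    for (int i = 0; i < MAXQ; i++)
        printf("queue%d: %s quantum=%dms\n", i, cfg[i].policy, cfg[i].quantum_ms);
    return 0;
}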
MULTIPLE-PROCESSOR SCHEDULING
Scheduling becomes more complex with multiple CPU structures, but Load Sharing also
becomes possible.
In multiple processor systems, we can use any available processor to run any process in
the queue.
Multiprocessor Scheduling Approaches
There are two approaches to multiprocessing: Asymmetric and Symmetric Multiprocessing.
● In Asymmetric Multiprocessing, Master and Worker relationships exist.
The master server is a single processor that handles the activities related to all scheduling
decisions, I/O processing and other system activities. The other processors execute only
user code.
● Asymmetric multiprocessing is simple because only one processor accesses the system
data structures, reducing the need for data sharing.
● In Symmetric Multiprocessing (SMP) each processor is self-scheduling. All processes
may be in a common ready queue or each processor may have its own private ready
queue processes.
● Scheduling proceeds by having the scheduler for each processor examine the ready
queue and select a process to execute.
● When multiple processors try to access and update a common data structure, the
scheduler must be programmed carefully.
● The scheduler must ensure that two separate processors do not choose to schedule the
same process and that processes are not lost from the queue.
Processor Affinity
Processor Affinity means “to make a process run on the same CPU it ran on last time”.
● The processor cache contains the data that is most recently accessed by the process.
● If the process migrates from one processor to another processor, the contents of cache
memory must be invalidated for the first processor and the cache for the second processor
must be entered again.
● Because of the high cost of invalidating and re-entering caches, most SMP systems try to
avoid migration of processes from one processor to another and instead attempt to keep a
process running on the same processor.
Processor affinity can be implemented in two ways: Soft affinity and Hard affinity.
● Soft affinity: The operating system will attempt to keep a process on a single
processor, but it is possible for a process to migrate between processors.
● Hard affinity: Some systems provide system calls that support Hard Affinity, thereby
allowing a process to specify a subset of processors on which it may run.
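On Linux, for example, hard affinity can be requested with the sched_setaffinity( ) system call. A minimal sketch (Linux-specific; requires _GNU_SOURCE) that pins the calling process to CPU 0:
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);          /* start with an empty CPU set */
    CPU_SET(0, &set);        /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}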
Load Balancing
Load balancing attempts to keep the workload evenly distributed across all processors in an
SMP system.
Load balancing is necessary only on systems where each processor has its own private queue
of processes to execute.
There are two approaches to load balancing: Push Migration and Pull Migration.
● Push migration: A specific process periodically checks the load on each processor. If the
task finds an imbalance then it evenly distributes the load by moving (pushing) processes
from overloaded processors to idle or less-busy processors.
● Pull migration: It occurs when an idle processor pulls a waiting process from a busy
processor.
Multicore Processors
In a multicore processor each core acts as a separate processor. Multicore processors
may complicate scheduling issues.
● When a processor accesses memory, it spends a significant amount of time waiting for
the data to become available. This waiting time is called a Memory Stall.
● Memory Stall may occur for several reasons for example Cache miss.
● In order to avoid memory stall, many recent hardware designs have implemented
multithreaded processor cores in which two or more hardware threads are assigned to
each core. That way, if one thread stalls while waiting for memory, the core can switch
to another thread.
● The execution of thread0 and the execution of thread 1 are interleaved on a dual-threaded
processor core.
● From an operating-system perspective, each Hardware Thread appears as a Logical
Processor that is available to run a software thread.
● Thus, on a Dual-threaded, Dual-core system, Four logical processors are presented to
the operating system.
There are two ways to multithread a processing core: Coarse-grained and Fine-grained
Coarse-Grained Multithreading:
● A thread executes on a processor until a long-latency event such as a memory stall occurs.
● The processor must switch to another thread to begin execution, because of the delay
caused by the long-latency event.
● The cost of switching between threads is high, since the instruction pipeline must be
flushed before the other thread can begin execution on the processor core.
● Once this new thread begins execution, it begins filling the pipeline with its instructions.
Fine-grained (or) Interleaved Multithreading:
● Thread switching done at the boundary of an instruction cycle.
● The architectural design of fine-grained systems includes logic for thread switching. As a
result, the cost of switching between threads is small.
Process System calls in Unix/ Linux: fork( ), exec( ), wait( ), exit( ), vfork(),
waitpid()
wait( )
wait( ) causes the parent to wait for the child to alter its state. The status change could be due
to the child process being terminated, stopped by a signal, or resumed by a signal. When a
child process quits or switches state, the parent process should be notified of the child’s
alteration in state or termination status. In that instance, the parent process uses functions
like wait( ) to inquire about the update in the state of the child process.
wait( ) suspends the caller process until the system receives information on the ending child’s
status. wait( ) returns instantly if the system already has status information on a finished child
process when invoked. wait( ) is also terminated if the caller process receives a signal whose
action is to run a signal handler or to terminate the process.
The waitpid( ) system function pauses the current process until the pid argument specifies a
child with an altered state. By default, waitpid( ) waits solely for terminated children; however,
this behavior can be changed. The wait( ) system call accepts just one parameter, which
receives the child’s status information. If you don’t care about the child process’s exit status
and only care about making the parent wait for the child, use NULL as the value.
Waitpid()
The waitpid() system call monitors a child of the caller process for state changes and retrieves
information about the child whose state has changed. A child halted by a signal or resumed by
a signal is regarded as having changed state. Waiting for a terminated child enables the system
to free the resources associated with the child; if no wait is performed, the terminated child
remains in a “zombie” condition.
The waitpid() system function pauses the current process until the PID argument specifies a
child who has changed. The calling process is paused until a child’s process completes or is
terminated. Waitpid() halts the calling process till the system receives information about the
child’s status. Waitpid() returns quickly if the system already has status information on a
suitable child when it is called. If the caller process gets a signal with the action of either
executing a signal handler or terminating the process, waitpid() is terminated. The waitpid()
function will pause the caller thread’s execution until it receives information and updates for
one of its terminated child processes or a signal that will either run a signal-catching procedure
or terminate the process.
Syntax of waitpid():
pid_t waitpid(pid_t pid, int *status, int options);
The pid argument is interpreted as follows:
● < -1: Wait for any child process whose process group ID is equal to the absolute value of pid.
● -1: Wait for any child process.
● 0: Wait for any child process whose process group ID is equal to that of the calling process.
● > 0: Wait for the child whose process ID is equal to the value of pid.
The value of options is an OR of zero or more of the following constants:
● WNOHANG: return immediately if no child has exited.
● WUNTRACED: also return if a child has stopped.
● WCONTINUED: also return if a stopped child has been resumed by delivery of SIGCONT.
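Example: a minimal sketch follows (the child's exit status 3 is arbitrary, and the PIDs in the output will differ from run to run):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        printf("child %d running\n", (int)getpid());
        exit(3);                                  /* arbitrary exit status */
    }
    int status;
    pid_t done = waitpid(pid, &status, 0);        /* wait for this child only */
    if (done == pid && WIFEXITED(status))
        printf("child %d exited with status %d\n",
               (int)done, WEXITSTATUS(status));
    return 0;
}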
exit( )
exit( ) is the system call used to terminate a process. It indicates that execution is complete,
notably in the case of a multithreaded environment. The status of the process is captured for
future reference.
After the exit( ) system call, all the resources used by the process are reclaimed by the
operating system and the process is terminated. The _Exit( ) call is equivalent to _exit( ).
Synopsis
#include <unistd.h>
void _exit(int status);
#include <stdlib.h>
void _Exit(int status);
vfork()
vfork() is a special case of clone(2). It is used to create new processes without copying the
page tables of the parent process. It may be useful in performance sensitive applications where
a child will be created which then immediately issues an execve().
vfork() differs from fork() in that the parent is suspended until the child makes a call
to execve(2) or _exit(2). The child shares all memory with its parent, including the stack,
until execve() is issued by the child. The child must not return from the current function or
call exit(), but may call _exit().
vfork creates a child process and blocks the parent process.
Note: In vfork, signal handlers are inherited but not shared.
The man page for vfork describes what vfork is used for, its syntax, and all the other
required details. The prototype is:
pid_t vfork(void);
vfork is the same as fork except that the behavior is undefined if the process created by vfork
either modifies any data other than the variable of type pid_t used to store the return value of
vfork, or calls any other function between returning from vfork and calling _exit() or one of
the exec() family.
Note: vfork is sometimes referred to as special case of clone.
Following is a C programming example showing how vfork() works:
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    printf("Before vfork\n");
    vfork();
    /* the child returns from main( ) after vfork( ), which is undefined
       behavior; a real program must call _exit( ) or an exec( ) instead */
    printf("after vfork\n");
    return 0;
}
Output:
Before vfork
after vfork
after vfork
Aborted
Note: As explained earlier, the behavior of the vfork system call is often not predictable when
its restrictions are violated. In the above case the program printed “Before vfork” once and
“after vfork” twice, and then aborted because the child returned from main( ) instead of
calling _exit( ). It is better to use the fork system call and to avoid vfork as much as possible.
3.2 Deadlocks
A set of processes is in a Deadlock state when every process in the set is waiting for an event
that can be caused only by another process in the set. The events with which we are mainly
concerned here are resource acquisition and resource release.
3.2.1 System Model: A system consists of a finite number of resources to be distributed among
a number of competing processes.
Resources are categorized into two types: Physical resources and Logical resources.
● Physical resources: Printers, tape drives, DVD drives, memory space and CPU cycles.
● Logical resources: Semaphores, mutex locks and files.
Each resource type consists of some number of identical instances. (i.e.) If a system has two
CPUs, then the resource type CPU has two instances.
A process may utilize a resource in the following sequence under the normal mode of operation:
● Request: The process requests the resource. If the resource is being used by another
process, the request cannot be granted immediately and the requesting process must
wait until it can acquire the resource.
● Use: The process can operate on the resource.
Example: If the resource is a printer, the process can print on the printer.
● Release: The process releases the resource.
System calls for requesting and releasing resources:
● Device system calls: request( ) and release( )
● Semaphore system calls: wait( ) and signal( )
● Mutex locks: acquire( ) and release( )
● Memory system calls: allocate( ) and free( )
● File system calls: open( ) and close( )
A System Table maintains the status of each resource: whether the resource is free or allocated.
For each resource that is allocated, the table also records the process to which it is allocated. If a
process requests a resource that is currently allocated to another process, it can be added to a
queue of processes waiting for this resource.
Consider the above graph, with processes and resources and the edges:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Process P2 is waiting for the resource R3, which is held by process P3.
Process P3 is waiting for either process P1 or process P2 to release resource R2.
Process P1 is waiting for process P2 to release resource R1.
Hence the processes P1, P2 and P3 are deadlocked.
2. Resource-Allocation-Graph Algorithm
In this algorithm we use three edges: request edge, assignment edge and a claim edge.
Claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the
future.
A claim edge resembles a request edge in direction but is represented by a dashed line.
When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge.
When a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim
edge Pi → Rj.
The resources must be claimed a priori in the system. That is, before process Pi starts
executing, all its claim edges must already appear in the resource-allocation graph.
Now suppose that process Pi requests resource Rj.
● The request can be granted only if converting the request edge Pi → Rj to an
assignment edge Rj → Pi does not result in the formation of a cycle in the
resource-allocation graph. We check for safety by using a cycle-detection algorithm.
● If no cycle exists, then the allocation of the resource will leave the system in a
safe state.
● If a cycle is found, then the allocation will put the system in an unsafe state. In
that case, process Pi will have to wait for its requests to be satisfied.
Example: Consider the above resource-allocation graph. Suppose that P2 requests R2.
● Although R2 is currently free, we cannot allocate it to P2, since this would create a
cycle in the graph.
● A cycle indicates that the system is in an unsafe state.
● If P1 requests R2 and P2 requests R1, then a deadlock will occur.
Problem: The resource-allocation-graph algorithm is not applicable to a resource-allocation
system with multiple instances of each resource type.
3. Banker's Algorithm is used in a system with multiple instances of each resource type.
The name was chosen because the algorithm could be used in a banking system to ensure that
the bank never allocated its available cash in such a way that it could no longer satisfy the needs
of all its customers.
Banker’s algorithm uses two algorithms:
1. Safety algorithm
2. Resource-Request algorithm
Banker’s algorithm:
● When a new process enters the system, the process must declare the maximum
number of instances of each resource type that it may need.
● The maximum number may not exceed the total number of resources in the system.
● When a user requests a set of resources, the system must determine whether the
allocation of these resources will leave the system in a safe state.
● If the system is in a safe state, then the resources are allocated.
● If the system is in an unsafe state, then the process must wait until some other process
releases enough resources.
Data structures used to implement the Banker’s algorithm
Consider a system with n processes and m resource types:
● Available: A vector of length m indicates the number of available resources of each type.
● Max: An n × m matrix defines the maximum demand of each process.
● Allocation: An n × m matrix defines the number of resources of each type currently
allocated to each process.
● Need: An n × m matrix indicates the remaining resource need of each process, where
Need[i][j] = Max[i][j] − Allocation[i][j].
● Available[j] = k means that k instances of resource type Rj are available.
● Max[i][j] = k means that process Pi may request at most k instances of resource type Rj.
● Allocation[i][j] = k means that process Pi is currently allocated k instances of resource
type Rj.
● Need[i][j] = k means that process Pi may need k more instances of resource type Rj to
complete its task.
Each row of the matrices Allocation and Need is treated as a vector, referred to as
Allocationi and Needi.
● The vector Allocationi specifies the resources currently allocated to process Pi.
● The vector Needi specifies the additional resources that process Pi may still request to
complete its task.
Safety algorithm
Safety algorithm finds out whether the system is in safe state or not. The algorithm can be
described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both
Finish[i] == false and Needi ≤ Work.
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Note: To determine whether a state is safe, this algorithm requires on the order of
m × n² operations.
Resource-Request Algorithm
This algorithm determines whether requests can be safely granted.
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi
wants k instances of resource type Rj.
When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti ≤ Needi, go to step 2.
Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3.
Otherwise, Pi must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available − Requesti
Allocationi = Allocationi + Requesti
Needi = Needi − Requesti
4. If the resulting resource-allocation state is safe, the transaction is completed and
process Pi is allocated its resources.
If the new state is unsafe, then Pi must wait for Requesti and the old resource-allocation
state is restored.
Example for Banker’s Algorithm
Consider a system with 5 processes P0 through P4 and 3 resource types A, B and C with
10, 5 and 7 instances respectively (i.e. resource type A = 10, B = 5 and C = 7 instances).
Suppose that, at time T0, the following snapshot of the system has been taken:

          Allocation   Max       Available
Process   A  B  C      A  B  C   A  B  C
P0        0  1  0      7  5  3   3  3  2
P1        2  0  0      3  2  2
P2        3  0  2      9  0  2
P3        2  1  1      2  2  2
P4        0  0  2      4  3  3

The Available vector can be calculated by subtracting the sum of the resources allocated to
the processes from the total number of resources:
Available resources of A = Total resources of A − Sum of resources of A allocated to
processes P0 to P4 = 10 − 7 = 3.
The Need matrix can be obtained by using Need[i][j] = Max[i][j] − Allocation[i][j]:

          Max       Allocation   Need
          A  B  C   A  B  C      A  B  C
P0        7  5  3   0  1  0      7  4  3
P1        3  2  2   2  0  0      1  2  2
P2        9  0  2   3  0  2      6  0  0
P3        2  2  2   2  1  1      0  1  1
P4        4  3  3   0  0  2      4  3  1
By using the banker’s algorithm we can decide whether the state is safe or not.
Solving the above problem with the banker’s algorithm shows that the system is in a safe
state, with safe sequence <P1, P3, P4, P0, P2>, as the sketch below confirms.
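A sketch of the safety algorithm in C, initialized with the Allocation, Need and Available data copied from the tables above:
#include <stdio.h>

#define N 5   /* processes P0..P4 */
#define M 3   /* resource types A, B, C */

int main(void)
{
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    int work[M]     = {3,3,2};               /* Work = Available */
    int finish[N]   = {0};
    int seq[N], count = 0;

    while (count < N) {
        int found = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                        /* Needi <= Work: Pi can finish */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];  /* Work = Work + Allocationi */
                finish[i] = 1;
                seq[count++] = i;
                found = 1;
            }
        }
        if (!found) { printf("unsafe state\n"); return 1; }
    }
    printf("safe sequence: ");
    for (int i = 0; i < N; i++) printf("P%d ", seq[i]);
    printf("\n");                            /* prints: P1 P3 P4 P0 P2 */
    return 0;
}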
3.3.3 Deadlock Detection: If a system does not employ either a deadlock-prevention or a
deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the
system may provide:
● An algorithm that examines the state of the system to determine whether a deadlock has
occurred.
● An algorithm to recover from the deadlock.
Deadlock Detection in Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock-detection algorithm
that uses a variant of the resource-allocation graph called a wait-for graph.
We obtain the wait-for graph from the resource-allocation graph by removing the resource
nodes and collapsing the appropriate edges.
● An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj
to release a resource that Pi needs.
● An edge Pi → Pj exists in a wait-for graph if and only if the corresponding resource-
allocation graph contains two edges Pi → Rq and Rq → Pj for some resource Rq.
Example: Consider a system with 5 processes P0 through P4 and 3 resource types A, B and C
with 7, 2 and 6 instances respectively (i.e. resource type A = 7, B = 2 and C = 6 instances).
Suppose that, at time T0, we have the following resource-allocation state:

          Allocation   Request   Available
Process   A  B  C      A  B  C   A  B  C
P0        0  1  0      0  0  0   0  0  0
P1        2  0  0      2  0  2
P2        3  0  3      0  0  0
P3        2  1  1      1  0  0
P4        0  0  2      0  0  2

Initially the system is not in a deadlock state. If we apply the deadlock-detection algorithm,
we will find that the sequence <P0, P2, P3, P1, P4> results in Finish[i] == true for all i. The
system is in a safe state; hence there is no deadlock.