Unit 2
Process State:
A process can be in several different states during its lifecycle. These
states represent the various stages a process goes through as it
executes in the system. The common process states include new, ready,
running, waiting (blocked), and terminated.
Process Scheduling
Process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and the
selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be
loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: Here the CPU cannot be taken away from a
process until the process completes execution or voluntarily moves
to a waiting state. A switch occurs only when the running process
terminates or blocks.
2. Preemptive: Here the OS allocates the CPU to a process for a
fixed amount of time. The process may be switched from the
running state to the ready state, or from the waiting state to the
ready state. This switching occurs because the CPU may be given
to a higher-priority process, which replaces the currently running
process.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process
Scheduling Queues. The OS maintains a separate queue for each of the
process states and PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state
queue.
The Operating System maintains the following important process
scheduling queues:
Job queue: This queue keeps all the processes in the system.
Ready queue: This queue keeps a set of all processes residing in
main memory, ready and waiting to execute. A new process is
always put in this queue.
Device queues: The processes which are blocked due to the
unavailability of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round
Robin, Priority, etc.). The OS scheduler determines how to move
processes between the ready queue and the run queue; the run queue
can have only one entry per processor core, and in diagrams it is
often merged with the CPU.
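The queue mechanics described above can be sketched in a few lines: a PCB moves between per-state queues when its process changes state. This is an illustrative sketch in Python, not a real kernel structure; the queue names and the PCB identifier are invented for the example.

```python
from collections import deque

# One queue per process state; a state change unlinks the PCB from its
# current queue and links it into the new state's queue.
queues = {"ready": deque(), "running": deque(), "waiting": deque()}

def change_state(pcb, old, new):
    queues[old].remove(pcb)    # unlink from the current state queue
    queues[new].append(pcb)    # link into the new state queue

queues["ready"].append("pcb-42")        # a new process enters the ready queue
change_state("pcb-42", "ready", "running")
print(list(queues["running"]))          # ['pcb-42']
```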
Two-State Process Model
Two-state process model refers to running and non-running states which
are described below
1 Running
The process that is currently executing on the CPU is in the
running state.
2 Not Running
Processes that are not running are kept in a queue, waiting for their
turn to execute. A newly created process is admitted to this queue
until the dispatcher selects it. Each entry in the queue is a pointer
to a particular process, and the queue is implemented using a linked
list. The dispatcher works as follows: when the running process is
interrupted, it is transferred back to the waiting queue; if it has
completed or aborted, it is discarded. In either case, the dispatcher
then selects a process from the queue to execute.
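The dispatcher loop of the two-state model can be sketched as follows. This is a minimal simulation, assuming a FIFO queue of not-running processes and invented process names; a real dispatcher would also handle interruption and re-queueing.

```python
from collections import deque

# Not-running processes wait in a FIFO queue for their turn on the CPU.
not_running = deque(["P1", "P2", "P3"])
executed = []

def dispatch():
    # Called whenever the CPU becomes free (the running process
    # completed, aborted, or was interrupted).
    if not_running:
        proc = not_running.popleft()   # select the next process
        executed.append(proc)          # "run" it
        return proc
    return None

while dispatch():
    pass
print(executed)   # ['P1', 'P2', 'P3']
```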
Schedulers
Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the jobs to be
submitted into the system and to decide which process to run.
Schedulers are of three types
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines
which programs are admitted to the system for processing. It selects
processes from the queue and loads them into memory for execution.
Process loads into the memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix
of jobs, such as I/O bound and processor bound. It also controls the
degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may not be available or
minimal. Time-sharing operating systems have no long term scheduler.
When a process changes the state from new to ready, then there is use
of long-term scheduler.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase
system performance in accordance with the chosen set of criteria. It
carries out the transition of a process from the ready state to the
running state: the CPU scheduler selects one process from among
those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which
process to execute next. Short-term schedulers are faster than
long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes
from memory and thereby reduces the degree of multiprogramming.
The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A
suspended process cannot make any progress towards completion. In
this condition, to remove the process from memory and make space for
other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be
swapped out or rolled out. Swapping may be necessary to improve the
process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | Its speed is the fastest of the three. | Its speed lies between the long-term and short-term schedulers.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the job pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can re-introduce a swapped-out process into memory so its execution can continue.
Semaphore
A semaphore is a variable that controls the access to a common resource by
multiple processes. The two types of semaphores are binary semaphores and
counting semaphores.
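The behaviour of a counting semaphore can be sketched with Python's `threading.Semaphore`. This is an illustrative example, not tied to any particular OS API: a semaphore initialised to 2 lets at most two threads hold the resource at once (a binary semaphore is the same construct with an initial value of 1). The worker function and counters are invented for the demonstration.

```python
import threading
import time

sem = threading.Semaphore(2)   # counting semaphore: at most 2 holders
current = 0                    # threads currently inside the resource
max_concurrent = 0             # high-water mark observed
lock = threading.Lock()        # protects the two counters above

def worker():
    global current, max_concurrent
    with sem:                  # wait (P) on entry, signal (V) on exit
        with lock:
            current += 1
            max_concurrent = max(max_concurrent, current)
        time.sleep(0.05)       # simulate using the shared resource
        with lock:
            current -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_concurrent)          # never exceeds 2
```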
Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical
section at a time. This is useful for synchronization and also prevents race
conditions.
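Mutual exclusion and the race condition it prevents can be shown with a shared counter. The sketch below uses Python's `threading.Lock` as the mutex; without it, the read-modify-write of `counter += 1` could interleave between threads and lose updates.

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with mutex:        # only one thread in the critical section
            counter += 1   # the read-modify-write is now atomic

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000: no updates were lost
```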
Barrier
A barrier does not allow individual processes to proceed until all the processes
reach it. Many parallel languages and collective routines impose barriers.
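A barrier can be sketched with Python's `threading.Barrier`: no thread passes the barrier until all participants have arrived, so every "before" event is logged before any "after" event. The log and worker function are invented for the illustration.

```python
import threading

log = []
log_lock = threading.Lock()
barrier = threading.Barrier(3)   # releases only when all 3 threads arrive

def worker():
    with log_lock:
        log.append("before")
    barrier.wait()               # block until every thread reaches this point
    with log_lock:
        log.append("after")

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log[:3])   # ['before', 'before', 'before']
```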
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop
while checking if the lock is available or not. This is known as busy waiting
because the process is not doing any useful operation even though it is active.
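Busy waiting can be sketched as follows. Real spinlocks rely on an atomic test-and-set instruction; in this Python illustration, a non-blocking `Lock.acquire` stands in for that primitive, and the spinning loop is the busy wait described above. The `SpinLock` class is invented for the example.

```python
import threading

class SpinLock:
    """Busy-waiting lock: acquire() spins in a loop until the lock is free."""
    def __init__(self):
        # A non-blocking acquire on this inner lock emulates test-and-set.
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass               # busy wait: keep checking, doing no useful work

    def release(self):
        self._flag.release()

spin = SpinLock()
total = 0

def add():
    global total
    for _ in range(1000):
        spin.acquire()
        total += 1             # critical section protected by the spinlock
        spin.release()

threads = [threading.Thread(target=add) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)   # 4000
```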
Approaches to Interprocess Communication
The different approaches to implement interprocess communication are given as
follows
Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to
create a two-way data channel between two processes. This uses standard
input and output methods. Pipes are used in all POSIX systems as well as
Windows operating systems.
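The unidirectional channel can be sketched with `os.pipe`, which returns a read end and a write end. For brevity this example keeps both ends in one process; normally the two ends would be held by different processes (for instance, after a fork).

```python
import os

r, w = os.pipe()                  # one-way channel: data flows from w to r
os.write(w, b"hello from the writer")
os.close(w)                       # closing the write end signals end-of-data
data = os.read(r, 1024)
os.close(r)

print(data.decode())              # hello from the writer
```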
Socket
The socket is the endpoint for sending or receiving data in a network. This is
true for data sent between processes on the same computer or data sent
between different computers on the same network. Most of the operating
systems use sockets for interprocess communication.
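Socket-based exchange can be sketched with `socket.socketpair`, which returns two already-connected endpoints, as if two local processes had opened a connection to each other. The "ping"/"pong" payloads are invented for the example.

```python
import socket

# Two connected endpoints; each side can send to and receive from the other.
a, b = socket.socketpair()

a.sendall(b"ping")        # one side sends a request
msg = b.recv(1024)        # the other side receives it
b.sendall(b"pong")        # and replies
reply = a.recv(1024)

a.close()
b.close()
print(msg, reply)         # b'ping' b'pong'
```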
File
A file is a data record that may be stored on a disk or acquired on demand by
a file server. Multiple processes can access a file as required. All operating
systems use files for data storage.
Signal
Signals are useful in interprocess communication in a limited way. They are
system messages that are sent from one process to another. Normally, signals
are not used to transfer data but are used for remote commands between
processes.
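Signal delivery can be sketched with the `signal` module. This example is POSIX-only (it uses `SIGUSR1` and `os.kill`) and, for brevity, the process signals itself; normally the sender would be another process identified by its PID.

```python
import os
import signal

received = []

def handler(signum, frame):
    # Invoked asynchronously when the signal is delivered.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register the handler
os.kill(os.getpid(), signal.SIGUSR1)     # send the signal (here, to ourselves)

print(received)   # [<Signals.SIGUSR1: ...>]
```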
Shared Memory
Shared memory is the memory that can be simultaneously accessed by
multiple processes. This is done so that the processes can communicate with
each other. All POSIX systems, as well as Windows operating systems use
shared memory.
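Shared memory can be sketched with Python's `multiprocessing.shared_memory` (Python 3.8+). A creator allocates a named block, and a reader attaches to it by name; here the "reader" attachment happens in the same process for brevity, but the name-based attach is exactly what a second process would do.

```python
from multiprocessing import shared_memory

# Create a shared block; other processes can attach to it by name.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Simulate a reader attaching by name (normally in another process).
reader = shared_memory.SharedMemory(name=shm.name)
data = bytes(reader.buf[:5])

reader.close()
shm.close()
shm.unlink()              # free the block once everyone is done with it
print(data)               # b'hello'
```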
Message Queue
Multiple processes can read and write data to the message queue without
being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
Interprocess communication (IPC) is a mechanism which allows the exchange of
data between processes. It enables resource and data sharing between the
processes without interference.
Processes that execute concurrently in the operating system may be either
independent processes or cooperating processes.
An independent process is not affected by the other processes executing in
the system. Any process that does not share data with any other process is
independent.
A cooperating process, on the other hand, can be affected by the other
processes executing in the system. Any process that shares data with another
process is called a cooperating process.
Reasons for Process Cooperation
There are several reasons for allowing process cooperation, which are as follows:
Information sharing: Several users may be interested in the same piece of
information. We must provide an environment that allows concurrent access to
such information.
Computation speedup: If we want a particular task to run faster, we must
break it into subtasks, each of which executes in parallel with the others.
Such a speedup can be achieved only if the computer has multiple processing
elements.
Modularity: A system can be constructed in a modular fashion by dividing the
system functions into separate processes or threads.
Convenience: An individual user may work on many tasks at the same time. For
example, a user may be editing, compiling, and printing in parallel.
Cooperating processes require an IPC mechanism that allows them to exchange
data and information.
IPC Models
There are two fundamental models of IPC which are as follows
Shared memory
A region of memory that is shared by cooperating processes is established.
Processes can then exchange information by reading and writing data to the shared
region.
Message passing
Communication takes place by means of messages exchanged between the
cooperating processes. Message passing is useful for exchanging small amounts
of data, because no conflicts over shared data need be avoided.
Message passing is easier to implement than shared memory, but because it is
typically built on system calls, it involves the more time-consuming
intervention of the kernel and is therefore slower.
Comparison
Basis | Process | Thread
Context switching time | Processes require more time for context switching as they are heavier. | Threads require less time for context switching as they are lighter than processes.
Memory sharing | Processes are totally independent and don't share memory. | A thread may share some memory with its peer threads.
Communication | Communication between processes requires more time than between threads. | Communication between threads requires less time than between processes.
Data and code sharing | Processes have independent data and code segments. | A thread shares the data segment, code segment, files, etc. with its peer threads.
Treatment by OS | All the different processes are treated separately by the operating system. | All user-level peer threads are treated as a single task by the operating system.
Time for creation | Processes require more time for creation. | Threads require less time for creation.
Time for termination | Processes require more time for termination. | Threads require less time for termination.
Conclusion
The most significant difference between a process and a thread is that a process
is an independent program in execution with its own resources, whereas a thread
is a lightweight unit of execution within a process that can be managed
independently by a scheduler.