Unit 2

Process:

A process is a fundamental concept in operating systems. It represents an independent, self-contained unit of execution in a computer system. A process consists of a program in execution along with its associated resources, such as memory, CPU registers, open files, and other system resources. Processes are managed by the operating system to provide multitasking and concurrency, allowing multiple processes to share a single CPU.

Process State:
A process can be in several different states during its lifecycle. These
states represent the various stages a process goes through as it
executes in the system. The common process states include:

New: The process is being created or initialized.
Ready: The process is prepared to execute but is waiting for the CPU to be allocated.
Running: The process is currently being executed by the CPU.
Blocked (or Waiting): The process is unable to continue execution,
typically due to waiting for some event (like I/O completion) to occur.
Terminated (or Exit): The process has finished its execution and is
being terminated.
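The lifecycle above can be sketched as a small state-transition table. The state names follow the list; the legal transitions are the usual textbook ones, and the trigger comments are illustrative assumptions:

```python
# Minimal sketch of the process lifecycle as a state-transition table.
VALID_TRANSITIONS = {
    "new":        {"ready"},                           # admitted by the OS
    "ready":      {"running"},                         # dispatched to the CPU
    "running":    {"ready", "blocked", "terminated"},  # preempted / waits for I/O / exits
    "blocked":    {"ready"},                           # awaited event (e.g. I/O) completes
    "terminated": set(),                               # no further transitions
}

def transition(state, new_state):
    """Return the new state, or raise if the transition is illegal."""
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

For example, a process that blocks on I/O must go back through the ready state before running again; it can never jump from blocked directly to running.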

Process Control Block (PCB):

A Process Control Block (PCB), also known as a Task Control Block (TCB), is a data structure maintained by the operating system for each process. It contains important information about the process, allowing the operating system to manage and control it effectively. The PCB typically includes the following data items:
Process State
This specifies the current state of the process, i.e. new, ready, running, waiting, or terminated.
Process Number
This is the unique identifier (process ID) of the process.
Program Counter
This contains the address of the next instruction to be executed in the process.
Registers
This specifies the registers used by the process. They may include accumulators, index registers, stack pointers, general-purpose registers, etc.
List of Open Files
These are the different files that are associated with the process.
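The fields above can be collected into a toy PCB. Real PCBs are kernel structures (for example, `task_struct` in Linux); the field names here are illustrative:

```python
from dataclasses import dataclass, field

# A toy Process Control Block holding the fields listed above.
@dataclass
class PCB:
    pid: int                      # process number (unique identifier)
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # e.g. accumulator, stack pointer
    open_files: list = field(default_factory=list)  # files associated with the process
```

The operating system keeps one such record per process and updates it on every state change and context switch.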

Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: Here the resource (the CPU) cannot be taken away from a process until the process completes execution. A switch occurs only when the running process terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. The process may switch from the running state to the ready state, or from the waiting state to the ready state, because the OS can preempt the running process and give the CPU to a higher-priority process.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process
Scheduling Queues. The OS maintains a separate queue for each of the
process states and PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state
queue.
The operating system maintains the following important process scheduling queues:
Job queue: This queue keeps all the processes in the system.
Ready queue: This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
Device queues: The processes which are blocked due to unavailability of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues; the run queue can have only one entry per processor core on the system. In the above diagram, it has been merged with the CPU.
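The queue mechanics described above can be sketched with ordinary FIFO queues, using PIDs as stand-ins for PCBs; the function names are illustrative. Changing a process's state just relinks its entry from one queue to another:

```python
from collections import deque

# Sketch of the scheduling queues described above.
queues = {"job": deque(), "ready": deque(), "device": deque()}

def admit(pid):
    """A new process enters the system: recorded in the job queue, placed on ready."""
    queues["job"].append(pid)
    queues["ready"].append(pid)

def block_on_io(pid):
    """A running process requests I/O: its entry moves to a device queue."""
    queues["device"].append(pid)

def io_complete(pid):
    """I/O done: the entry is unlinked from the device queue, back to ready."""
    queues["device"].remove(pid)
    queues["ready"].append(pid)
```

Dispatching is then simply popping the head of the ready queue and handing that process the CPU.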
Two-State Process Model
The two-state process model refers to the running and not-running states, which are described below:

S.N. State & Description
1 Running
The process that is currently being executed by the CPU is in the running state.
2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
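The dispatcher behaviour just described can be sketched as a short loop. Each entry is a hypothetical `(pid, remaining_ticks)` pair; the tick counts and quantum are made-up values for illustration:

```python
from collections import deque

# Two-state model sketch: one queue of not-running processes and a
# dispatcher that repeatedly selects the next one.
def dispatch_all(not_running, quantum=1):
    order = []                                   # record the order in which PIDs run
    while not_running:
        pid, remaining = not_running.popleft()   # dispatcher selects a process
        order.append(pid)
        remaining -= quantum                     # "run" it for one quantum
        if remaining > 0:
            not_running.append((pid, remaining)) # interrupted: back to the queue
        # else: completed or aborted -> discarded, not re-queued
    return order
```

Running two jobs needing 2 and 1 ticks respectively yields the interleaving 1, 2, 1: process 1 is interrupted once and re-queued before finishing.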

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution; the process is loaded into memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into use when a process changes state from new to ready.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is fastest among the other two. | Speed is in between both short- and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce the process into memory, and execution can be continued.
Context Switching
Context switching is the mechanism of storing and restoring the state (context) of a CPU in the Process Control Block so that a process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When a process is switched, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
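The save/restore cycle can be sketched in a few lines. Here the "CPU" is a dict of registers and each process's saved context lives in its PCB (also a dict); the register names are illustrative:

```python
# Sketch of a context switch: save the CPU state into the outgoing
# process's PCB, then load the incoming process's saved state.
cpu = {"pc": 0, "sp": 0, "acc": 0}

def context_switch(old_pcb, new_pcb):
    """Store the CPU registers in old_pcb, then restore new_pcb's context."""
    old_pcb["context"] = dict(cpu)    # save state into the outgoing PCB
    cpu.update(new_pcb["context"])    # restore the next process's state
```

Because the outgoing process's program counter is preserved in its PCB, it can later resume from exactly the instruction where it was interrupted.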
What are the operations on a process?
A process is a program in execution; it is more than the program code, which is called the text section. This concept applies to all operating systems, because every task performed by the operating system needs a process to carry it out.
A process changes state as it executes. The state of a process is defined by the current activity of the process.
Each process may be in any one of the following states:
New: The process is being created.
Running: In this state the instructions are being executed.
Waiting: The process is in a waiting state until an event occurs, like completion of an I/O operation or receipt of a signal.
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
It is important to know that only one process can be running on any processor at any instant, while many processes may be ready and waiting.
Operations on Process
The two main operations performed on a process are as follows.
Process Creation
There are four principal events that cause processes to be created:
System initialization
Numerous processes are created when an operating system is booted. Some of them are:
Foreground processes: Processes that interact with users and perform work for them.
Background processes: Also called daemons; they are not associated with particular users, but instead have some specific function.
Execution of a process-creation system call by a running process
A running process may issue system calls to create one or more new processes to help it do its job.
A user request to create a new process
A new process is created with the help of an existing process executing a process-creation system call.
In UNIX, the system call used to create a new process is fork().
In Windows, it is CreateProcess(), which has 10 parameters and handles both process creation and loading the correct program into the new process.
Initiation of a batch job
Users submit batch jobs to the system. The operating system creates a new process and runs the next job from the input queue in it.
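A minimal sketch of UNIX process creation with fork(), here via Python's `os` module (POSIX-only; it will not run on Windows). fork() returns 0 in the child and the child's PID in the parent, so the same code path splits into two processes:

```python
import os

# Sketch: create a child with fork(), have it exit with a known status,
# and collect that status in the parent with waitpid().
def run_in_child():
    pid = os.fork()
    if pid == 0:                      # this branch runs in the child
        os._exit(7)                   # exit with a recognizable status code
    _, status = os.waitpid(pid, 0)    # parent waits for the child to finish
    return os.WEXITSTATUS(status)     # extract the child's exit code
```

In a real program the child would typically call an exec-family function right after fork() to load a different program, which is what CreateProcess() does in a single step on Windows.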
Process Termination
A process is terminated by a call to kill in UNIX or TerminateProcess in Windows.
A process may be terminated for the following reasons:
Normal exit: Most processes terminate when they have completed their work and execute a system call to exit.
Error exit: The process terminates after discovering an error in its work, for example being asked to operate on a file that does not exist.
Fatal error: Termination occurs due to a program bug, such as executing an illegal instruction, referencing invalid memory, or dividing by zero.
Killed by another process: A process executes a system call telling the OS to kill some other process.
What is Interprocess Communication?
Interprocess communication is the mechanism provided by the operating system
that allows processes to communicate with each other. This communication could
involve a process letting another process know that some event has occurred or the
transferring of data from one process to another.
A diagram that illustrates interprocess communication is as follows

Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control mechanism or handled by the communicating processes. Some of the methods to provide synchronization are as follows:

Semaphore
A semaphore is a variable that controls the access to a common resource by
multiple processes. The two types of semaphores are binary semaphores and
counting semaphores.
Mutual Exclusion
Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is useful for synchronization and also prevents race conditions.
Barrier
A barrier does not allow individual processes to proceed until all the processes
reach it. Many parallel languages and collective routines impose barriers.
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop
while checking if the lock is available or not. This is known as busy waiting
because the process is not doing any useful operation even though it is active.
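A short sketch of the first two mechanisms above, using threads as lightweight stand-ins for processes (the same idea applies to `multiprocessing.Semaphore` for real IPC): a counting semaphore limits how many workers may hold the resource at once, and a lock provides mutual exclusion on the shared counters.

```python
import threading

sem = threading.Semaphore(2)       # counting semaphore: at most 2 holders at once
active = 0                         # how many workers are inside right now
peak = 0                           # highest concurrency observed
counter_lock = threading.Lock()    # mutual exclusion for the shared counters

def worker():
    global active, peak
    with sem:                      # wait (P) on entry, signal (V) on exit
        with counter_lock:
            active += 1
            peak = max(peak, active)
        # ... work on the shared resource would happen here ...
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the eight workers interleave, the semaphore guarantees that no more than two are ever inside the guarded section simultaneously.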
Approaches to Interprocess Communication
The different approaches to implementing interprocess communication are as follows:

Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to
create a two-way data channel between two processes. This uses standard
input and output methods. Pipes are used in all POSIX systems as well as
Windows operating systems.
Socket
The socket is the endpoint for sending or receiving data in a network. This is
true for data sent between processes on the same computer or data sent
between different computers on the same network. Most of the operating
systems use sockets for interprocess communication.
File
A file is a data record that may be stored on a disk or acquired on demand by
a file server. Multiple processes can access a file as required. All operating
systems use files for data storage.
Signal
Signals are useful in interprocess communication in a limited way. They are
system messages that are sent from one process to another. Normally, signals
are not used to transfer data but are used for remote commands between
processes.
Shared Memory
Shared memory is the memory that can be simultaneously accessed by
multiple processes. This is done so that the processes can communicate with
each other. All POSIX systems, as well as Windows operating systems use
shared memory.
Message Queue
Multiple processes can read and write data to the message queue without
being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
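The pipe approach above can be sketched with `os.pipe()`, which returns a read end and a write end. For brevity both ends stay in one process here; across a fork(), the parent and child would each close one end and keep the other, giving a unidirectional channel between them:

```python
import os

# Sketch of a unidirectional pipe round-trip within a single process.
def pipe_roundtrip(message: bytes) -> bytes:
    r, w = os.pipe()               # read end, write end
    os.write(w, message)           # producer writes into the pipe
    os.close(w)                    # closing the write end signals EOF to the reader
    data = os.read(r, 1024)        # consumer reads from the other end
    os.close(r)
    return data
```

Two such pipes, one per direction, give the two-way channel mentioned above.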
A diagram that demonstrates message queue and shared memory methods of
interprocess communication is as follows -
Interprocess communication (IPC) is a mechanism which allows the exchange of data between processes. It enables resource and data sharing between the processes without interference.
Processes that execute concurrently in the operating system may be either independent processes or cooperating processes.
An independent process is not affected by the other processes executing in the system. Any process that does not share data with any other process is independent.
A cooperating process, on the other hand, can be affected by the other processes executing in the system. Any process that shares data with another process is a cooperating process.
Given below is the diagram of inter process communication
Reasons for Process Cooperation
There are several reasons for allowing process cooperation, which are as follows:
Information sharing: Several users may be interested in the same piece of information. We must provide an environment that allows concurrent access to such information.
Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements.
Modularity: A system can be constructed in a modular fashion, dividing the system functions into separate processes or threads.
Convenience: An individual user may work on many tasks at the same time. For example, a user may be editing, compiling, and printing in parallel.
Cooperating processes require an IPC mechanism that allows them to exchange data and information.
IPC Models
There are two fundamental models of IPC which are as follows
Shared memory
A region of memory that is shared by cooperating processes is established.
Processes can then exchange information by reading and writing data to the shared
region.
Message passing
Communication takes place by means of messages exchanged between the cooperating processes. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is easier to implement than shared memory, but since it is implemented using system calls, it requires more time-consuming intervention of the kernel.
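Both models can be sketched with Python's `multiprocessing` module (the function names here are illustrative): `Value` gives a shared memory region that the parent and child both read and write directly, while `Queue` passes messages copied between the processes through the kernel.

```python
import multiprocessing as mp

def _child(shared, mailbox):
    with shared.get_lock():
        shared.value += 1          # shared-memory model: write into the shared region
    mailbox.put("done")            # message-passing model: send a message to the parent

# Demonstrate the two IPC models side by side.
def demo_models():
    shared = mp.Value("i", 0)      # a shared integer, initially 0
    mailbox = mp.Queue()           # a message queue between the processes

    p = mp.Process(target=_child, args=(shared, mailbox))
    p.start()
    msg = mailbox.get()            # blocks until the child's message arrives
    p.join()
    return shared.value, msg
```

Note the asymmetry the text describes: the shared-memory write needs explicit locking to avoid conflicts, whereas the queue handles synchronization internally at the cost of kernel involvement per message.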

Difference between Process and Thread

Both process and thread are related to each other and quite similar, as both are independent sequences of execution. The basic difference is that processes execute in separate memory spaces, whereas the threads of a process execute in a shared memory space.
Read through this section to find out how a process is different from a thread in the context of operating systems. Let's start with some basics of threads and processes.
What is a Process?
A process is an active program, i.e., a program that is under execution. It is more
than the program code, as it includes the program counter, process stack, registers,
program code etc. Compared to this, the program code is only the text section.
When a computer program is triggered to execute, it does not run directly; the system first sets up the steps required for its execution, and this unit of execution is referred to as a process. An individual process takes its own memory space and does not share this space with other processes.
Processes can be classified into two types, namely child processes and parent processes. A child process (also called a clone process) is one which is created by another process, while the process responsible for creating other processes to perform multiple tasks at a time is called the parent process.
What is a Thread?
A thread is a lightweight process that can be managed independently by a
scheduler. It improves the application performance using parallelism. A thread
shares information like data segment, code segment, files etc. with its peer threads
while it contains its own registers, stack, counter etc.
A thread is basically a subpart of a larger process. Within a process, all the threads are interrelated. A typical thread shares information such as the data segment and code segment with its peer threads during execution.
The most important feature of threads is that they share memory, data, resources,
etc. with their peer threads within a process to which they belong. Also, all the
threads within a process are required to be synchronized to avoid unexpected
results.
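The two points above, shared memory and the resulting need for synchronization, can be shown in a few lines: two threads of one process update the same counter, so they must hold a lock around each update to avoid losing increments in a race.

```python
import threading

counter = 0                 # shared by all threads of this process
lock = threading.Lock()     # synchronization to avoid a race condition

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # without this, concurrent updates could be lost
            counter += 1

t1 = threading.Thread(target=increment, args=(10_000,))
t2 = threading.Thread(target=increment, args=(10_000,))
t1.start(); t2.start()
t1.join(); t2.join()
```

Two separate processes running the same code would each see their own private counter; threads see one, which is exactly the trade-off the table below summarizes.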
Difference between Process and Thread
The following table highlights the major differences between a process and a thread:

Comparison Basis | Process | Thread
Definition | A process is a program under execution, i.e. an active program. | A thread is a lightweight process that can be managed independently by a scheduler.
Context switching time | Processes require more time for context switching as they are heavier. | Threads require less time for context switching as they are lighter than processes.
Memory sharing | Processes are totally independent and don't share memory. | A thread may share some memory with its peer threads.
Communication | Communication between processes requires more time than between threads. | Communication between threads requires less time than between processes.
Blocking | If a process gets blocked, the remaining processes can continue execution. | If a user-level thread gets blocked, all of its peer threads also get blocked.
Resource consumption | Processes require more resources than threads. | Threads generally need fewer resources than processes.
Dependency | Individual processes are independent of each other. | Threads are parts of a process and so are dependent.
Data and code sharing | Processes have independent data and code segments. | A thread shares the data segment, code segment, files, etc. with its peer threads.
Treatment by OS | All processes are treated separately by the operating system. | All user-level peer threads are treated as a single task by the operating system.
Time for creation | Processes require more time for creation. | Threads require less time for creation.
Time for termination | Processes require more time for termination. | Threads require less time for termination.
Conclusion
The most significant difference between a process and a thread is that a process is
defined as a task that is being completed by the computer, whereas a thread is a
lightweight process that can be managed independently by a scheduler.
