ppt notes for osy chapter 3, Process Management. Uploaded by lillyforsuree.

3. Process Management
3.0. Introduction
1. A running instance of a program is called a process. A process is the smallest unit of work that is scheduled by the OS.
2. A process needs resources, such as CPU time, memory, files and I/O devices, to accomplish its task. These resources are allocated either when the process is created or while it is executing.
3. A user uses processes to achieve execution of programs in a sequential or concurrent manner, as desired.
4. Process management is a fundamental task of any modern OS. The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes, and enable synchronization among processes.
5. To meet these requirements, the OS must maintain a data structure for each process, which describes the state and resource ownership of that process and which enables the OS to exert control over it.
3.1. Process
1. A process is defined as "an entity which represents the basic unit of work to be implemented in the system".
2. A process is also defined as "a program under execution, which competes for CPU time and other resources".
3. A process is a program in execution. A process is also called a job, task or unit of work.
4. The execution of a process must progress in a sequential fashion. That is, at any time, at most one instruction is executed on behalf of the process.
5. A process is an instance of an executing program, including the current values of the program counter, registers and variables.
6. Logically, each process has its own separate virtual CPU. In reality, the real CPU switches from one process to another.
7. A process is an activity; it has a program, input, output and a state.
• Each process has the following sections (Fig: Process in Memory):

1. A text section that contains the program code.
2. A data section that contains global and static variables.
3. The heap section, used for dynamic memory allocation and managed via calls to new, delete, malloc, free, etc.
4. The stack section, used for local variables. The process stack contains temporary data (such as subroutine parameters, return addresses and temporary variables). Space on the stack is reserved for local variables when they are declared, and that space is freed when the variables go out of scope.
5. A program counter, together with the contents of the processor's registers, which record the current activity of the process.
--As the program executes, the process changes state. The state of a process is defined by its current activity.
--Process execution is an alternating sequence of CPU and I/O bursts, beginning and ending with a CPU burst. Thus each process may be in one of several states, namely new, active, waiting, or halted.
--Multiprogramming systems explicitly allow multiple processes to exist at any given time, where only one is using the CPU at any given moment, while the remaining processes are performing I/O or are waiting.
--Process management includes creating, running and terminating processes, and assigning different processes to different devices.
--Process scheduling is also a part of process management, where the sequence and priorities of the processes may differ.
Difference Between Program and Process
Sr.No | Program | Process
1 | A program is a series of instructions to perform a particular task. | A process is a program in execution.
2 | A program may be given as a set of processes: in some cases we divide a problem into a number of parts and write separate logic for each part as a process. | A process is a part of a program; it is the part where the logic of that particular program executes.
3 | A program is stored in secondary storage. | A process is stored in main memory.
4 | A program is a set of instructions to be executed by the processor. | A process is a program in execution.
5 | A program is a static entity, as it is made up of program statements. | A process is a dynamic entity.
6 | A program occupies a fixed place in storage or main memory. | A process changes its state during execution.
3.1.1. Process Model
• In the process model, the OS is organized into a number of sequential processes. A process is just an executing program, including the current values of the program counter, registers and variables.
• Each process has its own virtual CPU. In reality, of course, the real CPU switches back and forth from process to process, so it is more accurate to think of a collection of processes running in pseudo-parallel. This rapid switching back and forth is called multiprogramming.
• In fig (A), a computer multiprograms four programs in memory.
• In fig (B), we see how this is abstracted into four processes, each with its own flow of control, and each running independently of the others.
• In fig (C), we see that, viewed over a long enough time interval, all the processes have made progress, but at any given instant only one process is actually running.
• With the CPU switching back and forth among the processes, the rate at which a process performs its computation will not be uniform, and probably not even reproducible if the same processes are run again.
• Some scheduling algorithm is used to determine when to stop work on one process and service a different one.
Fig: (A) Four programs A, B, C, D multiprogrammed in memory; (B) the conceptual model of four independent sequential processes, each with its own program counter; (C) over time all processes make progress, but only one runs at any instant.
3.1.2. Process State
1. In a multiprogramming system, many processes are executed by the OS, but at any instant of time only one process executes on the CPU; the other processes wait for their turn.
2. The current activity of a process is known as its state. As a process executes, it changes state. The process state is an indicator of the nature of the current activity in a process.
3. The following figure shows the process state diagram. A state diagram represents the different states in which a process can be at different times, along with the transitions from one state to another that are possible in the OS.
• Each process may be in one of the following states.
1. New State:
1. A process that has just been created but has not yet been admitted to the pool of executable processes by the OS.
2. Every new process requested from the system is known as a new-born process.
2. Ready State
1. When a process is ready to execute but is waiting for the CPU, it is said to be in the ready state.
2. After the completion of its input and output, the process moves to the ready state, meaning it waits for the processor to execute it.
3. Running State
1. The process that is currently being executed.
2. When the process is running on the CPU, i.e. when its program is being executed by the CPU, it is said to be in the running state.
3. While running, a process may also produce output on the screen.
4. Waiting or Blocked State
1. A process that cannot execute until some event occurs, such as an I/O completion.
2. When a process is waiting for some input or output operation, it is in the waiting state.
3. In this state the process is not under execution; when the awaited event occurs (for example, the user provides the input), the process moves back to the ready state.
5. Terminated State
1. After the completion of the process, it is terminated; this is called the terminated state of the process.
2. After executing the whole process, the processor also deallocates the memory which was allocated to the process.
 A process is in exactly one of the following states during execution.
State      | Description
New        | Process is being created.
Running    | CPU is executing the process's instructions.
Waiting    | Process is waiting for an event, typically I/O or a signal.
Ready      | Process is waiting for a processor.
Terminated | Process is done running.


3.1.3. Process Control
• To manage processes and resources, the OS must have information about the current status of each process and resource. For this, the OS constructs and maintains tables of information about each entity that it manages.
• Four different types of tables are maintained by the OS:
1. Memory Tables:
Used to keep track of both main and secondary memory. Some of main memory is reserved for use by the OS; the remainder is available for use by processes.
2. I/O Tables:
Used by the OS to manage the I/O devices and channels of the computer system. At any given time, an I/O device may be available or assigned to a particular process.
3. File Tables:
Provide information about the existence of files, their location on secondary memory, their current status and other attributes.
4. Process Tables:
Used to manage processes. A process must include a program or set of programs to be executed.
3.1.4. Process Control Block (PCB)
• Each process is represented in the OS by a Process Control Block, also called a Task Control Block (TCB).
• The OS groups all information that it needs about a particular process into a data structure called a PCB, or process descriptor.
• When a process is created, the OS creates a corresponding PCB, which is released when the process terminates.
• A PCB stores descriptive information pertaining to a process, such as its state, program counter, memory-management information, scheduling information, allocated resources, accounting information, etc., that is required to control and manage the process.
• The basic purpose of the PCB is to record the progress of a process so far. The figure shows a typical PCB.
• The PCB is a data structure with fields for recording the various aspects of process execution and the usage of resources.
Process Control Block
• In simple words, the OS maintains the information about each process in a record or data structure called the Process Control Block (PCB).

Fig: Process Control Block
  Pointer | Process State
  Process Number
  Program Counter
  CPU Registers
  Memory Allocation
  Event Information
  List of Open Files
• The figure shows the following sections of the PCB:
1. Process Number:
1. Each process is identified by its process number, called the Process Identification Number (PID).
2. Every process has a unique process-id through which it is identified.
3. The process-id is provided by the OS. The process-ids of two processes can never be the same, because a process-id is always unique.
2. Priority:
1. Each process is assigned a certain level of priority that corresponds to the relative importance of the event that it services. Process priority is the preference of one process over another for execution.
2. Priority may be given by the user/system manager, or it may be assigned internally by the OS. This field stores the priority of the particular process.
3. Process State:
1. This information describes the current state of the process.
2. The state may be new, ready, running, waiting, halted and so on.
4. Program Counter:
1. The counter indicates the address of the next instruction to be executed for this process.
5. CPU Registers:
1. The registers vary in number and type, depending on the computer architecture.
2. They include accumulators, index registers, stack pointers and general-purpose registers, plus any condition-code information.
3. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
6. CPU Scheduling Information:
1. This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
7. Memory Management Information:
1. This may include information such as the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the OS.
8. Accounting Information:
1. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
9. I/O Status Information:
1. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
10. File Management:
1. This includes information about all open files, access rights, etc.
11. Pointer:
1. The pointer points to another process control block. The pointer is used for maintaining the scheduling list.

--The PCB serves as the repository for any information that may vary from process to process.
3.1.5. Operations on Processes
• 1. There are various operations or tasks that can be performed on processes, such as creating, terminating, suspending or resuming a process.
• 2. To successfully execute these operations/tasks, the OS provides run-time services for process management.
• 1. Process Creation
• 1. When a new process is to be added to those currently being managed, the OS builds the data structures that are used to manage the process and allocates address space in main memory to the process. This is the creation of a new process.
• 2. There are four principal events that cause processes to be created:
• I) System Initialization:
When an OS is booted, typically several processes are created.
• II) Execution of a process-creation system call by a running process:
1. Often a running process will issue system calls to create one or more new processes to help it do its job.
Operations on Processes
• III) A user request to create a new process:
1. In interactive systems, users can start a program by typing a command or clicking an icon.
• IV) Initiation of a batch job:
• 1. Users can submit batch jobs to the system. When the OS decides that it has the resources to run another job, it creates a new process and runs the next job from the input queue in it.
• --The system call CreateProcess() in Windows and fork() in UNIX tells the OS to create a new process.
• When a process creates a new process, two possibilities exist in terms of execution:
• 1. The parent continues to execute concurrently with its children.
• 2. The parent waits until some or all of its children have terminated.
• When a process creates a new process, there are two possibilities in terms of the address space of the new process:
• 1. The child process has the same program and data as the parent.
• 2. The child process has a new program loaded into it.
• 2. Process Termination
• 1. Depending on the condition, a process may be terminated either normally or forcibly by some other process.
• 2. Normal termination occurs when the process completes its task and invokes an appropriate system call, ExitProcess() in Windows or exit() in UNIX, that tells the OS that it is finished.
• 3. A process may cause abnormal termination of another process. For this, the process invokes an appropriate system call, TerminateProcess() in Windows or kill() in UNIX, that tells the OS to kill the other process.
• Generally, the parent process invokes such a system call to terminate its child process. This usually happens for one of the following three reasons:
• 1. Cascading termination, in which the termination of a process causes the termination of all its children. On some OSs, a child process is not allowed to execute when its parent is being terminated; in such cases, the OS initiates cascading termination.
• 2. The task that was being performed by the child process is no longer required.
• 3. The child process has used more resources than it was permitted.
3.2. Process Scheduling
Fig: Process scheduling flow — add a process to the ready queue; choose a process from the ready queue; allocate the processor to the selected process; if it needs I/O, add it to a device queue; when the job is completed, it exits.
3.2. Process Scheduling
1. Many programs may be present on the computer at a time, but there is a single CPU, so to run all the programs concurrently (or apparently simultaneously) we use scheduling.
2. In scheduling, each process gets some amount of CPU time. Scheduling provides CPU time to each process.
3. When two or more processes compete for the CPU at the same time, a choice has to be made as to which process to allocate the CPU next.
4. This procedure of determining the next process to be executed on the CPU is called process scheduling, and the module of the OS that makes this decision is called the scheduler.
5. Processor scheduling is one of the primary functions of a multiprogramming OS. The process of scheduling can be explained with the help of the figure.
6. The process scheduler is the component of the OS that is responsible for deciding whether the currently running process should continue running and, if not, which process should run next.
7. The allocation strategy for a job/process to a processor is called process scheduling.
3.2.1. Scheduling Queues
1. In a uniprocessor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.
2. The processes which are ready and waiting to execute are kept on a list called the ready queue.
3. The list is generally a linked list. A ready-queue header contains pointers to the first and last PCBs in the list. Each PCB has a pointer field which points to the next process in the ready queue.
4. There are also other queues in the system. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request.
5. The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
Fig: The Ready Queue and Various I/O Device Queues — each queue header (ready queue, magnetic tape units 0 and 1, disk unit 0, terminal unit 0) holds head and tail pointers to a linked list of PCBs (e.g. PCB7 and PCB2 on the ready queue; PCB3, PCB14 and PCB6 on one device queue; PCB5 on another).


6. Eventually the process is served by the I/O device and returns to the ready queue. A process continues this CPU/I/O cycle until it finishes, and then it exits from the system.
7. The figure shows the queueing diagram representing process scheduling. Each rectangular box represents a queue.
8. Two types of queues are present, namely the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

Fig: Queueing diagram of process scheduling — the ready queue feeds the CPU; each I/O device serves its own I/O queue.


3.2.2. Schedulers
1. Schedulers are special system software which handle process scheduling in various ways.
2. Their main task is to select the jobs to be submitted to the system and to decide which process to run.

Fig: Schedulers — batch jobs enter a batch queue and interactive programs enter directly; the long-term scheduler admits them to the ready queue; the short-term scheduler dispatches from the ready queue to the CPU until exit; the medium-term scheduler moves processes between the ready queue and the suspended / swapped-out queues.
Types of Schedulers
• Long-Term Scheduler
• It is also called the job scheduler. The long-term scheduler determines which jobs are admitted to the system for processing.
• In a batch system, more jobs are submitted than can be executed immediately. These jobs are spooled to a mass-storage device, where they are kept for later execution (the job pool).
• The long-term scheduler selects jobs from this job pool and loads them into memory for execution; hence it is called the job scheduler or admission scheduler.
• It selects processes from the queue and loads them into memory for execution. A process is loaded into memory for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming.
• If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
• On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems have no long-term scheduler.
• When a process changes state from new to ready, the long-term scheduler is used.
Types of Schedulers
• Medium-Term Scheduler
• On some systems, the long-term scheduler may be absent or minimal. For example, time-sharing systems such as UNIX and Microsoft Windows often have no long-term scheduler but simply put every new process in memory for the short-term scheduler.
• Some OSs, such as time-sharing systems, may introduce an additional, intermediate level of scheduling (the medium-term scheduler).
• It is part of swapping, so it is also known as the swapper. Swapping clears processes out of memory.
• The degree of multiprogramming is thereby reduced. The swapped-out processes are handled by the medium-term scheduler.

Fig: Medium-term scheduler — partially executed, swapped-out processes are swapped back into the ready queue; the ready queue feeds the CPU until the process ends; I/O goes through an I/O waiting queue.
Types of Schedulers
• Short-Term Scheduler
• The short-term scheduler selects one of the processes from the ready queue and schedules it for execution.
• A scheduling algorithm is used to decide which process will be scheduled for execution next.
• The short-term scheduler executes much more frequently than the long-term scheduler, as a process may execute for only a few milliseconds.
• The choices of the short-term scheduler are very important.
• If it selects a process with a long burst time, then all the processes after it will have to wait for a long time in the ready queue.
• This is known as starvation, and it may happen if a wrong decision is made by the short-term scheduler.
• The short-term scheduler is invoked very frequently (milliseconds), so it must be fast. The long-term scheduler is invoked very infrequently (seconds or minutes), so it may be slow. The long-term scheduler controls the degree of multiprogramming.
• Processes can be described as:
• 1. I/O-bound: the process spends more time doing I/O than computation; it has many short CPU bursts.
• 2. CPU-bound: the process spends more time doing computation; it has a few very long CPU bursts.
Fig: Representation of process scheduling — processes enter the ready queue (via the long-term scheduler); the short-term scheduler dispatches them to the CPU; a running process may exit, be swapped out, return to the ready queue when its time quantum elapses, move to an I/O queue on an I/O request, wait for child-process termination, or wait for an interrupt.

Difference Between Short-Term, Long-Term and Medium-Term Schedulers
Sr.No | Short-Term Scheduler | Long-Term Scheduler | Medium-Term Scheduler
1 | Selects, from the ready queue, a process that is ready to execute, and allocates the CPU to it. | Picks up jobs from the job pool and loads them into main memory for execution. | Removes a process from main memory and reloads it later when required.
2 | It is the CPU scheduler. | It is the job scheduler. | It is the process-swapping scheduler.
3 | Frequency of execution is high (milliseconds). | Frequency of execution is low (minutes). | Execution frequency is medium.
4 | Speed is very fast. | Speed is slower than the short-term scheduler. | Invoked whenever required.
5 | It deals with the CPU. | It deals with main memory, for loading processes. | It deals with main memory, for removing processes and reloading them when required.
6 | It provides less control over the degree of multiprogramming. | It controls the degree of multiprogramming. | It reduces the degree of multiprogramming.
7 | Minimal in time-sharing systems. | Absent or minimal in time-sharing systems. | Time-sharing systems use a medium-term scheduler.
Context Switching
• A context switch is the mechanism that stores and restores the state, or context, of the CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time.
• Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch.
• Using this technique, a context switcher enables multiple processes to share a single CPU.
• Context switching is an essential feature of a multitasking operating system.
• When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block.
• After this, the state of the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
• Context-switch times are highly dependent on hardware support. The speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions.
• Typically, the speed ranges from 1 to 1000 microseconds.
• Some hardware systems employ two or more sets of processor registers to reduce the context-switching time. When a process is switched, the following information is stored for later use:
• 1. Program counter
• 2. Scheduling information
• 3. Base and limit register values
• 4. Currently used registers
• 5. Changed state
• 6. I/O state information
• 7. Accounting information
3.3. Interprocess Communication
• 1. The OS allows co-operating processes to communicate with each other via an IPC facility.
• 2. There are applications that involve executing multiple processes concurrently. The processes work together to perform application-specific tasks; they are referred to as co-operating processes.
• 3. Co-operating processes are "loosely" connected, in the sense that they have independent private address spaces and run at different speeds. The relative speeds of the processes are not normally known.
• 4. From time to time, they interact among themselves by exchanging information. An exchange of information among processes is called inter-process communication.
• 5. When multiple processes run on a system concurrently and more than one process requires the CPU at the same time, it becomes essential to select one process to which the CPU can be allocated. To serve this purpose, scheduling is required.

Fig: A Typical IPC Facility — senders place information in shared data structures, from which receivers pick it up.
3.3.1. Introduction to IPC
1. Processes executing concurrently in the operating system may be either independent processes or co-operating processes.
2. Interprocess communication is a set of programming interfaces that allow a programmer to co-ordinate activities among different processes that can run concurrently in the operating system.
3. This allows a program to handle many user requests at the same time.
4. Since even a single user request may result in multiple processes running in the OS on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible.
5. Interprocess communication is a set of techniques for the exchange of data among multiple processes. IPC techniques are divided into methods for synchronization, message passing, shared memory, etc.
6. It is a capability, supported by some OSs, that allows one process to communicate with another process. The processes can be running on the same computer or on different computers connected through a network.
7. IPC enables one application to control another application, and enables several applications to share the same data without interfering with one another.
8. IPC is required in all multiprocessing systems, but it is not generally supported by single-process operating systems such as DOS.
Purpose of IPC:
1. Data transfer, sharing data, event notification, resource sharing, synchronization and process control.
2. Interprocess communication is one of the most common concepts in OS and distributed computing systems. It deals with how multiple processes can communicate with each other.
3. Two fundamental models allow interprocess communication, as explained below:
1. Shared-memory model:
Two processes exchange data or information through a shared region; they can read and write data from and to this region.
2. Message-passing model:
In the message-passing model, the data or information is exchanged in the form of messages.
Fig: Communication models — (a) message passing: processes A and B exchange messages M1, M2 through the kernel; (b) shared memory: a shared-memory region is mapped into both A's and B's address spaces.
3.3.2. Shared Memory System
• 1. IPC through shared memory requires a region of memory shared among the communicating processes. Processes can exchange information by reading and writing data to the shared region.
• 2. Typically, the shared-memory region resides in the address space of the process creating the shared-memory segment.
• 3. Other processes that wish to communicate using this shared-memory segment must attach it to their address space.
• 4. Normally, the OS does not allow one process to access the memory region of another process.
• 5. Shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas.
• 6. The processes are also responsible for ensuring that they are not writing to the same location simultaneously. Consider the producer-consumer problem, in which a producer process produces information that is consumed by a consumer process.
• 7. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer.
• 8. This buffer resides in a region of memory that is shared by the producer and consumer processes. The producer can produce one item while the consumer is consuming another item.
• Two types of buffer can be used:
• 1. Unbounded buffer:
• 1. The unbounded buffer places no practical limit on the size of the buffer.
• 2. The consumer may have to wait for new items, but the producer can always produce new items.
• 2. Bounded buffer:
• 1. The bounded buffer has a fixed size. In this case the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
• Shared memory allows maximum speed and convenience of communication. Shared memory is faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
• In contrast, in shared-memory systems, system calls are required only to establish the shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
• The figure shows IPC communication through address-space sharing.
Fig: IPC through address-space sharing — a shared-memory area in P1's and P2's address spaces is mapped to the same main-memory locations.

3.3.3. Message Passing System
1. Message passing provides a mechanism that allows processes to communicate and to synchronize their actions without sharing the same address space. This is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network.
2. For example, a chat program used on the World Wide Web could be designed so that chat participants communicate with one another by exchanging messages.
3. The function of a message system is to allow processes to communicate with one another without the need to resort to shared data.
4. In this scheme, services are provided as ordinary user processes; that is, the services operate outside of the kernel. Communication among the user processes is accomplished through the passing of messages.
5. An IPC facility provides at least two operations:
1. send(message) 2. receive(message)
6. If processes A and B want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them. This link can be implemented both physically and logically.
7. There are several methods for logically implementing a link and the send/receive operations:
1. Direct or indirect communication.
2. Symmetric or asymmetric communication.
3. Send by copy or send by reference.
4. Fixed-sized or variable-sized messages.
Naming
• Processes that want to communicate must have a way to refer to each other. They
can use either direct or indirect communication.
• 1. Direct Communication:
• 1. With direct communication, each process that wants to communicate must
explicitly name the recipient or sender of the communication.
• i) send(A, message): Send a message to process A.
• ii) receive(B, message): Receive a message from process B.
• Direct communication has the following properties:
• 1. A link is established automatically between every pair of processes that want to
communicate; the processes need to know only each other's identity to
communicate.
• 2. A link is associated with exactly two processes.
• 3. Exactly one link exists between each pair of processes.
Fig: An example of direct message passing: process i executes send(j, m) while process j executes receive(i, m).
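A minimal sketch of symmetric direct communication: the link between each ordered pair of named processes is modeled as a FIFO queue, and both send and receive explicitly name the communication partner. The process names here are hypothetical.

```python
import queue

links = {}   # one FIFO link per ordered (sender, receiver) pair

def send(sender, receiver, message):
    links.setdefault((sender, receiver), queue.Queue()).put(message)

def receive(receiver, sender):
    # blocking receive: waits until `sender` has sent something to `receiver`
    return links.setdefault((sender, receiver), queue.Queue()).get()

send("i", "j", "m")            # process i executes send(j, m)
print(receive("j", "i"))       # process j executes receive(i, m) -> m
```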
2. Indirect Communication:
1. With indirect communication, messages are sent to and received from mailboxes
or ports.
2. A mailbox is an object into which messages can be placed by processes and from which
messages can be removed.
3. Each mailbox has a unique identification. Two processes can communicate only if
they share a mailbox. The send and receive primitives are defined as follows:
i) send(A, message): Send a message to mailbox A.
ii) receive(A, message): Receive a message from mailbox A.
4. In this scheme, a communication link has the following properties:
1. A link is established between a pair of processes only if both members of the
pair have a shared mailbox.
2. A link may be associated with more than two processes.
3. Between each pair of communicating processes, there may be a number of different
links, with each link corresponding to one mailbox.
5. Suppose processes P1, P2 and P3 all share a mailbox A. Process P1 sends a message to A,
while P2 and P3 each execute a receive from A. A mailbox owned by the operating system is
independent and is not attached to any particular process.
6. The operating system must provide a mechanism that allows a process to do
the following:
1. Create a new mailbox.
2. Send and receive messages through the mailbox.
3. Delete a mailbox.
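The three mailbox operations can be sketched as follows; the mailbox is modeled as a named FIFO queue that any sharing process may use. The API names are illustrative, not a real OS interface.

```python
import queue

mailboxes = {}                        # mailbox name -> shared FIFO queue

def create_mailbox(name):
    mailboxes[name] = queue.Queue()

def send(mailbox, message):
    mailboxes[mailbox].put(message)   # e.g. P1 places a message in the mailbox

def receive(mailbox):
    return mailboxes[mailbox].get()   # e.g. P2 or P3 removes a message

def delete_mailbox(name):
    del mailboxes[name]

create_mailbox("A")
send("A", "ping")
print(receive("A"))                   # ping
delete_mailbox("A")
print("A" in mailboxes)               # False
```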
Synchronization
1. Communication between processes takes place through calls to the send and receive
primitives. There are different design options for implementing each primitive.
2. Message passing may be blocking or non-blocking, also known as synchronous
and asynchronous.
3. 1. Blocking send: The sending process is blocked until the message is received
by the receiving process or by the mailbox.
2. Non-blocking send: The sending process sends the message and resumes
operation.
3. Blocking receive: The receiver blocks until a message is available.
4. Non-blocking receive: The receiver retrieves either a valid message or a null.
4. Different combinations of send() and receive() are possible. When both the send
and the receive are blocking, we have a rendezvous between the sender and the receiver.
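The blocking and non-blocking receive variants map naturally onto Python's queue primitives; in this sketch a non-blocking receive returns None (the "null" of point 4) when no message is available.

```python
import queue

mbox = queue.Queue()

def receive_nonblocking():
    # non-blocking receive: a valid message, or None if the mailbox is empty
    try:
        return mbox.get_nowait()
    except queue.Empty:
        return None

print(receive_nonblocking())      # None: nothing has been sent yet

mbox.put("m")                     # non-blocking send: sender resumes immediately
print(mbox.get())                 # blocking receive: waits until a message arrives -> m
```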
Buffering
1. Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Such a queue can be
implemented in three ways:
1. Zero Capacity: The queue has a maximum length of zero; thus the link cannot have any
messages waiting in it. In this case the sender must block until the recipient receives
the message.
2. Bounded Capacity: The queue has a finite length n; thus at most n messages
can reside in it. If the queue is not full when a new message is sent, the message is
placed in the queue and the sender can continue execution without waiting. The
link has a finite capacity, however: if the link is full, the sender must block until
space is available in the queue.
3. Unbounded Capacity: The queue has a potentially infinite length; thus any number
of messages can wait in it. The sender never blocks.
2. The messages sent by a process are temporarily stored in a temporary queue (also
called a buffer) by the operating system before delivering them to the recipient.
3. The zero-capacity case is sometimes referred to as a message system with no
buffering; the other cases are referred to as automatic buffering.
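Bounded and unbounded capacity can be sketched with Python queues, where maxsize fixes the finite length n and an unrestricted queue never blocks the sender. (Zero capacity, a pure rendezvous, has no direct equivalent in this sketch.)

```python
import queue
import threading

bounded = queue.Queue(maxsize=2)   # bounded capacity: at most n = 2 messages

bounded.put("m1")
bounded.put("m2")                  # the queue is now full

def sender():
    bounded.put("m3")              # sender blocks until space is available

t = threading.Thread(target=sender)
t.start()
print(bounded.get())               # receiving m1 frees a slot and unblocks the sender
t.join()

unbounded = queue.Queue()          # unbounded capacity: the sender never blocks
for i in range(1000):
    unbounded.put(i)
print(unbounded.qsize())           # 1000
```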
• Critical Section Problem:
1. The key to preventing trouble involving shared storage is to find some way to prohibit
more than one process from reading and writing the shared data simultaneously. The
part of the program where the shared memory is accessed is called the critical
section.
2. To avoid race conditions and inconsistent results, one must identify the code in the critical
section of each process.
3. Consider a system consisting of n processes (P0, P1, …, Pn-1). Each process has a
segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on.
4. The important feature of the system is that when one process is executing in its
critical section, no other process is allowed to execute in its critical section. That is,
no two processes are executing in their critical sections at the same time.
5. The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section; the
section of code implementing this request is the entry section.
do
{
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (true);
Fig: General structure of a typical process Pi.
The critical section can be defined by the following three sections:
1. Entry section: Each process must request permission to enter its critical section;
processes enter the critical section through the entry section.
2. Exit section: Each critical section is terminated by the exit section.
3. Remainder section: The code after the exit section is the remainder section.
The shared data is called a critical resource, and the portion of the program that
uses the critical resource is called a critical section. A solution to the critical-section
problem must satisfy the following three requirements:
1. Mutual Exclusion:
1. If process Pi is executing in its critical section, then no other process can be
executing in its critical section. So mutual exclusion can be defined as "a way of
making sure that if one process is using shared modifiable data, the other processes
will be excluded from doing the same thing".
2. Four conditions are needed for a good solution to mutual exclusion:
1. No two processes may be inside their critical sections at the same moment.
2. No assumptions are made about the relative speeds of processes or the number of CPUs.
3. No process outside its critical section should block other processes.
4. No process should wait arbitrarily long to enter its critical section.
• 2. Progress
1. If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder
sections can participate in deciding which will enter its critical section next, and
this selection cannot be postponed indefinitely.
2. The progress criterion prevents an infinite loop executed by one process in
non-critical-section code from stopping other processes from entering a critical region.
But it does not prevent an infinite loop inside a critical region from making the system
hang.
3. Bounded Waiting
1. There exists a bound, or limit, on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
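The general structure above can be made concrete with a sketch in which four threads increment a shared counter and a mutex lock serves as the entry and exit sections. Without the lock, concurrent updates could be lost to a race.

```python
import threading

lock = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        lock.acquire()          # entry section: request permission
        counter += 1            # critical section: update shared data
        lock.release()          # exit section
        # remainder section: non-shared work would go here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # 400000: mutual exclusion keeps every update
```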
Critical Section Solutions
1. There are two processes, P0 and P1. We refer to one process as Pi and the other
process as Pj.
Algorithm 1:
1. In the first algorithm, the processes share a common integer variable 'turn',
initialized to 0 or 1. If turn == i, then process Pi is allowed to execute in its critical section.
2. This algorithm ensures that only one process at a time can be in its critical section, but it
does not satisfy the progress condition. The structure of process Pi is shown in the
figure below:
do
{
    while (turn != i)
        ; /* busy wait */
    /* critical section */
    turn = j;
    /* remainder section */
} while (1);
Fig A.
Algorithm 2:
1. The problem with Algorithm 1 is that it does not retain sufficient information about the
state of each process: it remembers only which process is allowed to enter its critical
section.
2. To remove this problem, we can replace the variable turn with the Boolean array
flag[2]. The elements of the array are initialized to false. If flag[i] is true, this value
indicates that Pi is ready to enter its critical section.
3. This algorithm ensures the mutual exclusion condition but does not satisfy the progress
condition. The structure of process Pi is shown in the figure below.
4. Process Pi sets flag[i] to true, signaling that it is ready to enter its critical
section. Then Pi checks to verify that Pj is not also ready to enter its critical section. If Pj
is ready, then Pi waits until Pj has indicated that it no longer needs to be in the
critical section, i.e. until flag[j] is false.
5. At this point Pi enters the critical section. On exiting the critical section, Pi
sets flag[i] to false, allowing the other process to enter its critical section.
do
{
    flag[i] = true;
    while (flag[j])
        ; /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
Fig B.
• Algorithm 3:
1. In this algorithm we combine the ideas of both algorithms above so that all critical-section
requirements are met. The processes share two variables: Boolean flag[2] and int turn.
2. Initially flag[0] = flag[1] = false, and the value of turn is 0 or 1. This algorithm ensures
all critical-section requirements. The structure of process Pi is shown in the figure below.
3. Process Pi can enter the critical section only if flag[j] is false or turn is equal to i.
do
{
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
Fig C.
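Algorithm 3 is Peterson's solution. The sketch below runs it with two Python threads protecting a shared counter. It relies on CPython's effectively sequentially consistent thread execution; on real hardware, compiler and CPU reordering would require memory barriers, so treat this as a demonstration rather than production synchronization.

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # shorter GIL switches so the busy-wait demo runs fast

flag = [False, False]   # flag[i]: process i is ready to enter its critical section
turn = 0
counter = 0
N = 500

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # entry section: declare interest...
        turn = j                       # ...and give the other process priority
        while flag[j] and turn == j:
            pass                       # busy wait
        counter += 1                   # critical section
        flag[i] = False                # exit section
        # remainder section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                         # 1000: no update to the shared counter is lost
```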
• Concept of Semaphores:
• 1. In designing an operating system, it is required to allow concurrent execution of a
collection of cooperating sequential processes.
• 2. Two processes can cooperate by means of signals, so that a process
can be held at a specific point until it receives a specific signal. For this purpose, a
special tool called a semaphore is used.
• Specifically, semaphores are used to:
• 1. Control access to a shared resource (mutual exclusion).
• 2. Signal the occurrence of an event.
• 3. Allow two tasks to synchronize their activities.
• 3. A semaphore is basically a key that your code acquires in order to continue execution. If
the semaphore is already in use, the requesting task is suspended until the
semaphore is released by its current owner.
• 4. In other words, the requesting task says: "Give me the key. If you don't have it,
I'm willing to wait for it."
• Types of Semaphores
• 1. Binary Semaphore:
1. There are two kinds of semaphore: binary and counting. As the name implies, a binary
semaphore can take only two values, namely zero or one.
2. They are usually implemented so that attempting to lock a semaphore whose value
is zero simply blocks until the value is 1; the lock then succeeds and sets it back to zero.
• 2. Counting Semaphore:
[Figure: entry (wait S), then the critical section, then exit (signal S).]
1. A counting semaphore, however, allows values between 0 and 255, 65,535 or
4,294,967,295, depending on whether it is implemented using 8, 16 or 32 bits
respectively.
2. The size depends on the kernel used. In practice, 32-bit semaphores are pretty
rare. Along with the semaphore's value, you need to keep track of any tasks that are
waiting for it.
3. A semaphore S is an integer variable that, apart from initialization, is accessed only through
two standard atomic operations: wait and signal.
4. These operations were originally termed P (for wait) and V (for signal).
5. The classical definitions of wait and signal are:
wait(S)
{
    while (S <= 0)
        ; /* busy wait */
    S = S - 1;
}
signal(S)
{
    S = S + 1;
}
6. The wait and signal operations on the semaphore's integer value must be executed
indivisibly. That is, when one process modifies the semaphore value, no other process
can simultaneously modify that same semaphore value.
7. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its
possible modification (S = S - 1) must also be executed without interruption. Semaphores
are not provided by hardware.
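Python's threading module provides semaphores directly: acquire() corresponds to wait(S) and release() to signal(S). A sketch with a counting semaphore initialized to 2, limiting how many threads use a resource at once (the worker and timing values are arbitrary):

```python
import threading
import time

sem = threading.Semaphore(2)     # counting semaphore, S initialized to 2
lock = threading.Lock()          # protects the bookkeeping counters below
active = 0                       # threads currently holding the resource
peak = 0                         # highest value `active` ever reached

def worker():
    global active, peak
    sem.acquire()                # wait(S): block while S == 0, then S = S - 1
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)             # hold the shared resource briefly
    with lock:
        active -= 1
    sem.release()                # signal(S): S = S + 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                      # never exceeds 2: at most two holders at a time
```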
3.4.Threads
1. A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a register set and a stack.
2. A thread is defined "as a unit of concurrency within a process that has access to the entire code
and data parts of the process". Thus threads of the same process can share their code and data
with one another.
3. A thread shares with other threads belonging to the same process its code section, data section and
other operating-system resources, such as open files and signals.
4. A traditional (or heavyweight) process has a single thread of control. If a process has multiple
threads of control, it can do more than one task at a time. Many software packages that run on desktop
PCs are multithreaded.
5. A word processor may have a thread for displaying graphics, another thread for reading key
strokes from the user, and a third thread for performing spelling and grammar checking in the
background. Threads also play a vital role in Remote Procedure Call (RPC) systems.
6. RPC, as the name implies, is a communication mechanism that allows a process to call
a procedure on a remote system connected via a network. The calling process (client) can call the
procedure on the remote host (server) in the same way as it would call a local procedure.
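The word-processor example can be sketched as one process whose threads work concurrently on the same shared data. The document text and the two tasks below are invented for illustration.

```python
import threading

document = ["teh quick brown fox"]   # shared data: every thread sees the same memory
results = {}

def spell_check():
    # one thread scans the shared document for a known misspelling
    results["typos"] = document[0].count("teh")

def word_count():
    # another thread counts the words in the same shared data
    results["words"] = len(document[0].split())

threads = [threading.Thread(target=spell_check),
           threading.Thread(target=word_count)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results["typos"], results["words"])   # 1 4
```

No copying or message passing is needed: both threads read the document directly because they share the process's address space.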
• Multithreaded Programming:
• 1. A thread is the fundamental unit of CPU utilization. A thread is a path of program
execution that uses the resources of a process.
• 2. A traditional or heavyweight process comprises a single thread of control, i.e. it can
execute one task at a time, and thus is referred to as a single-threaded process.
• 3. If a process has multiple threads of control, it can perform more than one task
at a time; such a process is known as a multithreaded process.
[Figure: A. Single-threaded process: one set of registers and one stack, with the process's code, data and files. B. Multithreaded process: several threads, each with its own registers and stack, sharing the process's code, data and files.]
Advantages of Threads:
1. Threads improve the performance (throughput, computational speed,
responsiveness, or some combination) of a program.
2. Concurrent operations can be achieved using threads within a process.
3. Multiple threads are useful in a multiprocessor system, where threads run
concurrently on separate processors.
4. Multiple threads also improve program performance on single-processor systems by
permitting the overlap of input/output and other slow operations with
computational operations.
5. Threads minimize context-switching time.
6. A process with multiple threads makes a great server, for example a printer server.
7. Because threads can share common data, they do not need to use interprocess
communication.
8. Context switches are fast when working with threads.
Sr.No | Parameter | Process | Thread
1 | Weight | It is a heavyweight process. | It is a lightweight process.
2 | Definition/Meaning | An executing instance of a program is a process. | A thread is a subset of the process.
3 | Data | It has its own copy of the data segment of the parent process. | It has direct access to the data segment of its process.
4 | Communication | Processes must use interprocess communication to communicate with sibling processes. | Threads can communicate directly with other threads of their process.
5 | Overheads | Processes have considerable overhead. | Threads have almost no overhead.
6 | Creation | New processes require duplication of the parent process. | New threads are easily created.
7 | Control | Processes can exercise control only over child processes. | Threads can exercise considerable control over threads of the same process.
8 | Changes | Any change in the parent process does not affect child processes. | Any change in the main thread may affect the behaviour of the other threads of the process.
9 | Memory | Run in separate memory spaces. | Run in shared memory spaces.
10 | File descriptors | File descriptors are not shared. | File descriptors are shared.
11 | File system | There is no sharing of the file-system context. | It shares the file-system context.
12 | Signals | It does not share signal handling. | It shares signal handling.
13 | Controlled by | A process is controlled by the OS. | Threads are controlled by the programmer in a program.
14 | Dependence | Processes are independent. | Threads are dependent.
3.4.1.Benefits:
• 1. A thread is a single sequential stream within a process. Because threads have some of the
properties of processes, they are sometimes called lightweight processes.
• 1.Responsiveness:
• Multithreading an interactive application may allow a program to continue running even if
part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to
the user. A multithreaded web browser could still allow user interaction in one thread while
an image is being loaded in another thread.
• 2.Resource Sharing:
• By default, threads share the memory and the resources of the process to which they
belong. The benefit of code sharing is that it allows an application to have several different
threads of activity all within the same address space.
• 3.Economy:
• Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more economical to create
and context-switch threads.
• 4.Utilization of Multiprocessor Architecture:
• The benefits of multithreading can be greatly increased in a multiprocessor architecture,
where each thread may run in parallel on a different processor.
3.4.2.User and Kernel Threads
• Practically, threads can be implemented at two different levels, namely user level and kernel
level.
• A library which provides programmers with an application programming interface (API) for
thread creation and management is referred to as a thread library.
• The thread library maps the user threads to the kernel threads. A thread library can be
implemented either in user space or in kernel space.
3.4.2.1.User Threads

[Figure: user-level threads are managed by a thread library in user space; the kernel space below maps work onto the CPU.]
Fig: User-Level Threads
• 1. Threads implemented at the user level are known as user threads. For user-level threads,
management is done by the application; the kernel is not aware of the existence of threads.
• 2. User threads are supported above the kernel and are implemented by a thread library at
the user level. The library provides support for thread creation, scheduling and management
with no support from the kernel.
• 3. Because the kernel is unaware of user-level threads, all thread creation and scheduling are
done in user space without the need for kernel intervention. Therefore user-level threads are
generally fast to create and manage.
• 4. User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.
• Advantages of User-Level Threads
• 1. User-level threads can run on any operating system.
• 2. A user thread does not require modification of the operating system.
• 3. A user-thread library is easy to port.
• 4. Low-cost thread operations are possible.
• Disadvantages of User-Level Threads
• 1. Multithreaded applications cannot take advantage of multiprocessing.
• 2. At most one user-level thread can be in operation at one time, which limits the degree of
parallelism.
3.4.2.2.Kernel Threads

[Figure: kernel-level threads are created and managed in kernel space, below the user space, and scheduled onto the CPU.]
• Fig: Kernel-Level Threads
• 1. Threads implemented at the kernel level are known as kernel threads; they are supported
directly by the operating system.
• 2. The kernel performs thread creation, scheduling and management in kernel space. Because
thread management is done by the operating system, kernel threads are generally slower
to create and manage than user threads.
• 3. However, since the kernel is managing the threads, if a thread performs a blocking system
call, the kernel can schedule another thread in the application for execution. Also, in a
multiprocessor environment, the kernel can schedule threads on different processors.
• 4. Most contemporary operating systems, including Windows NT, Windows 2000, BeOS and
Tru64 UNIX, support kernel threads.
• Advantages of Kernel-Level Threads:
• 1. The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• 2. If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• 3. Kernel routines themselves can be multithreaded.
• Disadvantages of Kernel-Level Threads:
• 1. Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times
slower than those of user-level threads.
• 2. Transfer of control from one thread to another within the same process requires a mode switch to the
kernel.
• Difference Between User-Level and Kernel-Level Threads:

Sr.No | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implemented by a thread library at the user level. | The operating system supports kernel threads directly.
3 | User-level threads can run on any operating system. | Kernel-level threads are specific to the operating system.
4 | Multithreaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
3.4.3.Multithreading Models
• Many systems provide support for both user and kernel threads, resulting in different multithreading
models. Three types of multithreading models are implemented.
• 3.4.3.1.One-to-One Model

[Figure: One-to-One Thread Model: each user thread is mapped to its own kernel thread (K).]

• 1. The one-to-one model maps each user thread to a kernel thread.
• 2. It provides more concurrency than the many-to-one model by allowing another thread to run when a
thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors.
• 3. Windows NT, Windows 2000 and OS/2 implement the one-to-one model.
• Advantages
1. Multiple threads can run in parallel.
2. Less complication in the processing.
• Disadvantages
1. Thread creation involves lightweight-process creation.
2. A kernel thread is created with every user thread.
3. The total number of threads is limited.
4. Each kernel thread is an overhead.
5. It can reduce the performance of the system.
3.4.3.2.Many-to-One Model
1. The many-to-one model maps many user threads to one kernel thread. Thread management is
done in user space, so it is efficient, but the entire process will block if a thread makes a blocking
system call.
2. Only one thread can access the kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.
3. Green threads, a thread library available for Solaris 2, uses this model.

[Figure: Many-to-One Model: many user threads are mapped to a single kernel thread (K).]
Fig: Many-to-One Model

Advantages
1. Totally portable.
2. Easy to do with few system dependencies.
3. Mainly used in language systems and portable libraries.
4. Efficient system in terms of performance.
Disadvantages
1. Cannot take advantage of parallelism.
2. One blocking call blocks all user threads.
3.4.3.3.Many-to-Many Model

[Figure: Many-to-Many Model: many user threads are multiplexed onto a smaller or equal number of kernel threads (K).]

1. The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads.
2. The number of kernel threads may be specific to either a particular application or a particular
machine.
3. Whereas the many-to-one model allows the developer to create as many user threads as desired,
true concurrency is not gained there because the kernel can schedule only one thread at a time.
4. The one-to-one model allows greater concurrency, but the developer has to be careful not to
create too many threads within an application. The many-to-many model suffers from neither shortcoming.
5. Solaris 2, IRIX, HP-UX and Tru64 UNIX support this model.
• Advantages
• 1. As many threads as the user requires can be created.
• 2. A smaller or equal number of kernel threads can be created as needed.
• Disadvantages
• 1. Multiple kernel threads are an overhead for the operating system.
• 2. Coordination between the thread library and the kernel makes the model complex to implement.