3.Process Management
3.0.Introduction
1.A running instance of a program is called a process. A process is the smallest
unit of work that is scheduled by the OS.
2.A process needs resources, such as CPU time, memory, files and I/O devices,
to accomplish its task. These resources are allocated either when the process is
created or while it is executing.
3.A user uses processes to achieve execution of programs in a sequential or
concurrent manner, as desired.
4.Process management is a fundamental task of any modern OS. The OS
must allocate resources to processes, enable processes to share and exchange
information, protect the resources of each process from other processes and
enable synchronization among processes.
5.To meet these requirements, the OS must maintain a data structure for each
process, which describes the state and resource ownership of that process and
which enables the OS to exert control over it.
3.1.Process
1.A process is defined as "an entity which represents the basic unit of work to
be implemented in the system".
2.A process is also defined as "a program under execution, which competes for
CPU time and other resources".
3.A process is a program in execution. A process is also called a job, task or
unit of work.
4.The execution of a process must progress in a sequential fashion. That is, at
any time, at most one instruction is executed on behalf of the process.
5.A process is an instance of an executing program, including the current
values of the program counter, registers and variables.
6.Logically, each process has its own separate virtual CPU. In reality, the real CPU
switches from one process to another.
7.A process is an activity and it has a program, input, output and a state.
• Each process has the following sections in memory (Fig: Process in Memory).
• A program occupies a fixed place in storage or main memory, whereas a process changes its state during execution.
3.1.1.Process Model
• In the process model, the OS is organized into a number of sequential processes. A
process is just an executing program, including the current values of the
program counter, registers and variables.
• Each process has its own virtual CPU. In reality, of course, the real CPU
switches back and forth from process to process; thus the system behaves much like a collection of
processes running in parallel. This rapid switching back and forth is called
multiprogramming.
• In fig (A), a computer is multiprogramming four programs in memory.
• In fig (B), we see how this is abstracted into four processes, each with its own flow
of control, and each one running independently of the others.
• In fig (C), we see that, viewed over a long enough time interval, all the processes
have made progress, but at any given instant only one process is actually
running.
• With the CPU switching back and forth among the processes, the rate at which a
process performs its computation will not be uniform, and probably not even
reproducible if the same processes are run again.
• Some scheduling algorithm is used to determine when to stop work on
one process and service a different one.
Fig: (A) Multiprogramming of four programs A, B, C and D, each with its own program counter. (B) Conceptual model of four independent sequential processes. (C) Process progress over time — at any given instant only one process is running.
3.1.2.Process State
1.In a multiprogramming system, many processes are executed by the OS, but at
any instant of time only one process executes on the CPU. The other processes wait
for their turn.
2.The current activity of a process is known as its state. As a process executes, it
changes state. The process state is an indicator of the nature of the current
activity in a process.
3.The following figure shows the process state diagram. A state diagram represents the
different states in which a process can be at different times, along with the
transitions from one state to another that are possible in the OS.
• Each process may be in one of the following states.
1.New State:
1.A process that has just been created but has not yet been admitted to the
pool of executable processes by the OS.
2.Every process that has just been requested from the system is known as a new-born
process.
2.Ready State
1.When a process is ready to execute but is waiting for the CPU,
it is said to be in the ready state.
2.After the completion of its input and output, the process will be in the ready
state; that is, the process waits for the processor to execute it.
3.Running State
1.The process that is currently being executed is in the running state.
2.When the process is running on the CPU, i.e. when the program is being executed
by the CPU, it is said to be in the running state.
3.When a process is running, it may also produce some output on
the screen.
4.Waiting or Blocked State
1.A process that cannot execute until some event occurs, such as an I/O completion, is in the waiting state.
2.When a process is waiting for some input or output operation, it is said to be in the
waiting state.
3.In this state the process is not under execution; instead, it may be moved
out of memory, and when the awaited input arrives it goes back to the
ready state.
5.Terminated State
1.After the completion of the process, the process is
terminated by the OS; this is called the terminated state of the process.
2.After executing the whole process, the OS also deallocates the
memory that was allocated to the process; such a process is called a terminated
process.
A process will be in exactly one of the following states during its execution:

State        Description
New          The process is being created.
Ready        The process is waiting to be assigned to the CPU.
Running      Instructions of the process are being executed on the CPU.
Waiting      The process is waiting for some event (such as an I/O completion) to occur.
Terminated   The process has finished execution.
Process Control Block (PCB):
--Each process is represented in the OS by a Process Control Block (PCB), which typically contains the
Process Number, Program Counter, CPU Registers, Memory Allocation (memory-management information) and Event information (see the sketch below).
--The PCB serves as the repository for any information that may vary from process to
process.
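As an illustration only (not from the text), a minimal sketch of how a PCB might be laid out in C; the field names are hypothetical, and a real OS stores many more details:

/* Hypothetical, simplified PCB layout -- for illustration only. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* process number                   */
    proc_state_t   state;           /* current process state            */
    unsigned long  program_counter; /* saved program counter            */
    unsigned long  registers[16];   /* saved CPU registers              */
    void          *mem_base;        /* memory allocation: base address  */
    unsigned long  mem_limit;       /* memory allocation: limit         */
    int            waiting_event;   /* event information (e.g. I/O id)  */
    struct pcb    *next;            /* link to the next PCB in a queue  */
} pcb_t;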
3.1.5.Operation On Processes
• 1.There are various operations or tasks that can be performed on processes, such as
creating, terminating, suspending or resuming a process.
• 2.To successfully execute these operations/tasks, the OS provides run-time services for
process management.
• 1.Process Creation
• 1.When a new process is to be added to those currently being managed, the OS builds the
data structures that are used to manage the process and allocates address space in
main memory to the process. This is the creation of a new process.
• 2.There are four principal events that cause processes to be created:
• I)System Initialization:
When an OS is booted, typically several processes are created.
• II)Execution of a process-creation system call by a running process:
1.Often a running process will issue system calls to create one or more new processes to
help it do its job.
• III)A user request to create a new process:
1.In interactive systems, users can start a program by typing a command or clicking
an icon.
• IV)Initiation of a batch job:
• 1.Users can submit batch jobs to the system. When the OS decides that it has the resources to run
another job, it creates a new process and runs the next job from the input queue in it.
• --The system call CreateProcess() in Windows and fork() in UNIX tells the OS to
create a new process (see the sketch after the lists below).
• When a process creates a new process, two possibilities exist in terms of
execution:
• 1.The parent continues to execute concurrently with its children.
• 2.The parent waits until some or all of its children have terminated.
• When a process creates a new process, there are two possibilities in terms of the
address space of the new process:
• 1.The child process has the same program and data as the parent.
• 2.The child process has a new program loaded into it.
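A minimal sketch (not from the text) of process creation with fork() on UNIX: the child starts with the same program and data as the parent, may load a new program with exec, and the parent here waits for it to terminate:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                         /* create a new (child) process       */

    if (pid < 0) {                              /* fork failed                        */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                      /* child: same program/data as parent */
        printf("child  pid=%d\n", getpid());
        execlp("ls", "ls", "-l", (char *)NULL); /* optionally load a new program      */
        perror("execlp");                       /* reached only if exec fails         */
        exit(1);
    } else {                                    /* parent: wait for the child to end  */
        wait(NULL);
        printf("parent pid=%d, child %d finished\n", getpid(), pid);
    }
    return 0;
}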
• 2.Process Termination
• A process terminates when it finishes executing its last statement (job completed); the OS then reclaims the resources allocated to it.
3.2.Process Scheduling
1.Many programs may be present on the computer at a time, but there is a single CPU; so
to run all the programs concurrently, or apparently simultaneously, we use
scheduling.
2.Scheduling gives each process some amount of CPU time;
that is, scheduling allocates CPU time to each process.
3.When two or more processes compete for the CPU at the same time, a
choice has to be made as to which process to allocate the CPU next.
4.This procedure of determining the next process to be executed on the CPU is
called process scheduling, and the module of the OS that makes this
decision is called the scheduler.
5.Processor scheduling is one of the primary functions of a multiprogramming
OS. The process of scheduling can be explained with the help of the figure.
6.The process scheduler is the component of the OS that is responsible for
deciding whether the currently running process should continue running and, if
not, which process should run next.
7.The strategy for allocating a job/process to a processor is called process
scheduling.
3.2.1.Scheduling Queues
1.For a uniprocessor system, there will never be more than one running process. If
there are more processes, the rest will have to wait until the CPU is free and they can
be rescheduled.
2.The processes which are ready and waiting to execute are kept on a list called the ready
queue.
3.The list is generally a linked list. A ready-queue header contains pointers to the
first and last PCBs in the list. Each PCB has a pointer field which points to the next
process in the ready queue (see the sketch after this list).
4.There are also other queues in the system. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the occurrence of a
particular event, such as the completion of an I/O request.
5.The process, for example, may have to wait for the disk. The list of processes waiting for a
particular I/O device is called a device queue. Each device has its own device queue.
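As a hedged illustration (not from the text), the ready queue described above can be kept as a linked list of PCBs with head and tail pointers, reusing the hypothetical pcb_t structure sketched in the PCB section:

#include <stddef.h>

/* Hypothetical ready queue built from the pcb_t sketch shown earlier. */
typedef struct {
    pcb_t *head;   /* first PCB in the ready queue */
    pcb_t *tail;   /* last PCB in the ready queue  */
} ready_queue_t;

/* Append a PCB at the tail of the ready queue. */
static void rq_enqueue(ready_queue_t *q, pcb_t *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;            /* queue was empty */
    q->tail = p;
    p->state = READY;
}

/* Remove and return the PCB at the head (NULL if the queue is empty). */
static pcb_t *rq_dequeue(ready_queue_t *q)
{
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
        p->next = NULL;
    }
    return p;
}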
Fig: The ready queue and various device queues (magnetic tape unit 0, magnetic tape unit 1, disk unit 0, terminal unit 0). Each queue header holds head and tail pointers to a linked list of PCBs (e.g. PCB7 and PCB2 on the ready queue; PCB3, PCB14 and PCB6 on a device queue; PCB5 on another). Interactive programs may be moved to a suspended queue managed by the medium-term scheduler.
Types of Schedulers
• Long-Term Scheduler
• It is also called the job scheduler. The long-term scheduler determines which jobs are
admitted to the system for processing.
• In a batch system, more jobs are submitted than can be executed
immediately. These jobs are spooled to a mass-storage device, where they are kept for
later execution (the job pool).
• The long-term scheduler selects jobs from this job pool and loads them into memory for
execution. It is therefore called the job scheduler or admission scheduler.
• It selects processes from the queue and loads them into memory for execution;
the process is loaded into memory for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O-bound and processor-bound. It also controls the degree of
multiprogramming.
• If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the
system.
• On some systems, the long-term scheduler may not be available or may be minimal. Time-
sharing operating systems have no long-term scheduler.
• When a process changes state from new to ready, the long-term scheduler is used.
• Medium-Term Scheduler
• On some systems, the long-term scheduler may be absent or minimal. For example, time-
sharing systems such as UNIX and Microsoft Windows often have no long-term
scheduler but simply put every new process in memory for the short-term scheduler.
• Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of
scheduling (the medium-term scheduler).
• It is part of swapping, so it is also known as the swapper. Swapping removes
processes from memory.
• The degree of multiprogramming is thereby reduced. The swapped-out processes are handled by
the medium-term scheduler.
Fig. MTS: Medium-term scheduling — partially executed, swapped-out processes are swapped back into the ready queue; from the ready queue processes go to the CPU and either end or move to the I/O waiting queue and may be swapped out again.
• Short-Term Scheduler
• The short-term scheduler selects one of the processes from the ready queue and
schedules it for execution.
• A scheduling algorithm is used to decide which process will be scheduled for execution
next.
• The short-term scheduler executes much more frequently than the long-term scheduler,
as a process may execute for only a few milliseconds.
• The choices of the short-term scheduler are very important.
• If it selects a process with a long burst time, then all the processes after that will have
to wait for a long time in the ready queue.
• This is known as starvation, and it may happen if a wrong decision is made by the short-
term scheduler.
• The short-term scheduler is invoked very frequently (milliseconds), so it must be fast. The long-term scheduler is
invoked very infrequently (seconds, minutes), so it may be slow. The long-term scheduler
controls the degree of multiprogramming.
• Processes can be described as:
• 1.I/O-bound: the process spends more time doing I/O than computation; many short CPU
bursts.
• 2.CPU-bound: the process spends more time doing computation; few, very long CPU bursts.
Comparison of schedulers:
1.Definition: The short-term scheduler selects a ready process from the ready queue and allocates the CPU to it. The long-term scheduler picks jobs from the job pool and loads them into main memory for execution. The medium-term scheduler removes a process from main memory and reloads it later when required.
4.Speed: The short-term scheduler is very fast. The long-term scheduler is slower than the short-term scheduler. The medium-term scheduler is invoked whenever required.
5.Deals with: The short-term scheduler deals with the CPU. The long-term scheduler deals with main memory for loading processes. The medium-term scheduler deals with main memory for removing and reloading processes.
7.Presence: The short-term scheduler is minimal in time-sharing systems. The long-term scheduler is absent or minimal in time-sharing systems. Time-sharing systems use the medium-term scheduler.
Context Switching
• A context switch is the mechanism of storing and restoring the state, or
context, of a CPU in the Process Control Block so that a process execution can
be resumed from the same point at a later time.
• Switching the CPU to another process requires saving the state of the old
process and loading the saved state of the new process. This task is known
as a 'context switch'.
• Using this technique, a context switcher enables multiple processes to share
a single CPU.
• Context switching is an essential feature of a multitasking operating
system.
• When the scheduler switches the CPU from executing one process to
executing another, the state of the currently running process is stored into
its process control block.
• After this, the state of the process to run next is loaded from its own PCB
and used to set the PC, registers, etc. At that point, the second process can
start executing.
• Context-switch times are highly dependent on hardware support. The speed varies
from machine to machine, depending on the memory speed, the number of registers
that must be copied and the existence of special instructions.
• Typically, the speed ranges from 1 to 1000 microseconds.
• Some hardware systems employ two or more sets of processor registers to reduce
the context-switching time. When a process is switched, the following
information is stored for later use (a user-space sketch follows this list):
• 1.Program counter
• 2.Scheduling information
• 3.Base and limit register values
• 4.Currently used registers
• 5.Changed state
• 6.I/O state information
• 7.Accounting information
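As an illustration only (not from the text, and not how a kernel actually performs it), the POSIX ucontext API lets a user program save one execution context and load another, in the same spirit as a context switch:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];       /* stack for the second context     */

static void task(void)
{
    printf("task: running after the context switch\n");
    /* Returning here resumes main_ctx because of uc_link below. */
}

int main(void)
{
    getcontext(&task_ctx);               /* initialise the new context       */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof(task_stack);
    task_ctx.uc_link          = &main_ctx;   /* resume main when task ends   */
    makecontext(&task_ctx, task, 0);

    printf("main: saving state and switching\n");
    swapcontext(&main_ctx, &task_ctx);   /* save main's state, load task's   */
    printf("main: state restored, resumed here\n");
    return 0;
}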
3.3.Interprocess Communication
• 1.The OS provides an IPC facility so that co-operating processes can communicate with each other.
• 2.There are applications that involve executing multiple processes concurrently.
The processes work together to perform application-specific tasks. They are referred to as co-
operating processes.
• 3.Co-operating processes are "loosely" connected in the sense that they have independent
private address spaces and run at different speeds. The relative speeds of the processes are not
normally known.
• 4.From time to time, they interact among themselves by exchanging information. An exchange
of information among processes is called inter-process communication (IPC).
• 5.When multiple processes run on a system concurrently and more than one process requires the
CPU at the same time, it becomes essential to select one process to which the CPU
can be allocated. To serve this purpose, scheduling is required.
Fig. A Typical IPC Facility: senders and receivers exchange information through shared data structures that act as an information placeholder.
3.3.1.Introduction To IPC
1.Processes executing concurrently in the operating system may be either independent processes
or co-operating processes.
2.Interprocess communication is a set of programming interfaces that allow a programmer to
co-ordinate activities among different program processes that can run concurrently in the
operating system.
3.This allows a program to handle many user requests at the same time.
4.Since even a single user request may result in multiple processes running in the OS on the
user's behalf, the processes need to communicate with each other. The IPC interfaces make this
possible.
5.Interprocess communication is a set of techniques for the exchange of data among multiple
processes. IPC techniques are divided into methods for synchronization, message passing, shared
memory, etc.
6.It is a capability supported by some operating systems that allows one process to communicate with another
process; the processes can be running on the same computer or on different computers connected
through a network.
7.IPC enables one application to control another application, and enables several applications to share
the same data without interfering with one another.
8.IPC is required in all multiprocessing systems, but it is not generally supported by single-
process operating systems such as MS-DOS.
Purpose of IPC:
1.Data transfer, sharing data, event notification, resource sharing, synchronization and
process control.
2.Interprocess communication is one of the most common concepts used in OS and distributed
computing systems. It deals with how multiple processes can communicate with each other.
3.Two fundamental models of interprocess communication are explained below:
1.Shared memory model:
Two processes exchange data or information through a shared region; they can read and write data
from and to this region.
2.Message passing model:
In the message passing model, data or information is exchanged in the form of messages (see the pipe sketch below).
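A minimal sketch (not from the text) of message passing between a parent and a child process using a UNIX pipe; the kernel carries each message from the sender to the receiver:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                               /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                      /* child: the sender                   */
        close(fd[0]);
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg) + 1); /* send(receiver, message)             */
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                           /* parent: the receiver                */
    read(fd[0], buf, sizeof(buf));          /* receive(sender, message)            */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}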
Fig. Communication Models: (a) Message passing — processes A and B exchange messages M1, M2 through the kernel. (b) Shared memory — processes A and B read and write a shared memory region; the kernel is involved only in setting it up.
3.3.2.Shared Memory System
• 1.IPC through shared memory requires a region of memory shared among the communicating
processes. Processes can exchange information by reading and writing data in the
shared region.
• 2.Typically, the shared memory region resides in the address space of the process
creating the shared memory segment.
• 3.Other processes that wish to communicate using this shared memory segment must
attach it to their address space.
• 4.Normally, the OS does not allow one process to access the memory region of another
process.
• 5.Shared memory requires that two or more processes agree to remove this
restriction. They can then exchange information by reading and writing data in the
shared areas.
• 6.The processes are also responsible for ensuring that they are not writing to the same
location simultaneously. Consider the producer-consumer problem, in which a producer
process produces information that is consumed by a consumer process.
• 7.To allow producer and consumer processes to run concurrently, we must have
available a buffer of items that can be filled by the producer and emptied by the
consumer.
• 8.This buffer will reside in a region of memory that is shared by the producer and
consumer processes. The producer can produce one item while the consumer is
consuming another item.
• Two types of buffer can be used:
• 1.Unbounded buffer:
• 1.The unbounded buffer places no practical limit on the size of the buffer.
• 2.The consumer may have to wait for new items, but the producer can always produce new items.
• 2.Bounded buffer:
• 1.The bounded buffer assumes a fixed buffer size. In this case the consumer must wait if the buffer is empty and the
producer must wait if the buffer is full (see the sketch below).
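A minimal sketch (not from the text) of the classical bounded buffer kept in a shared region, for one producer and one consumer; the item type, BUFFER_SIZE and the busy-wait loops are illustrative assumptions:

#include <stdio.h>

#define BUFFER_SIZE 10
typedef int item;                     /* illustrative item type            */

/* These variables live in the region shared by producer and consumer. */
static item buffer[BUFFER_SIZE];
static int  in  = 0;                  /* next free slot                    */
static int  out = 0;                  /* next full slot                    */

static void produce(item next_produced)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                             /* buffer full: producer must wait   */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

static item consume(void)
{
    while (in == out)
        ;                             /* buffer empty: consumer must wait  */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next_consumed;
}

int main(void)
{
    produce(42);
    printf("consumed %d\n", consume());
    return 0;
}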
• Shared memory allows maximum speed and convenience of communication. Shared memory is faster than
message passing, as message passing systems are typically implemented using system calls and thus require the
more time-consuming intervention of the kernel.
• In contrast, in shared memory systems, system calls are required only to establish the shared memory regions. Once
shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the
kernel is required (see the POSIX sketch below).
• The figure shows IPC communication through address-space sharing.
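A hedged sketch (not from the text) using the POSIX shared-memory calls: system calls (shm_open, ftruncate, mmap) establish the region once, after which reads and writes are ordinary memory accesses. The name "/demo_shm" is an illustrative assumption:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    const size_t size = 4096;

    /* System calls are needed only to establish the shared region. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, size);
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* From here on, communication is an ordinary memory access. */
    strcpy(region, "written through shared memory");
    printf("%s\n", region);

    munmap(region, size);
    close(fd);
    shm_unlink("/demo_shm");
    return 0;
}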
Fig: IPC through address-space sharing — a shared memory region is mapped into the address spaces of both process P1 and process P2; with message passing, process i executes send(j, m) and process j executes receive(i, m).

Semaphores
1.A counting semaphore, however, allows values between 0 and 255, 65,535 or
4,294,967,295, depending on whether it is implemented using 8, 16 or 32 bits
respectively.
2.The size depends on the kernel used. In practice, 32-bit semaphores are pretty
rare. Along with the semaphore's value, you need to keep track of any tasks that are
waiting for it.
3.A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait and signal.
4.These operations were originally termed P (for wait) and V (for signal).
5.The classical definitions of wait and signal are:
wait(S)
{
    while (S <= 0)
        ;            /* busy wait until S becomes positive */
    S = S - 1;
}

signal(S)
{
    S = S + 1;
}
6.The wait and signal operations on the integer value of the semaphore must be executed
indivisibly. That is, when one process modifies the semaphore value, no other process
can simultaneously modify that same semaphore value.
7.In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its
possible modification (S = S - 1) must also be executed without interruption. Semaphores
are not provided by hardware (see the POSIX sketch below).
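As an illustration only (not from the text), POSIX provides counting semaphores whose sem_wait and sem_post calls correspond to the wait and signal operations defined above:

#include <stdio.h>
#include <semaphore.h>

int main(void)
{
    sem_t s;

    sem_init(&s, 0, 1);   /* initialise S = 1 (0 = not shared between processes)  */

    sem_wait(&s);         /* wait(S): decrements S, blocking while S == 0         */
    printf("inside the section protected by the semaphore\n");
    sem_post(&s);         /* signal(S): increments S, waking a waiting process    */

    sem_destroy(&s);
    return 0;
}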
3.4.Threads
1.A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a register set and a stack.
2.A thread is defined as "a unit of concurrency within a process that has access to the entire code
and data parts of the process". Thus threads of the same process can share their code and data
with one another.
3.A thread shares with other threads belonging to the same process its code section, data section and
other operating system resources, such as open files and signals.
4.A traditional (or heavyweight) process has a single thread of control. If a process has multiple
threads, it can do more than one task at a time. Many software packages that run on desktop
PCs are multithreaded.
5.A word processor may have one thread for displaying graphics, another thread for reading key
strokes from the user and a third thread for performing spelling and grammar checking in the
background. Threads also play a vital role in Remote Procedure Call (RPC) systems.
6.RPC, as the name implies, is a communication mechanism that allows a process to call
a procedure on a remote system connected via a network; the calling process (client) can call the
procedure on the remote host (server) in the same way as it would call a local procedure.
• Multithreaded Programming:
• 1.A thread is the fundamental unit of CPU utilization. A thread is a unit of program
execution that uses the resources of a process.
• 2.A traditional or heavyweight process comprises a
single thread of control, i.e. it can execute one task at a time, and thus is referred
to as a single-threaded process.
• 3.If a process has multiple threads of control, it can perform more than one task
at a time; such a process is known as a multithreaded process (see the sketch below).
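A minimal sketch (not from the text) of a multithreaded process using POSIX threads; both threads have direct access to the data in the process's address space:

#include <stdio.h>
#include <pthread.h>

static const char *shared_message;   /* data shared by all threads of the process */

static void *worker(void *arg)
{
    const char *name = arg;
    /* Each thread directly reads the process's shared data. */
    printf("%s reads: %s\n", name, shared_message);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    shared_message = "data in the process's address space";

    pthread_create(&t1, NULL, worker, (void *)"thread 1");  /* two threads of control */
    pthread_create(&t2, NULL, worker, (void *)"thread 2");

    pthread_join(t1, NULL);                                  /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}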
Process vs. Thread:
3.Data: A process has its own copy of the data segment of the parent process. A thread has direct access to the data segment of its process.
4.Communication: Processes must use interprocess communication to communicate with sibling processes. Threads can directly communicate with other threads of their process.
5.Overheads: Processes have considerable overhead. Threads have almost no overhead.
6.Creation: New processes require duplication of the parent process. New threads are easily created.
7.Control: Processes can only exercise control over child processes. Threads can exercise considerable control over threads of the same process.
8.Changes: Any change in the parent process does not affect child processes. Any change in the main thread may affect the behaviour of the other threads of the process.
9.Memory: Processes run in separate memory spaces. Threads run in a shared memory space.
10.File descriptors: File descriptors are not shared between processes. Threads share the file descriptors of their process.
11.File system: There is no sharing of the file-system context between processes. Threads share the file-system context.
• Fig: Kernel-Level Threads — threads belong to a process in user space but are managed and scheduled in kernel space; the kernel dispatches them onto the CPU.
• 1.The threads implemented at the kernel level are known as kernel threads; they are supported
directly by the operating system.
• 2.The kernel performs thread creation, scheduling and management in kernel space. Because
thread management is done by the operating system, kernel threads are generally slower
to create and manage than user threads.
• 3.However, since the kernel is managing the threads, if a thread performs a blocking system
call, the kernel can schedule another thread in the application for execution. Also, in a
multiprocessor environment, the kernel can schedule threads on different processors.
• 4.Most contemporary operating systems, including Windows NT, Windows 2000, BeOS and
Tru64 UNIX, support kernel threads.
• Advantages of Kernel-Level Threads:
• 1.The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• 2.If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• 3.Kernel routines themselves can be multithreaded.
• Disadvantages of Kernel-Level Threads:
• 1.Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times
slower than those of user-level threads.
• 2.Transfer of control from one thread to another within the same process requires a mode switch to the
kernel.
User-Level vs. Kernel-Level Threads:
1.User-level threads are faster to create and manage. Kernel-level threads are slower to create and manage.
3.User-level threads can run on any operating system. Kernel-level threads are specific to the operating system.
3.4.3.1.One-to-One Model
Fig: One-to-One Model — each user thread is mapped to its own kernel thread (K).
• 1.The one-to-one model maps each user thread to a kernel thread.
• 2.It provides more concurrency than the many-to-one model by allowing another thread to run when a
thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors.
• 3.Windows NT, Windows 2000 and OS/2 implement the one-to-one model.
• Advantages
1.Multiple threads can run in parallel.
2.Less complication in processing.
• Disadvantages
1.Thread creation involves lightweight-process creation.
2.A kernel thread is created for every user thread.
3.The total number of threads is limited.
4.Each kernel thread is an overhead.
5.It reduces the performance of the system.
3.4.3.2.Many-to-one Model
1.The many-to-one model maps many user threads to one kernel thread. Thread management is
done in user space, so it is efficient, but the entire process will block if a thread makes a blocking
system call.
2.Only one thread can access the kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.
3.Green threads, a thread library available for Solaris 2, uses this model.
Fig: Many-to-One Model — many user threads are mapped onto a single kernel thread (K).
Advantages
1.Totally portable.
2.Easy to implement, with few system dependencies.
3.Mainly used in language systems and portable libraries.
4.Efficient in terms of performance.
Disadvantages
1.Cannot take advantage of parallelism.
2.One blocking call blocks all user threads.
3.4.3.3.Many-to-Many Model
Fig: Many-to-Many Model — many user threads are multiplexed onto a smaller or equal number of kernel threads (K).
1.The many-to-many model multiplexes many user-level threads onto a smaller or equal number of
kernel threads.
2.The number of kernel threads may be specific to either a particular application or a particular
machine.
3.Whereas the many-to-one model allows the developer to create as many threads as desired, true concurrency is not gained there because the
kernel can schedule only one thread at a time.
4.The one-to-one model allows for greater concurrency, but the developer has to be careful not to
create too many threads within an application.
5.Solaris 2, IRIX, HP-UX and Tru64 UNIX support this model.
• Advantages
• 1.As many threads as the user requires can be created.
• 2.A number of kernel threads less than or equal to the number of user threads can be created.
• Disadvantages
• 1.True concurrency cannot be achieved.
• 2.Multiple kernel threads are an overhead for the operating system.
• 3.Performance is lower.