
UNIT 2 PROCESS MANAGEMENT

Structure
2.0 Introduction
2.1 Objectives
2.2 Process Concept
2.3 Processor Scheduling
2.3.1 Types of Schedulers
2.3.2 Scheduling and Performance Criteria
2.3.3 Scheduling Algorithms
2.4 Interprocess Communication and Synchronization
2.4.1 Basic Concepts of Concurrency
2.4.2 Basic Concepts of Interprocess Communication and Synchronization
2.4.3 Mutual Exclusion
2.4.4 Semaphores
2.4.5 Hardware Support for Mutual Exclusion
2.4.6 Mechanism for Structured form of Interprocess Communication and Synchronization
2.5 Deadlocks
2.5.1 System Model
2.5.2 Deadlock Characterisation and Modelling
2.6 Summary
2.7 Model Answers
2.8 Further Readings

2.0 INTRODUCTION
As mentioned in the previous unit, an operating system is a collection of programs that manage the system's resources: processor, memory, I/O devices and file systems. All these resources are very valuable and it is the job of the operating system to see that they are used in an efficient and cooperative manner.
The operating system must keep track of the status of each resource, allocate resources to different processes based on a certain policy, decide how long these processes will be utilising these resources, and finally deallocate them.
In this unit we will have a detailed discussion of the processor management issues of the operating system only. The other resource management features of operating systems will be discussed in the subsequent units.
Processor management is concerned with the management of the physical processors (CPUs), i.e. the allocation of processes (tasks or jobs) to a processor. The notion of a process is central to the understanding of an operating system's functioning. Everything is centered around this concept and it is very important to understand and appreciate it right from the beginning.
A process is basically a program while it is being executed: a running program with some specific task to do. For example, in the UNIX operating system, the shell or command interpreter is also a process, performing the task of listening to whatever is typed on a terminal. Every time we run a command, the shell runs that command as a separate process.
When a process is created, it requires several resources such as CPU time, memory, files, stack and registers to run the program.
We will come back to this concept in section 2.2, but to start with, let us take the example of a time sharing system to get a feel for this concept.
In a time shared system, several users' programs reside in the computer's memory. Each user is allocated only a fraction of CPU time for his/her program. When the CPU executes a program or command, a process is created. When one process stops running because it has taken its share of CPU time, another process starts.
When a process is temporarily suspended, information about it is stored in a memory location so that its execution can restart from the same point where it was suspended. In many operating systems, all the information about each process is stored in a process table. The operating system also provides several system calls to manage processes. These system calls are mainly to create and kill processes. In the UNIX operating system, for example, when a user types a command to compile a C-language program, the shell creates a new process to run the compiler. When the compilation is over, the compiler process executes a system call to terminate itself. In this unit we will introduce several new concepts, such as the concepts of process, process hierarchy, process status, process scheduling, interprocess communication and synchronization, deadlock etc. This unit is organised as follows:
Section 2.2 covers the process concept in more detail, including process hierarchy, process states and the implementation of processes.
Section 2.3 introduces the basic scheduling concepts and presents several different scheduling mechanisms.
All processes executing in a multiprogrammed operating system environment compete with other processes for system resources such as memory and CPU. Therefore, they must be synchronized with each other while accessing shared resources. Section 2.4 surveys interprocess communication and synchronization issues.
Section 2.5 presents the problem of deadlock, which occurs frequently in a multiprogrammed environment as a result of the uncontrolled granting of system resources to requesting processes.
Section 2.6 gives a summary of this unit. It also contains problems, their solutions and further readings.

2.1 OBJECTIVES
After going through this unit you will be able to:
define the concepts of process, process hierarchy, scheduling, interprocess communication, synchronization and deadlock;
classify different types of schedulers;
analyse several types of scheduling algorithms;
interpret different types of interprocess communication and synchronization mechanisms; and
analyse deadlock detection and prevention mechanisms.

2.2 PROCESS CONCEPT


This section examines the process concept more closely. It describes process hierarchies, the process state diagram and the implementation of a process. From the previous discussion, one should understand the difference between a program and a process, although it is marginal. A program is a passive entity whereas a process is an active entity. The key idea about a process is that it is an activity of some kind and consists of a pattern of bytes (that the CPU interprets as machine instructions, data, registers and stack). A single processor may be shared among several processes, with some scheduling policy being used to allocate the processor to one process and deallocate it from another.
Process hierarchies: The operating system needs some way to create and kill processes. When a process is created, it may create other process(es), which in turn create some more process(es) and so on, thus forming a process hierarchy or process tree.
In the UNIX operating system, a process is created by the fork system call and terminated by exit. As a simple example of how process trees are created, let us look at how the UNIX operating system initialises itself when it is started. When UNIX gets started, a special process called init starts running. It then forks off (creates) one process per terminal. These processes wait for users to login. If a login is correctly fed in, the login process executes a shell process to accept commands. These commands may further create new processes and so on. Thus all the processes in the whole system belong to a single tree with init at the root.
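As an illustration, here is a minimal C sketch of process creation with fork and termination with exit on a POSIX system. The printed messages and error handling are our own illustration, not part of the original text.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {           /* child branch: could exec a shell, a compiler, ... */
        printf("child  pid=%d parent=%d\n", getpid(), getppid());
        exit(EXIT_SUCCESS);          /* child terminates via exit */
    } else {                         /* parent branch */
        waitpid(pid, NULL, 0);       /* wait for the child to terminate */
        printf("parent pid=%d reaped child %d\n", getpid(), (int)pid);
    }
    return 0;
}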
Process States: The lifetime of a process can be divided into several stages, called states, each with certain characteristics that describe the process. It means that as a process starts executing, it goes from one state to another. Each process may be in one of the following states:
New - The process has been created.
Ready - The process is waiting to be allocated to a processor. A process comes to this state immediately after it is created.
All ready processes (kept in a queue) keep waiting for CPU time to be allocated by the operating system in order to run. A program called the scheduler, which is a part of the operating system, picks up one ready process for execution by passing control to it.
Running - Instructions are being executed. When a process gets control of the CPU plus other resources, it starts executing. The running process may require some I/O during execution. Depending on the particular scheduling policy of the operating system, control may pass back to that process after the I/O completes, or the operating system may schedule another process if one is ready to run.
Suspended - A suspended process lacks some resource other than the CPU. Such processes are normally not considered for execution until the related suspending condition is fulfilled.

The running process becomes suspended by invoking I/O routines whose result it needs in order to proceed.

Terminated - The process has finally stopped. A process terminates when it finishes executing its last statement. At that point, the process may return some data to its parent process. There are also additional circumstances in which termination occurs: a process can cause the termination of another process via an appropriate system call.
A general form of the process state diagram is illustrated in Figure 1.

Figure 1: Process State Diagram

A parent may terminate the execution of one of its children for a variety of reasons, such as:
(i) The task assigned to the child is no longer required.
(ii) The child has exceeded its usage of some of the resources it has been allocated.

Process Implementation
The operating system groups all the information that it needs about a particular process into a data structure called a Process Control Block (PCB). It simply serves as the storage for any information about a process. When a process is created, the operating system creates a corresponding PCB, and when the process terminates, its PCB is released to the pool of free memory locations from which new PCBs are drawn. A process is eligible to compete for system resources only when it has an active PCB associated with it. A PCB is implemented as a record containing many pieces of information associated with a specific process, including:
Process number: Each process is identified by its process number, called the process ID.
Priority.
Process state: Each process may be in any of these states: new, ready, running, suspended and terminated.
Program counter: It indicates the address of the next instruction to be executed for this process.
Registers: They include the accumulator, general purpose registers, index registers etc. Whenever a processor switches over from one process to another, the register contents of the old process are saved along with its program counter, so that the process can be allowed to continue correctly afterwards. This is shown in Figure 2.

Figure 2: Passing of CPU control from one process to another process

Accounting information: It includes the actual CPU time used in executing a process.
I/O status information: It includes outstanding I/O requests, the list of open files, and information about the allocation of peripheral devices to processes.
Processor scheduling details: It includes the priority of a process, addresses of scheduling queues, and any other scheduling information.
Check Your Progress 1
1. Give several definitions of a process. Is there any universally accepted definition?

2. List the advantage(s), if any, of using a multiprocess operating system as opposed to a single-process one for a single-user computer system. Explain also the tradeoffs involved.
2.3 PROCESSOR SCHEDULING
Scheduling is a fundamental operating system function. All computer resources are scheduled before use. Since the CPU is one of the primary computer resources, its scheduling is central to operating system design.
Scheduling refers to a set of policies and mechanisms supported by the operating system that control the order in which the work to be done is completed. A scheduler is an operating system program (module) that selects the next job to be admitted for execution. The main objective of scheduling is to increase CPU utilisation and throughput. Throughput is the amount of work accomplished in a given time interval. CPU scheduling is the basis of operating systems which support multiprogramming concepts. By having a number of programs in computer memory at the same time, the CPU may be shared among them. This mechanism improves the overall efficiency of the computer system by getting more work done in less time. The advantages of multiprogramming were explained in the previous unit.
In this section we describe the roles of the three types of schedulers encountered in operating systems. After this discussion, we will touch upon the various performance criteria that schedulers may use in maximizing system performance, and finally we will present several scheduling algorithms.

2.3.1 Types of Schedulers


In this subsection we describe three types of schedulers: the long term, medium term and short term schedulers, in terms of their objectives, operating environment and relationship to the other schedulers in a complex operating system environment.
Long term scheduler: Sometimes it is also called the job scheduler. It determines which jobs shall be admitted for immediate processing.

Figure 3: Long term and short term schedulers

There are always more processes than can be executed by the CPU in an operating system. These processes are kept on large storage devices like disks for later processing. The long term scheduler selects processes from this pool and loads them into memory. In memory these processes belong to a ready queue. (Queue is a type of data structure which has been discussed in course 4.) Figure 3 shows the positioning of all three types of schedulers. The short term scheduler (also called the CPU scheduler) selects from among the processes in memory which are ready to execute, and assigns the CPU to one of them. The long term scheduler executes much less frequently. If the average rate at which processes arrive in memory is equal to the rate at which they depart from the system, then the long term scheduler may need to be invoked only when a process departs from the system. Because of the longer intervals between its invocations, the long term scheduler can afford to take more time to decide which process should be selected for execution. It is also very important that the long term scheduler makes a careful selection of processes, i.e. the selected processes should be a combination of CPU-bound and I/O-bound types. Generally, most processes can be put into one of two categories: CPU bound or I/O bound. If all processes are I/O bound, the ready queue will almost always be empty and the short term scheduler will have little to do. If all processes are CPU bound, no process will be waiting for an I/O operation and again the system will be unbalanced. Therefore, the long term scheduler provides good performance by selecting a combination of CPU-bound and I/O-bound processes.
Medium term scheduler: Most processes require some I/O operation. In that case, a process may become suspended for an I/O operation after running a while. It is beneficial to remove these suspended processes from main memory to the hard disk to make room for other processes. At some later time such a process can be reloaded into memory and continued from where it was left off earlier. Saving a suspended process in this way is called swapping out or rolling out. The process is swapped in and swapped out by the medium term scheduler. Figure 4 shows the positioning of the medium term scheduler.

Figure 4: Medium term scheduler

The medium term scheduler has nothing to do with a process as long as it remains suspended. But the moment the suspending condition is fulfilled, the medium term scheduler gets activated to allocate the memory, swap in the process and make it ready for competing for CPU resources. In order to work properly, the medium term scheduler must be provided with information about the memory requirements of swapped-out processes, which is usually recorded at the time of swapping and stored in the related process control block. In terms of the process state transition diagram (Figure 1), the medium term scheduler controls the suspended-to-ready transition of swapped processes.
The short term scheduler: It allocates processes belonging to the ready queue to the CPU for immediate processing. Its main objective is to maximize CPU utilisation. Compared to the other two schedulers, it executes much more frequently. It must select a new process for execution quite often, because a CPU executes a process only for a few milliseconds before it goes for an I/O operation. Often the short term scheduler executes at least once every 10 milliseconds. If it takes 1 millisecond to decide to execute a process for 10 milliseconds, then 1/(10+1), about 9%, of the CPU is being wasted simply on scheduling work. Therefore, it must be very fast.
In terms of the process state transition diagram, it is in charge of the ready-to-running state transition.

Check Your Progress 2


1. Describe the objectives of long term schedulers.

...............................................................................................................
2. Explain the functioning of multiple-level-queue scheduling.
.......................................................................................................
3. Which of the following are pre-emptive or non-preemptive scheduling algorithms?
a. First-Come-First-Served
b. Shortest-Job-First
c. Round Robin

2.3.2 Scheduling and Performance Criteria


In this section we will discuss some performance criteria that are frequently used by schedulers to maximize system performance. These are:
1. CPU Utilisation
2. Throughput
3. Turnaround Time
4. Waiting Time
5. Response Time
CPU Utilisation: The key idea is that if the CPU is busy all the time, the utilisation factor of all the components of the system will also be high.
Throughput: It refers to the amount of work completed in a unit of time. One way to measure throughput is by the number of processes that are completed in a unit of time. The higher the number of processes, the more work is apparently being done by the system. But this approach is not very useful for comparison, because it is dependent on the characteristics and resource requirements of the processes being executed. Therefore, to compare the throughput of several scheduling algorithms, they should be fed processes with similar requirements.
Turnaround Time: It may be defined as the interval from the time of submission of a process to the time of its completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, CPU time and I/O operations.
Waiting Time: In a multiprogramming operating system several jobs reside in memory at a time; the CPU executes only one job at a time, and the rest of the jobs wait for the CPU. The waiting time may be expressed as turnaround time less the actual processing time, i.e. waiting time = turnaround time - processing time. But the scheduling algorithm affects only the amount of time that a process spends waiting in the ready queue. Thus, rather than looking at turnaround time, schedulers usually consider the waiting time of each process.
Response time: It is most frequently considered in time sharing and real time operating systems. However, its characteristics differ in the two systems. In a time sharing system it may be defined as the interval from the time the last character of a command line of a program or transaction is entered to the time the last result appears on the terminal. In a real time system it may be defined as the interval from the time an internal or external event is signaled to the time the first instruction of the respective service routine is executed.
One of the problems in designing schedulers and selecting a set of performance criteria is that the criteria often conflict with each other. For example, the fastest response time in time sharing and real time systems may result in low CPU utilisation. Throughput and CPU utilisation may be increased by executing a large number of processes, but then response time may suffer. Therefore, the design of a scheduler usually requires a balance of all the different requirements and constraints.

2.3.3 Scheduling Algorithms


CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are several scheduling algorithms, which we will examine in this section.
A major division among scheduling algorithms is whether they support a pre-emptive or a non-preemptive scheduling discipline. A scheduling discipline is non-preemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. A scheduling discipline is pre-emptive if the CPU can be taken away.
Preemptive scheduling is more useful for high priority processes which require immediate response; for example, in a real time system the consequence of missing one interrupt could be dangerous.
In non-preemptive systems, short jobs are made to wait by longer jobs, but the treatment of all processes is fairer. The decision whether to schedule preemptively or not depends on the environment and the type of application most likely to be supported by a given operating system.

First-Come-First-Served (FCFS) Scheduling


This is one of the simplest scheduling algorithms: it schedules processes (jobs) in the order of their arrival (Figure 5). Its implementation is also straightforward, being maintained by a FIFO (First-In-First-Out) queue. Once a process has the CPU, it runs to completion.

Figure 5: First-in-First-out Scheduling

FCFS scheduling is non-preemptive, which usually results in poor performance. As a consequence of non-preemption, there is a low rate of component utilisation and system throughput. Short jobs may suffer considerable turnaround delays and waiting times when the CPU has been allocated to longer jobs. Consider the following example of two processes:

Process   Execution time
P1        20
P2        4

If both processes P1 and P2 arrive in the order P1-P2 in quick succession, the turnaround times are 20 and 24 units of time respectively (P2 must wait for P1 to complete), thus giving an average of (20+24)/2 = 22 units of time. The corresponding waiting times are 0 and 20 units of time, with an average of 10 time units. However, when the same processes arrive in the order P2-P1, the turnaround times are 4 and 24 units of time respectively, giving an average of (4+24)/2 = 14 units of time, and the average waiting time is (0+4)/2 = 2. This is a substantial reduction. This simple example shows how short jobs may be delayed in the FCFS scheduling algorithm.
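The arithmetic in this example can be checked with a short C sketch. The fcfs helper is our own illustration; the bursts are the ones from the example, all arriving at time 0.

#include <stdio.h>

/* Compute average waiting and turnaround time for FCFS,
 * given bursts in arrival order (all jobs arrive at time 0). */
static void fcfs(const int burst[], int n)
{
    int clock = 0, wait_sum = 0, turn_sum = 0;
    for (int i = 0; i < n; i++) {
        wait_sum += clock;           /* job i waits for everything before it */
        clock    += burst[i];
        turn_sum += clock;           /* completion time equals turnaround here */
    }
    printf("avg wait = %.1f, avg turnaround = %.1f\n",
           (double)wait_sum / n, (double)turn_sum / n);
}

int main(void)
{
    int p1_first[] = {20, 4};        /* order P1-P2: avg wait 10, turnaround 22 */
    int p2_first[] = {4, 20};        /* order P2-P1: avg wait 2,  turnaround 14 */
    fcfs(p1_first, 2);
    fcfs(p2_first, 2);
    return 0;
}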

Shortest-Job-First (SJF) Scheduling
A different approach to CPU scheduling is shortest-job-first, where the scheduling of a job or a process is done on the basis of its having the shortest execution time. If two processes have the same CPU time, FCFS is used. As an example, consider the following set of processes and their CPU times:
Process CPU time

Using shortest-job-first scheduling, these processes would be scheduled in the P4-P1-P3-P2 order, and the average waiting time is 27/4 = 6.75 units of time. If we were using FCFS scheduling, the average waiting time would be higher.

Shortest job first (SJF) may be implemented in either non-preemptive or preemptive varieties. In either case, whenever the SJF scheduler is invoked, it searches the ready queue to find the job or process with the shortest execution time. The difference between the two (preemptive and non-preemptive) SJF schedulers lies in the conditions that lead to invocation of the scheduler, and consequently the frequency of its execution.
SJF scheduling is an optimal scheduling algorithm in terms of minimizing the average waiting time of a given set of processes. It would always schedule the two processes P1 and P2 discussed in the FCFS scheduling section, whenever both are available in the ready queue, in the shorter-longer order, and thus achieve the lower set of waiting times.
The SJF scheduling algorithm works optimally only when the exact future execution times of jobs or processes are known at the time of scheduling. In the case of short-term scheduling and preemption, even more detailed knowledge of each individual CPU burst is required. In other words, the optimal performance of SJF scheduling is dependent upon future knowledge of the process/job behaviour. This stands in the way of effective implementation of SJF scheduling in practice, because it is difficult to estimate future process behaviour reliably, except for very specialised deterministic cases.
This problem in SJF scheduling can be tackled by keeping the ready list of processes sorted according to the increasing values of their remaining execution times. This approach can also improve the scheduler's performance by removing the search for the shortest process. However, insertions into a sorted list are generally more complex if the list is to remain sorted after inserting.
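A minimal non-preemptive SJF pass can be sketched in C by keeping the ready list sorted by execution time, as suggested above. The burst values (P1 = 5, P2 = 10, P3 = 8, P4 = 3) are our own assumptions, chosen to reproduce the P4-P1-P3-P2 order and the 6.75 average quoted in the text, since the original burst table was not preserved.

#include <stdio.h>
#include <stdlib.h>

struct job { int id; int burst; };

/* Comparator for qsort: shortest burst first, FCFS (id order) on ties. */
static int by_burst(const void *a, const void *b)
{
    const struct job *x = a, *y = b;
    return x->burst != y->burst ? x->burst - y->burst : x->id - y->id;
}

int main(void)
{
    /* Assumed bursts; they reproduce the 6.75 average quoted above. */
    struct job ready[] = { {1, 5}, {2, 10}, {3, 8}, {4, 3} };
    int n = 4, clock = 0, wait_sum = 0;

    qsort(ready, n, sizeof ready[0], by_burst);    /* sorted ready list */
    for (int i = 0; i < n; i++) {                  /* dispatch head of list */
        printf("t=%2d: run P%d (burst %d)\n", clock, ready[i].id, ready[i].burst);
        wait_sum += clock;                         /* time this job spent waiting */
        clock += ready[i].burst;
    }
    printf("average waiting time = %.2f\n", (double)wait_sum / n);  /* 6.75 */
    return 0;
}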
Round Robin Scheduling: This is one of the oldest, simplest and most widely used algorithms. The round robin scheduling algorithm is primarily used in time-sharing and multi-user systems, where the primary requirement is to provide reasonably good response times and, in general, to share the system fairly among all system users. Basically the CPU time is divided into time slices. Each process is allocated a small time-slice (from 10 to 100 milliseconds) while it is running. No process can run for more than one time slice when there are others waiting in the ready queue. If a process needs more CPU time to complete after exhausting one time slice, it goes to the end of the ready queue to await the next allocation (Figure 6). Otherwise, if the running process releases control to the operating system voluntarily due to an I/O request or termination, another process is scheduled to run.

Figure 6: Round Robin Scheduling

Round robin scheduling utilises the system resources in an equitable manner. A small process may be executed in a single time-slice, giving good response time, whereas long processes may require several time slices and thus be forced to pass through the ready queue a few times before completion. For example, suppose there are 3 processes P1, P2 and P3 which require the following CPU times:
Process Burst time/Execution time

If we use a time-slice of 5 units of time, then P1 gets the first 5 units of time. Since it requires another 20 units of time, it is pre-empted after the first time slice and the CPU is given to the next process, i.e. P2. Since P2 needs just 5 units of time, it terminates as its time-slice expires. The CPU is then given to the next process, P3. Once each process has received one time slice, the CPU is returned to P1 for an additional time-slice, and so on until all processes complete.
Implementation of round robin scheduling requires the support of a dedicated timer. The timer is usually set to interrupt the operating system whenever a time slice expires and thus force the scheduler to be invoked. Processing the interrupt to switch the CPU to another process requires saving all the registers for the old process and then loading the registers for the new process. This task is known as context switching. The scheduler itself simply stores the context of the running process, moves it to the end of the ready queue, and despatches the process at the head of the ready queue. Whenever the running process surrenders control to the operating system before the expiration of its time-slice, the scheduler is again invoked to despatch a new process to the CPU.
Round-robin scheduling is often regarded as a fair scheduling discipline. It is also one of the best known scheduling disciplines for achieving good and relatively evenly distributed response times.
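The time-slice mechanics can be sketched as a tiny simulation in C. P1 = 25 and P2 = 5 follow from the example above; P3 = 10 is an assumed value, since the original burst table was not preserved.

#include <stdio.h>

#define QUANTUM 5

int main(void)
{
    /* P1 = 25 and P2 = 5 come from the example; P3 = 10 is assumed. */
    int burst[] = {25, 5, 10};
    int n = 3, remaining = n, clock = 0;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {             /* walk the ready queue in order */
            if (burst[i] == 0) continue;          /* already terminated */
            int run = burst[i] < QUANTUM ? burst[i] : QUANTUM;
            printf("t=%2d: run P%d for %d\n", clock, i + 1, run);
            clock += run;
            burst[i] -= run;                      /* pre-empt after one time slice */
            if (burst[i] == 0) remaining--;       /* process terminates */
        }
    }
    return 0;
}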
Priority based Scheduling: A priority is associated with each process, and the scheduler always picks up the highest priority process for execution from the ready queue. Equal priority processes are scheduled FCFS. The level of priority may be determined on the basis of resource requirements, process characteristics and run time behaviour.
A major problem with priority based scheduling is indefinite blocking of a low priority process by high priority processes. In general, completion of a process within a finite time cannot be guaranteed with this scheduling algorithm.
A solution to the problem of indefinite blockage of low priority processes is provided by aging. Aging is a technique of gradually increasing the priority of (low priority) processes that wait in the system for a long time. Eventually, the older processes attain high priority and are ensured of completion in a finite time.
In Multiple-Level-Queue (MLQ) scheduling, processes are classified into different groups. For example, interactive processes (foreground) and batch processes (background) could be considered as two types of processes because of their different response time requirements, scheduling needs and priorities.
A multiple-level-queue scheduling algorithm partitions the ready queue into separate queues. Processes are permanently assigned to one queue, usually based upon properties such as memory size or process type. Each queue has its own scheduling algorithm. The interactive queue might be scheduled by a round-robin algorithm while the batch queue follows FCFS.
As an example of multiple queue scheduling, one simple approach partitions the ready queue into system processes, interactive processes and batch processes, which creates three ready queues (Figure 7).

Figure 7: Multiple queue Scheduling

Each queue is serviced by some scheduling discipline best suited to the type of processes stored in the queue. There are different possibilities to manage the queues. One possibility is to assign a time slice to each queue, which it can schedule among the different processes in its queue; foreground processes can be assigned 80% of the CPU time whereas background processes are given 20% of the CPU time.
The second possibility is to execute the high priority queue first. No process in the batch queue, for example, could run unless the queues for system processes and interactive processes were all empty. If an interactive process entered the ready queue while a batch process was running, the batch process would be pre-empted.

2.4 INTERPROCESS COMMUNICATION AND SYNCHRONIZATION
2.4.1 Basic Concepts of Concurrency
Concurrent Processes: We discussed the concept of a process earlier in this unit. The operating system consists of a collection of such processes, which are basically of two types: operating system processes, those that execute system code, and user processes, those that execute users' code. All of these processes can potentially execute in a concurrent manner. Concurrency refers to the parallel execution of programs. A concurrent program specifies two or more sequential programs (a sequential program specifies sequential execution of a list of statements) that may be executed concurrently as parallel processes. For example, an airline reservation system that involves processing transactions from many terminals has a natural specification as a concurrent program in which each terminal is controlled by its own sequential process. Even when processes are not executed simultaneously, it is often easier to structure a system as a collection of cooperating sequential processes rather than as a single sequential program. A simple batch operating system can be viewed as three processes: a reader process, an executor process and a printer process. The reader reads cards from the card reader and places card images in an input buffer. The executor process reads card images from the input buffer, performs the specified computation and stores the result in an output buffer. The printer process retrieves the data from the output buffer and writes it to a printer. Concurrent processing is the basis of operating systems which support multiprogramming. An operating system can support concurrent execution of programs without necessarily supporting elaborate forms of memory and file management; this form of operation is also known as multitasking. Multiprogramming is a more general concept in operating systems that supports memory management and file management features, in addition to supporting concurrent execution of programs.

2.4.2 Basic Concepts of Interprocess Communication and Synchronization
In order to cooperate, concurrently executing processes must communicate and synchronize. Interprocess communication is based on the use of shared variables (variables that can be referenced by more than one process) or on message passing.
Synchronization is often necessary when processes communicate. Processes are executed with unpredictable speeds, yet to communicate, one process must perform some action, such as setting the value of a variable or sending a message, that the other detects. This only works if the event of performing the action and the event of detecting it are constrained to happen in that order. Thus one can view synchronization as a set of constraints on the ordering of events. The programmer employs a synchronization mechanism to delay execution of a process in order to satisfy such constraints.
To make this concept more clear, consider the batch operating system again. A shared buffer is used for communication between the reader process and the executor process. These processes must be synchronized so that, for example, the executor process never attempts to read data from the input buffer if the buffer is empty. The next sections are mainly concerned with these two issues.

2.4.3 Mutual Exclusion


Processes frequently need to communicate with other processes. When a user wants to read from a file, it must tell the file process what it wants; then the file process has to inform the disk process to read the required block.
Processes that are working together often share some common storage that each can read and write. The shared storage may be in main memory or it may be a shared file. Each process has a segment of code, called a critical section, which accesses the shared memory or files. The key issue involving shared memory or shared files is to find a way to prohibit more than one process from reading and writing the shared data at the same time. What we need is mutual exclusion: some way of making sure that if one process is executing in its critical section, the other processes will be excluded from doing the same thing. Now we present an algorithm to support mutual exclusion. It is applicable to two processes only.

Module Mutex

var
  P1busy, P2busy : boolean;

Process P1;
begin
  while true do
  begin
    P1busy := true;
    while P2busy do {keep testing};
    critical-section;
    P1busy := false;
    other-P1-Processing
  end {while}
end; {P1}

Process P2;
begin
  while true do
  begin
    P2busy := true;
    while P1busy do {keep testing};
    critical-section;
    P2busy := false;
    other-P2-Processing
  end {while}
end; {P2}

{Parent process}
begin {mutex}
  P1busy := false;
  P2busy := false;
  Initiate P1, P2
end {mutex}

Program 1: Mutual Exclusion algorithm

P1 first sets P1busy and then tests P2busy to determine what to do next. When it finds P2busy to be false, process P1 may safely proceed to the critical section, knowing that no matter how the two processes may be interleaved, process P2 is certain to find P1busy set and to stay away from the critical section. This ensures mutual exclusion. But consider a case where P1 wishes to enter the critical section and sets P1busy to indicate the fact, while process P2 wishes to enter the critical section at the same time and pre-empts process P1 just before P1 tests P2busy. Process P2 may set P2busy and start looping while waiting for P1busy to become false. When control is eventually returned to process P1, it finds P2busy set and starts looping while it waits for P2busy to become false. And so both processes loop forever, each awaiting the other one to clear the way. In order to remove this kind of behaviour, we must add another requirement to our algorithm: when more than one process wishes to enter the critical section, the decision to grant entrance to one of them must be made in finite time.
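To make the flaw concrete, here is a C11/pthreads rendering of Program 1's protocol. This translation is our own illustration, not part of the original text: if both threads set their flags before testing the other's, both spin forever, exactly as described above.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_bool p1_busy = false, p2_busy = false;

/* Thread 1 follows Program 1's flawed protocol. If both threads set
 * their flag before testing the other's, both spin here forever. */
void *p1(void *arg)
{
    atomic_store(&p1_busy, true);        /* announce intent to enter */
    while (atomic_load(&p2_busy))        /* keep testing */
        ;
    puts("P1 in critical section");      /* critical section */
    atomic_store(&p1_busy, false);
    return arg;
}

void *p2(void *arg)
{
    atomic_store(&p2_busy, true);
    while (atomic_load(&p1_busy))
        ;
    puts("P2 in critical section");
    atomic_store(&p2_busy, false);
    return arg;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);   /* may never return if both flags were set first */
    return 0;
}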

2.4.4 Semaphores
In the previous section, we discussed the mutual exclusion problem. The solution we presented did not solve all the problems of mutual exclusion. The Dutch mathematician Dekker is believed to be the first to solve the mutual exclusion problem, but his original algorithm works for two processes only and cannot be extended beyond that number. To overcome this problem, a synchronization tool called the semaphore was proposed by Dijkstra; it gained wide acceptance and is implemented in several commercial operating systems through system calls or as built-in functions. A semaphore is a variable which accepts non-negative integer values and, except for initialisation, may be accessed and manipulated only through two primitive operations, wait and signal (originally defined as P and V respectively). These names come from the Dutch words proberen (to test) and verhogen (to increment). The two primitives take as their only argument the semaphore variable, and may be defined as follows:

a) wait(S): while S <= 0 do {keep testing}; S := S - 1;
The wait operation decrements the value of the semaphore variable as soon as the result would remain non-negative.

b) signal(S): S := S + 1
The signal operation increments the value of the semaphore variable.
Modifications to the integer value of the semaphore in the wait and signal operations are executed indivisibly. That is, when one process modifies the semaphore, no other process can simultaneously modify the same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its possible modification (S := S - 1) must also be executed without any interruption.
Program 2 demonstrates the functioning of semaphores. In this program, there are three processes trying to share a common resource which is protected by a binary semaphore (bsem), a binary semaphore being a variable which takes only the values 0 and 1, by enforcing its use in a mutually exclusive fashion. Each process ensures the integrity of its critical section by opening it with a WAIT operation and closing it with a SIGNAL operation on the related semaphore, bsem in our example. This way any number of concurrent processes might share the resource, provided each of these processes uses the wait and signal operations.

We also present a table (Figure 8) showing the run time behaviour of the three processes and the functioning of the semaphore. Each column of the table shows the activity of a particular process and the value of the semaphore after certain actions have been taken on this process.
The parent process in the program first initialises the binary semaphore variable bsem to 1, indicating that the resource is available. As shown in the table (Figure 8), at time T1 no process is active to share the resource. But at time T2 all three processes become active and want to enter their critical sections to share the resource by running the wait operation. At T2, bsem is decremented to 0, which indicates that some process has been given permission to enter its critical section. At time T3, we find that it is P1 which has been given the permission. One important thing to note is that only one process at a time is allowed by the semaphore into the critical section.

Once P1 is given the permission, it prevents the other processes P2 and P3 from reading the value of bsem as 1, since the wait operation of P1 has already decremented bsem to 0. This is why the wait operation is executed without interruption.

After grabbing the control from the semaphore, P1 starts sharing the resource, which is depicted at time T3. At T4, P1 executes the signal operation to release the resource and comes out of its critical section. As shown in the table, the value of bsem becomes 1 again, since the resource is now free.

The two remaining processes P2 and P3 have an equal chance to compete for the resource. In our example, process P3 becomes the next to enter the critical section and to use the shared resource. At time T7, process P3 releases the resource and the semaphore variable bsem again becomes 1. At this time, the two other processes P1 and P2 will attempt to compete for the resource, and they have an equal chance to get access. In our example, it is P2 which gets the chance, but it might happen that one of the three processes never gets the chance.
There are a number of problems related to concurrency control which can be tackled easily with semaphores. These so-called classical problems include the bounded buffer problem and the readers/writers problem, but their solutions are beyond the scope of this unit.
Module Sem-mutex

var bsem : semaphore; {binary semaphore}

process P1;
begin
  while true do
  begin
    wait (bsem);
    critical-section;
    signal (bsem);
    rest-of-P1-Processing
  end {while}
end; {P1}

process P2;
begin
  while true do
  begin
    wait (bsem);
    critical-section;
    signal (bsem);
    rest-of-P2-Processing
  end {while}
end; {P2}

process P3;
begin
  while true do
  begin
    wait (bsem);
    critical-section;
    signal (bsem);
    rest-of-P3-Processing
  end {while}
end; {P3}

{Parent process}
begin {sem-mutex}
  bsem := 1 {free};
  initiate P1, P2, P3
end; {sem-mutex}

Program 2: Mutual Exclusion with Semaphore
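For comparison, here is a present-day rendering of Program 2 on a POSIX system, with three threads standing in for the three processes; sem_wait and sem_post play the roles of wait and signal. This sketch is our own illustration, not part of the original text.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t bsem;                      /* binary semaphore protecting the resource */
int shared = 0;                  /* the shared resource */

void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&bsem);             /* wait(bsem): enter critical section */
    shared++;                    /* critical section: use the shared resource */
    printf("P%ld saw shared = %d\n", id, shared);
    sem_post(&bsem);             /* signal(bsem): leave critical section */
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    sem_init(&bsem, 0, 1);       /* initialise to 1: the resource is free */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)(i + 1));
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&bsem);
    return 0;
}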

Figure 8: Run time behaviour of processes


2.4.5 Hardware Support for Mutual Exclusion
Semaphores solve the mutual exclusion problem in a simple and natural way. However, they require indivisibility of operations to function properly. In this section we will discuss the implementation of semaphores at the hardware level.
Test-and-Set Instruction: The test-and-set (TS) instruction is intended for direct hardware support of mutual exclusion. It is designed to allow only one process among several concurrent processes to enter its critical section.
The basic idea of this mechanism is to set a global control variable to 0 (FREE), indicating that the shared resource is available for access. Each process wishing to access this resource is to execute the TS instruction with the control variable as an operand to get permission. In principle, TS takes one operand: the address of the control variable, or a register, that may act as a semaphore. Now let us see how the wait operation on the semaphore variable S may be implemented through the TS instruction if it is available in the supporting hardware (S is shared by the processes competing for the resource):
WAIT-S:  TS S            ; request entry into the critical section
         BRNFREE WAIT-S  ; repeat until entry to the critical section is granted
Let us assume that S is initially set to 0 (zero), i.e. the resource is available. When S is set to 1, it enters the BUSY state. Suppose there are several processes executing WAIT-S to access the resource protected by the global semaphore variable S. When the TS instruction is executed on behalf of the first process, it finds S at 0 and sets the global variable S to 1 (BUSY); therefore, the resource can be accessed now. Since S was set to 0 prior to execution of the TS instruction, the second instruction, BRNFREE WAIT-S, will not be executed. All other processes, upon executing the TS instruction, find S set to 1 and continue looping (testing). As usual, the process using the resource is supposed to release it and reset S to 0. When S is reset, one of the processes looping on S (competing for the resource) gets permission to access the resource. The indivisible nature of TS makes certain that only one among several competing processes gets the permission.
The essence of the TS instruction is the indivisible testing of the global variable and its subsequent setting to BUSY. Therefore, other instructions that allow the contents of a variable to be tested and subsequently set as an indivisible operation are also candidates for the implementation of mutual exclusion.
The TS instruction is simple to use; it does not affect interrupts or the operating system while performing its job, and it may be used in a multiprocessor environment. But there are also some drawbacks to this mechanism. Suppose P1 is in its critical section, and another process P2, which is of higher priority than P1, preempts P1 and attempts to use the same resource. This will cause P2 to loop on the control variable awaiting its resetting, but the resetting will be done only by P1, which has been preempted. The two processes can become deadlocked in such a situation. One solution for such a situation is not to allow any process to be preempted within its critical section by its contenders. In that case the operating system has to keep track of the nature of the instructions being executed by user processes, which is rarely implemented because of the overhead imposed on the operating system. As a consequence, the possibility of preemption of a process within its critical section is a serious problem in systems that rely on hardware implementation of mutual exclusion by means of the TS instruction.
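For illustration, the C11 atomic_flag type provides exactly this kind of indivisible test-and-set, so the WAIT-S loop above can be rendered as a minimal spinlock sketch (our own example, not part of the original text):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* starts clear: 0 (FREE) */

void acquire(void)
{
    /* TS: indivisibly read the old value and set the flag to 1 (BUSY).
     * If the old value was already BUSY, keep looping (busy waiting). */
    while (atomic_flag_test_and_set(&lock))
        ;                              /* WAIT-S: repeat until granted */
}

void release(void)
{
    atomic_flag_clear(&lock);          /* reset to FREE so a waiter may enter */
}

int main(void)
{
    acquire();                         /* enter critical section */
    /* ... critical section ... */
    release();                         /* leave critical section */
    return 0;
}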

2.4.6 Mechanism for Structured Form of Interprocess Communication and Synchronization
In the previous section we demonstrated that the semaphore is a powerful tool for interprocess synchronization and mutual exclusion. Its simplicity and ease of implementation have made the semaphore a very popular tool that is found in most operating system packages. There are, however, some drawbacks to this mechanism:
1. Semaphores are unstructured. They force programmers to follow strictly the synchronization protocols (WAIT & SIGNAL). Any change in the WAIT and SIGNAL operation sequence, forgetting either of them, or simply jumping around them, may easily corrupt or block the entire system.
2. Semaphores do not support data abstraction. Data abstraction is a software model that specifies a set of data and the operations that can be performed on that data. Even when used properly, semaphores can only protect access to critical sections, but cannot restrict the type of operations performed on the shared resource by processes that have been granted permission. On one hand, semaphores encourage interprocess communication via global variables; on the other hand, they only protect against the dangers of concurrency. Such global variables remain vulnerable to illegal or meaningless manipulation by processes legally allowed to modify them by correctly executing the semaphore operations themselves.
There are several alternative mechanisms that support or enforce a more structured form of interprocess communication and synchronization. In this section we simply introduce these mechanisms without going into their implementation aspects. These mechanisms are critical regions, conditional critical regions, monitors and message passing. One of the main problems with semaphores is that they are not syntactically related to the resources they protect. A semaphore does not warn the compiler that a specific data structure/resource is being shared and that accesses to it need to be controlled. To remove this bottleneck, Brinch Hansen proposed two language constructs: (1) the critical region and (2) the conditional critical region. A critical region is a language construct that strictly enforces mutually exclusive use of a resource/data structure declared as shared. Critical regions enforce controlled usage of shared variables and prevent potential errors resulting from improper use of ordinary semaphores. The critical region construct can be effectively used to solve the critical section problem. It cannot, however, be used to solve some general synchronization problems. For this purpose, the conditional critical region must be used. It allows a process to wait till a certain condition is satisfied within a critical section, without preventing other eligible processes from accessing the shared resource.
These mechanisms concern themselves with allowing only one process at a time to access a shared resource; the process permitted to access the resource may still corrupt it. A monitor is a mechanism that supports the safe and effective sharing of resources among processes, in addition to concurrency and synchronization. The basic philosophy behind the monitor is to allow access to shared resources only through a set of well-defined public procedures, so as to prevent illegal or harmful operations. This form of data abstraction provides system integrity and easier program modification. Monitors, as well as the other mechanisms discussed earlier, support only centralized access to data through global variables. Their implementations tend to rely to a large extent on the assumption of a common memory accessible to all synchronizing processes. Accessing such global variables can result in considerable communication delays in a distributed system with no common memory. As a result, the straightforward application of centralized mechanisms for concurrency control to a distributed environment is often inefficient and slow.
The critical region, conditional critical region and monitor are one outgrowth of semaphores. They all provide structured ways to control access to shared variables. A different outgrowth is message passing (Figure 9), which can be viewed as extending semaphores to convey data (messages) as well as to implement synchronization.

Figure 9: Synchronization Techniques and Language Classes

A message is sent by executing:


Message passing is a relatively simple mechanism suitable for both interprocess communication and synchronization in centralised as well as distributed environments. When message passing is used for communication and synchronization, processes send and receive messages instead of reading and writing shared variables. Communication is performed because a process, upon receiving a message, obtains values from some sender process. Synchronization is achieved because a message can be received only after it has been sent, which constrains the order in which these two events can occur.

send (message) to destination-designator

The message contains the values of the expressions in an expression list at the time send is executed. The destination designator gives the programmer control over where the message goes, and hence over which statements can receive it. A message is received by executing:

receive (message) from source-designator

The message is received into a list of variables. The source designator gives the programmer control over where the message comes from, and hence over which statements could have sent it. Receipt of a message causes, first, assignment of the values in the message to the variables in the variable list and, second, subsequent destruction of the message.

If processes P and Q want to communicate, they must send and receive messages from each other; a communication link must exist between them. This link can be implemented in a variety of ways. We are not concerned here with the physical implementation of a link (such as shared memory or a hardware bus), but rather with the issues of its logical implementation, such as its logical properties. Some basic implementation questions are:
How are links established?
Can a link be associated with more than two processes?
How many links can there be between every pair of processes?
What is the capacity of a link? That is, does the link have some buffer space? If so, how much?
What is the size of messages? Can the link accommodate variable size or fixed size messages?
Is a link unidirectional or bi-directional? That is, if a link exists between P and Q, can messages flow in only one direction (such as only from P to Q) or in both directions?
We will elaborate on these issues in an advanced course on operating systems to be offered in the 3rd year of the MCA course.
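As one concrete realisation of such a link, a POSIX pipe provides a unidirectional, buffered byte channel between related processes. The sketch below is our own illustration, not part of the original text: it sends a single fixed-size message from a child to its parent.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    char msg[32];

    if (pipe(fd) == -1) return 1;    /* establish the (unidirectional) link */

    if (fork() == 0) {               /* child: the sender */
        close(fd[0]);
        const char hello[32] = "hello from the child";
        write(fd[1], hello, sizeof hello);   /* send(message) */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                    /* parent: the receiver */
    read(fd[0], msg, sizeof msg);    /* receive(message): blocks until sent */
    printf("parent received: %s\n", msg);
    close(fd[0]);
    wait(NULL);
    return 0;
}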

2.5 DEADLOCKS
In a multiprogramming environment several processes may compete for a fixed number of resources. A process requests resources and, if the resources are not available at that time, it enters a wait state. It may happen that it will never gain access to the resources, since those resources are being held by other waiting processes. For example, take a system with one tape drive and one plotter. Process P1 requests the tape drive and process P2 requests the plotter. Both requests are granted. Now P1 requests the plotter (without giving up the tape drive) and P2 requests the tape drive (without giving up the plotter). Neither request can be granted, so both processes enter a deadlock situation. A deadlock is a situation where a group of processes is permanently blocked as a result of each process having acquired a set of resources needed for its completion and having to wait for the release of the remaining resources held by others, thus making it impossible for any of the deadlocked processes to proceed.
Deadlocks can occur in concurrent environments as a result of the uncontrolled granting of system resources to requesting processes. In the following sections we will look at deadlocks more closely, see how they arise, and study some ways of preventing them.
2.5.1 System Model
Deadlocks can occur when processes have been granted exclusive access to devices, files and so forth. A system consists of a finite number of resources to be distributed among a number of competing processes. The resources can be divided into several types, each of which consists of some number of identical instances. CPU cycles, memory space, files and I/O devices (such as printers and tape drives) are examples of resource types. If a system has two tape drives, then the resource type tape drive has two instances.
If a process requests an instance of a resource type, any instance of that resource class may satisfy the request. If this is not the case, then the instances are not identical and the resource type classes have not been properly defined. For example, a system may have two printers; these two printers may be defined to be in the same printer class if one is not concerned about the type of printer (dot matrix or laser).
Whenever a process wants to utilize any resource, it must make a request for it. It may request as many resources as it wants, but it should not exceed the total number of resources available in the system. Once the process has utilized the resource, it must release it. Therefore, the sequence of events to use a resource is:
(i) Request the resource
(ii) Use the resource
(iii) Release the resource
If the resource is not available when it is requested, then the requesting process must wait until it can acquire it. In some operating systems, the process is automatically blocked when a resource request fails and awakened when the resource becomes available.
The exact nature of requesting a resource is highly system dependent. The request and release of resources are implemented through system calls, which vary from one system to another. Examples of these calls are request/release device, open/close file etc.
2.5.2 Deadlock Characterisation and Modelling


Deadlocks are an undesirable feature. In most deadlock situations, a process is waiting for the release of some resource concurrently possessed by some deadlocked process. Before we go further, it would be helpful to describe some conditions that characterise deadlock:

Figure 10: Graphic Representation of Resource Allocation: (a) holding a resource; (b) making a request for a resource; (c) deadlock situation

1. Mutual exclusion: Only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: There must be a process that is holding one resource and waiting for another resource that is currently being held by another process.
3. No preemption: Resources previously granted cannot be forcibly taken away from a process. They must be explicitly released by the process holding them.
4. Circular wait: There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
Now we will show how these four conditions can be modeled using directed graphs. The graphs have two kinds of nodes: processes, shown as circles, and resources, shown as squares. An arc from a resource node (square) to a process node (circle) means that the resource is currently being held by the process. In Figure 10(a), resource R1 is currently assigned to process P1. An arc from a process to a resource means that the process is waiting for the resource. In Figure 10(b), process P2 is waiting for resource R2. In Figure 10(c) we see a deadlock: both processes P3 and P4 are waiting for resources which they will never get. Process P3 is waiting for R3, which is currently held by P4. P4 will not release R3 because it is waiting for resource R4.
To demonstrate an example, let us imagine that we have three processes, P1, P2 and P3, and three resources, R1, R2 and R3. The requests and releases of the three processes are given in Figure 11(a)-(c). The operating system is free to run any unblocked process at any instant, so it could decide to run P1 until P1 finishes all its work, then run P2 to completion, and finally run P3.
If there is no competition for resources (as we saw in Figure 11(a)-(c)), there will be no
deadlock occurrence. Let us suppose that the requests for resources are made in the order
of Figure 11(d). If these six requests are carried out in that order, the six resulting
resource graphs are shown in Figure 11(e)-(j). After request 4 has been made, P1 blocks
waiting for R2, as shown in Figure 11(h). In the next two steps, 5 and 6, P2 and P3 also block,
ultimately leading to a cycle and the deadlock situation shown in Figure 11(j).
The operating system is not required to run the processes in any special order. In particular,
if granting a particular request might lead to deadlock, the operating system can simply
suspend the process without granting the request (i.e. just not schedule the process) until it is
safe. In Figure 11, if the operating system knew about the impending deadlock, it could suspend
P2 instead of granting it R2. By running only P1 and P3, we would get the requests and
releases of Figure 11(k) instead of Figure 11(d). This sequence leads to the resource graphs of
Figure 11(l)-(q), which do not lead to deadlock.
After step (q), process P2 can be granted R2 because P1 is finished and P3 has everything it
needs. Even if P2 should eventually block when requesting R3, no deadlock can occur. P2
will just wait until P3 is finished.
The point to understand here is that resource graphs are a tool for seeing whether a given
request/release sequence leads to deadlock: we just carry out the requests and releases step
by step, and after every step check the graph to see if it contains any cycles. If so, we have a
deadlock; if not, there is no deadlock. Resource graphs can also be generalised to handle
multiple resources of the same type.
In general, four strategies are used for dealing with deadlocks.
1. Just ignore the problem altogether.
2. Detection and recovery.
3. Prevention, by negating one of the four necessary conditions.
4. Dynamic avoidance by careful resource allocation.
Discussion of these strategies is beyond the scope of this unit.
    P1              P2              P3
Request R1      Request R2      Request R3
Request R2      Request R3      Request R1
Release R1      Release R2      Release R3
Release R2      Release R3      Release R1
    (a)             (b)             (c)

(d)
1. P1 requests R1
2. P2 requests R2
3. P3 requests R3
4. P1 requests R2
5. P2 requests R3
6. P3 requests R1
deadlock

(k)
1. P1 requests R1
2. P3 requests R3
3. P1 requests R2
4. P3 requests R1
5. P1 releases R1
6. P1 releases R2
no deadlock

Figure 11: Occurrence & Avoidance of Deadlock

Check Your Progress 3


1. Define mutual exclusion. How do semaphores solve the mutual exclusion problem?

2. What is a semaphore? What are its drawbacks?

3. Give several reasons why the study of concurrency is appropriate in operating
system study.

4. List several examples of deadlocks that are not related to computer system
environments. Describe some of their characteristics.

2.6 SUMMARY
Process is an important concept in modern operating systems. Processes provide a suitable
means for informing the operating system about independent activities that may be scheduled
for concurrent execution. Each process is represented by a process control block (PCB).
Several PCBs can be linked together to form a queue of waiting processes. The selection and
allocation of processes is done by a scheduler. There are several scheduling algorithms:
first-in-first-out scheduling, round robin scheduling, shortest-job-first and
priority scheduling.
Cooperating processes must synchronize with each other whenever they wish to use resources
shared with several other processes. At most one process should be allowed to enter the critical
section of code within which a particular shared variable or data structure is updated.
Semaphores are a simple but powerful interprocess synchronization mechanism based on this
concept. We also discussed hardware support for concurrency control, which in one form or
another has become an integral part of virtually all computer architectures.
We could not discuss semaphore-based solutions to several synchronization problems such as
the producer/consumer and readers/writers problems. The monitor concept provides a structured form
of interprocess synchronization and communication compared to semaphores. Message
passing allows interprocess communication and synchronization without the need for global
variables, which is suitable for both centralized and distributed systems.
Deadlocks are a common problem in systems where concurrent processes need simultaneous
exclusive access to a collection of shared resources. A deadlock situation may occur if and
only if four necessary conditions simultaneously hold in the system: mutual exclusion, hold
and wait, no preemption and circular wait. To prevent deadlocks it is essential that
one of the necessary conditions never occurs.
We discussed all these issues in greater detail.
2.7 MODEL ANSWERS
Check Your Progress 1
1. The concept of a process is either directly or indirectly available in practically all
operating systems supporting the multiprogramming feature. Even high level
languages like Ada and Modula incorporate this mechanism for the management of
concurrent processes. A process is an instance of a program in execution. It is the
smallest piece of work that is individually schedulable by an operating system.

Process, used somewhat interchangeably with task, has been given many
definitions. Some of these are:
- An asynchronous activity.
- An entity to which processors are assigned.
- That which is manifested by the existence of a "Process Control Block" in the
operating system.
Many other definitions have been given. There is no universally agreed upon definition, but
"instance of a program in execution" seems to be the most frequently used.
2. Concurrent processing or concurrent execution of processes is usually beneficial in
single-user environments as well. For example, in a workstation environment, a
multiprocess operating system can support receipt of network broadcasts concurrently with
other user activities, support of multiple active windows, concurrent printing, etc.
The drawback is the increased complexity, overhead and resource requirements of
multiprocess operation. However, this is not a big problem in a single-user
environment where the CPU and most other resources are lying unutilised most of the
time.

Check Your Progress 2


1. There are usually three different types of schedulers existing in a complex
operating system: long-term, medium-term and short-term schedulers.
The primary objective of the long-term scheduler is to provide a balanced mix of
jobs, such as CPU bound and I/O bound, to the short-term scheduler. It keeps the
resource utilisation at the desired level. For example, when the CPU utilisation is
low, it may admit more jobs in the ready queue for the allocation of CPU time.
Conversely, when the utilisation factor becomes high, it may reduce the
number of admitted jobs.
2. No model answer.
3. (a) FCFS is a non-preemptive algorithm.
(b) Shortest-job-first and priority-based algorithms may be either preemptive or
non-preemptive.
(c) Round robin is a preemptive algorithm.

Check Your Progress 3


1. Only the process executing the critical section (or critical region) should be
allowed to access the shared resources; all the other processes competing for
the shared resources should be excluded from doing so until the completion of the
critical section. This is often referred to as mutual exclusion, whereby a single
process temporarily prevents all others from using a shared resource during a
critical operation that might otherwise adversely affect the system's integrity.
Enforcing mutual exclusion is one of the key problems in concurrent
programming. Many solutions have been developed: some software solutions and
some hardware solutions; some requiring voluntary cooperation among processes,
and some demanding rigid adherence to strict protocols.
A semaphore mechanism supports two primitive operations that operate on a special type of
semaphore variable, bsem in our example (program 2).
The semaphore variable can assume nonnegative integer values and can be accessed and
manipulated by SIGNAL and WAIT operations only. A binary semaphore bsem is used
to protect the shared resource by enforcing its use in a mutually exclusive manner.
Each process (in our example there are 3 processes) ensures the integrity of its critical
section by opening it with a WAIT operation and closing it with a SIGNAL operation. In
this manner, several concurrent processes might compete for the same resource safely,
provided each of them uses the WAIT and SIGNAL operations.
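As a concrete illustration, and assuming a POSIX environment, the sketch below expresses the same discipline with POSIX threads and semaphores, where sem_wait plays the role of WAIT and sem_post the role of SIGNAL. The shared counter and the iteration count are arbitrary choices made for this sketch.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t bsem;       /* binary semaphore, initialised to 1     */
static int   shared = 0; /* the shared resource protected by bsem  */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&bsem);   /* WAIT: open the critical section   */
        shared++;          /* critical section: update shared   */
        sem_post(&bsem);   /* SIGNAL: close the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];               /* three competing threads, echoing the
                                     three processes of program 2        */
    sem_init(&bsem, 0, 1);        /* initial value 1 => mutual exclusion */
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);  /* 300000 when protected by bsem */
    return 0;
}

With the WAIT/SIGNAL pair removed, the final value of shared would be unpredictable, since the three threads would then update it concurrently.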
2. No model answer.
3. (i) Today the trend is towards multiprocessing and massive parallelism.
(ii) It is difficult to determine manually what activities can and cannot be per-
formed in parallel.
(iii) Parallel programs are much more difficult to debug than sequential programs.
Interactions between processes can be complex.
4. No model answer.

2.8 FURTHER READINGS


1. Operating System: Concept and Design by Madnick & Donovan, McGraw-Hill
Intl. Edition.
2. Operating System Concepts by Abraham Silberschatz and James L. Peterson,
Addison-Wesley Publishing Company.
3. Introduction to Operating Systems by Harvey M. Deitel, Addison-Wesley
Publishing Company.
4. Operating Systems Design and Implementation by Andrew S. Tanenbaum, Prentice
Hall of India.
5. Operating Systems: Concepts and Design by Milan Milenkovic, McGraw-Hill.
