Unit 2-2

The document covers the concept of concurrent processes in operating systems, detailing process states, control blocks, and synchronization mechanisms. It discusses classical problems like the Producer-Consumer and Dining Philosopher problems, along with solutions such as Dekker’s and Peterson’s algorithms, semaphores, and critical section management. The principles of concurrency, including mutual exclusion and deadlock avoidance, are emphasized as essential for efficient process execution.

Concurrent Process

UNIT-2
OUTLINE
• Process Concept
• Principle of Concurrency
• Producer and Consumer Problem
• Mutual Exclusion
• Critical Section Problem
• Dekker’s Solution
• Peterson’s Solution
• Semaphores
• Test and Set Operations
• Classical Problems in Concurrency – Dining Philosopher Problem
• Sleeping Barber Problem
• Interprocess Communication Models
• Process Generation
Process Concept
• Process
• Process States
• Process Control Blocks
• Threads
Process
• Process – a program in execution; process execution must progress in a sequential fashion.
• A program is a passive entity, while a process is an active entity.
• A program becomes a process when an executable file is loaded into memory.
• A system consists of a collection of processes.
• All these processes can execute concurrently by switching the CPU among them, which increases CPU utilization.

A process includes:
• program counter
• stack
• data section
• Heap
The Structure of Process in Memory
• Process memory is divided into four sections as shown in Figure
below:
• The text section comprises the compiled program code, read in from
non-volatile storage when the program is launched.
• The program counter holds the address of the next instruction to execute.
• The data section stores global and static variables, allocated and initialized
prior to executing main.
• The heap is used for dynamic memory allocation, and is managed via calls to
new, delete, malloc, free, etc.
• The stack is used for local variables. Space on the stack is reserved for local
variables when they are declared ( at function entrance or elsewhere,
depending on the language ), and the space is freed up when the variables go
out of scope. Note that the stack is also used for function return values.
Process in Memory
Process State
• Processes may be in one of five states, as shown in the figure below.
• New - The process is in the stage of being created.
• Ready - The process has all the resources available that it needs to run, but the CPU
is not currently working on this process's instructions.
• Running - The CPU is working on this process's instructions.
• Waiting - The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur. For example the process
may be waiting for keyboard input, disk access request, inter-process messages, a
timer to go off, or a child process to finish.
• Terminated - The process has completed.
• The load average reported by the "w" command indicates the average number
of processes in the "Ready" state over the last 1, 5, and 15 minutes, i.e.
processes that have everything they need to run but cannot because the CPU
is busy with something else.
• Some systems may have other states besides the ones listed here.
Process States
Process Control Block
• A process control block (PCB) is a data structure used by
computer operating systems to store all the information about a
process.
• It is also known as a process descriptor.
• When a process is created (initialized or installed), the operating
system creates a corresponding process control block.
• For each process there is a Process Control Block, PCB, which stores
the following ( types of ) process-specific information
Process Control Block
• Process State - New, Ready, Running, Waiting, or Terminated.
• Process ID, and parent process ID.
• CPU registers and Program Counter - These need to be saved and
restored when swapping processes in and out of the CPU.
• CPU-Scheduling information - Such as priority information and
pointers to scheduling queues.
• Memory-Management information - E.g. page tables or segment
tables, base register, limit register.
• Accounting information - user and kernel CPU time consumed,
account numbers, limits, etc.
• I/O Status information - Devices allocated, open file tables, etc.
Process Control Block
Context Switching
• A context switch is the procedure by which the CPU switches from one
process or task to another.
• In this phenomenon, the kernel suspends the execution of the process in
the running state, and the CPU executes another process that is in the
ready state.
• It is one of the essential features of a multitasking operating system.
• The processes are switched so quickly that the user gets the illusion that
all processes are being executed at the same time.
CPU Switch from process to process
Principle of Concurrency
• Concurrency in operating systems refers to the ability of an operating
system to handle multiple tasks or processes at the same time.
• With the increasing demand for high performance computing,
concurrency has become a critical aspect of modern computing
systems.
• Operating systems that support concurrency can execute multiple tasks
simultaneously, leading to better resource utilization, improved
responsiveness, and enhanced user experience.
Principle of Concurrency
• The principles of concurrency in operating systems are designed to
ensure that multiple processes or threads can execute efficiently and
effectively, without interfering with each other or causing deadlock.
• Interleaving − Interleaving refers to the interleaved execution of
multiple processes or threads. The operating system uses a scheduler
to determine which process or thread to execute at any given time.
Interleaving allows for efficient use of CPU resources and ensures that
all processes or threads get a fair share of CPU time.
Principle of Concurrency
• Synchronization − Synchronization refers to the coordination of
multiple processes or threads to ensure that they do not interfere with
each other. This is done through the use of synchronization primitives
such as locks, semaphores, and monitors. These primitives allow
processes or threads to coordinate access to shared resources such as
memory and I/O devices.
• Mutual exclusion − Mutual exclusion refers to the principle of
ensuring that only one process or thread can access a shared resource
at a time. This is typically implemented using locks or semaphores to
ensure that multiple processes or threads do not access a shared
resource simultaneously.
Principle of Concurrency
• Deadlock avoidance − Deadlock is a situation in which two or more
processes or threads are waiting for each other to release a resource,
resulting in a deadlock. Operating systems use various techniques such as
resource allocation graphs and deadlock prevention algorithms to avoid
deadlock.
• Process or thread coordination − Processes or threads may need to
coordinate their activities to achieve a common goal. This is typically
achieved using synchronization primitives such as semaphores or message
passing mechanisms such as pipes or sockets.
• Resource allocation − Operating systems must allocate resources such as
memory, CPU time, and I/O devices to multiple processes or threads in a
fair and efficient manner. This is typically achieved using scheduling
algorithms such as round-robin, priority-based, or real-time scheduling.
Process Synchronization
• Process Synchronization is the coordination of execution of multiple
processes in a multi-process system to ensure that they access shared
resources in a controlled and predictable manner.
• The main objective of process synchronization is to ensure that
multiple processes access shared resources without interfering with
each other, and to prevent the possibility of inconsistent data due to
concurrent access.
• To achieve this, various synchronization techniques such as
semaphores, monitors, and critical sections are used.
Process Synchronization
Processes are categorized as one of the following two types:
• Independent Process: The execution of one process does not affect the
execution of other processes. No data is shared.
• Cooperative Process: A process that can affect or be affected by other
processes executing in the system, e.g. the Producer-Consumer problem.
Process synchronization problems arise with cooperative processes because
they share resources.
Race Condition:
• When more than one process executes the same code or accesses the same
memory or shared variable, the output or the value of the shared variable
may be wrong. Because the processes effectively race against each other for
access, and the result depends on which one wins, this condition is known
as a race condition.
• A race condition is a situation that may occur inside a critical section.
This happens when the result of multiple thread execution in the
critical section differs according to the order in which the threads
execute. Race conditions in critical sections can be avoided if the
critical section is treated as an atomic instruction.
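A race condition can be demonstrated directly. The sketch below (names are illustrative) has two threads increment an unprotected counter, where the read-modify-write of `counter += 1` may interleave, and two threads increment a second counter inside a lock, which makes the update atomic:

```python
import threading

COUNT = 100_000
counter = 0        # shared variable, no protection
safe_counter = 0   # shared variable guarded by a lock
lock = threading.Lock()

def unsafe_increment():
    global counter
    for _ in range(COUNT):
        counter += 1          # read-modify-write: not atomic in general

def safe_increment():
    global safe_counter
    for _ in range(COUNT):
        with lock:            # critical section: one thread at a time
            safe_counter += 1

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
threads += [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(safe_counter)   # always 200000
print(counter)        # may be less than 200000 when increments interleave
```

Treating the increment as one indivisible step, as the locked version does, is exactly what "treating the critical section as an atomic instruction" means.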
Necessary Requirements to Solve the Critical Section Problem
• A critical section is a code segment that can be accessed by only one
process at a time. The critical section contains shared variables that
need to be synchronized to maintain the consistency of data variables.
Critical Section
• Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other
processes are waiting outside it, then only those processes that are not
executing in their remainder section can participate in deciding which will
enter the critical section next, and the selection cannot be postponed
indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
Synchronization Mechanism
• With Busy Waiting
• Without Busy Waiting
Solution for Critical Section Problem
• Lock Variable
• Test & SET
• Dekker’s Algorithm
• Peterson’s Algorithm
Lock Variable Synchronization Mechanism
• This is the simplest synchronization mechanism.
• This is a Software Mechanism implemented in User mode.
• This is a busy waiting solution which can be used for more than two
processes.
• In this mechanism, a lock variable is used, which can take one of two
values: 0 or 1.
• Lock value 0 means that the critical section is vacant, while lock
value 1 means that it is occupied.
Lock Variable Synchronization Mechanism
Test and Set Algorithm
• Test and Set Lock (TSL) is a synchronization mechanism.
• It uses a test and set instruction to provide the synchronization among
the processes executing concurrently.
Test and Set Algorithm
Dekker’s Algorithm (Turn Variable or Strict Alternation)
• The Turn Variable or Strict Alternation approach is a software mechanism
implemented in user mode.
• It is a busy waiting solution that can be implemented only for two
processes.
• In this approach, a turn variable is used, which acts as a lock.
Turn Variable or Strict Alternation
Disadvantages
• Mutual Exclusion is assured but progress is not assured.
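A runnable sketch of strict alternation (names are ours). Each process busy-waits until `turn` equals its own index, enters the critical section, then hands the turn to the other process. The log shows the forced alternation, which is also the weakness: a process cannot enter twice in a row even if the other never wants the critical section, so progress is not assured:

```python
import threading

turn = 0   # whose turn it is to enter the critical section
log = []   # records the order of critical-section entries

def process(i, rounds):
    global turn
    for _ in range(rounds):
        while turn != i:      # entry section: busy-wait for my turn
            pass
        log.append(i)         # critical section
        turn = 1 - i          # exit section: hand the turn over

t0 = threading.Thread(target=process, args=(0, 3))
t1 = threading.Thread(target=process, args=(1, 3))
t0.start(); t1.start(); t0.join(); t1.join()
print(log)  # [0, 1, 0, 1, 0, 1] -- strict alternation
```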
Dekker’s Algorithm
Peterson’s Solution
• Peterson’s solution is one of the most widely used solutions to the
critical section problem. It is a classical software-based solution.

In this solution, we use two shared variables:


1. int turn – indicates whose turn it is to enter the critical section.
2. boolean flag[i] – a value of TRUE indicates that process i wants to
enter the critical section. It is initialized to FALSE, indicating that no
process wants to enter the critical section.
Peterson’s Solution
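A minimal sketch of Peterson’s solution for two processes (names are ours). It works here because CPython’s interpreter gives a sequentially consistent interleaving of the shared-variable accesses; in C or Java the same code would additionally need atomic/volatile variables and memory fences. The switch interval is shortened only so the demonstration finishes quickly:

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often to exercise the algorithm

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # which process must yield when both want in
counter = 0            # shared variable updated in the critical section

def process(i, rounds):
    global turn, counter
    j = 1 - i
    for _ in range(rounds):
        flag[i] = True               # announce intent to enter
        turn = j                     # give priority to the other process
        while flag[j] and turn == j:
            pass                     # busy wait
        counter += 1                 # critical section
        flag[i] = False              # exit section

t0 = threading.Thread(target=process, args=(0, 200))
t1 = threading.Thread(target=process, args=(1, 200))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 400: mutual exclusion held for every increment
```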
Advantages of Peterson’s solution:
It is able to preserve all the three rules for the solution of the critical
section:
1. It assures Mutual Exclusion, as only one process can access the
critical section at a time.
2. It assures progress, as no process is blocked due to processes that are
outside.
3. It assures Bounded Waiting, as every process gets a chance.

Disadvantages of Peterson’s solution:
1. Peterson’s solution is limited to two processes.
2. It involves Busy Waiting.
Semaphores
• A semaphore is an integer variable used to solve the critical section
problem by means of two atomic operations, wait and signal, for process
synchronization.
Wait
• It decrements the value of its argument S once S is positive. As long as
S is zero or negative, the process busy-waits.

wait(S)
{
    while (S <= 0)
        ;        // busy wait
    S--;
}
Signal
This operation increments the actual value of its argument S.
signal(S)
{
S++;
}
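In Python, `threading.Semaphore` provides these two operations: `acquire()` corresponds to wait and `release()` to signal. A small sketch (worker names are ours) uses a semaphore initialized to 1 as a mutual-exclusion lock around a shared list:

```python
import threading

sem = threading.Semaphore(1)  # binary semaphore, initial value 1
shared = []                   # shared data structure

def worker(i):
    sem.acquire()          # wait(S): blocks while the value is 0
    shared.append(i)       # critical section
    sem.release()          # signal(S): increments the value

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4]
```

Unlike the busy-waiting pseudocode above, the library semaphore blocks the waiting thread instead of spinning.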
Types of Semaphores
• Binary Semaphore
Mutex lock is another name for a binary semaphore. It can only have two
possible values, 0 and 1, and its value is set to 1 by default. It is used
by multiple processes to solve the critical section problem.
• Counting Semaphore
A counting semaphore’s value can range over an unrestricted domain. It is
used to control access to a resource that has multiple instances.
Binary Semaphores
• The value of a semaphore variable in binary semaphores is either 0 or
1. The value of the semaphore variable is initially set to 1, but if a
process requests a resource, the wait() method is invoked, and the
value of this semaphore is changed from 1 to 0.
• When the process has finished using the resource, the signal() method
is invoked, and the value of this semaphore variable is raised to 1.
• If the value of this semaphore variable is 0 at a given point in time,
and another process wants to access the same resource, it must wait for
the prior process to release the resource. Process synchronization can
be performed in this manner.
Classical Problems in Concurrency
• Producer Consumer Problem
• Reader/ Writer Problem
• Dining Philosopher Problem
• Sleeping Barber Problem
Reader/ Writer Problem
• The readers-writers problem is a classical problem of process synchronization, it relates to
a data set such as a file that is shared between more than one process at a time.
• Among these various processes, some are Readers - which can only read the data set; they
do not perform any updates, some are Writers - can both read and write in the data sets.
• The readers-writers problem is used for managing synchronization among various reader
and writer process so that there are no problems with the data sets, i.e. no inconsistency is
generated.
• Let's understand with an example - If two or more than two readers want to access the file
at the same point in time there will be no problem.
• However, problems may occur when two writers, or one reader and one writer,
want to access the file at the same time. The task is therefore to design the
code so that: if one reader is reading, no writer is allowed to update the
file at the same time; if one writer is writing, no reader is allowed to read
the file; and if one writer is updating the file, no other writer may update
it at the same time. Multiple readers, however, can access the object at the
same time.
Reader/ Writer Problem
• The readers-writers problem can be solved using binary semaphores.
• Writer process:
1. Writer requests the entry to critical section.
2. If allowed i.e. wait() gives a true value, it enters and performs the
write. If not allowed, it keeps on waiting.
3. It exits the critical section.
Writer process:

do {
// writer requests for critical section
wait(wrt);

// performs the write

// leaves the critical section


signal(wrt);

} while(true);
Reader Process
wait(mutex)
readerCount++
if (readerCount == 1)       // first reader locks out writers
    wait(wrt)
signal(mutex)

... read operation ...

wait(mutex)
readerCount--
if (readerCount == 0)       // last reader lets writers back in
    signal(wrt)
signal(mutex)
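The writer and reader pseudocode above can be sketched as a runnable program (function names are ours; `mutex` protects `readerCount`, and `wrt` is held either by one writer or collectively by the group of readers):

```python
import threading

mutex = threading.Semaphore(1)   # protects readerCount
wrt = threading.Semaphore(1)     # writer lock / first-reader lock
readerCount = 0
data = 0                         # the shared data set

def reader():
    global readerCount
    mutex.acquire()
    readerCount += 1
    if readerCount == 1:    # first reader locks out writers
        wrt.acquire()
    mutex.release()
    _ = data                # read operation
    mutex.acquire()
    readerCount -= 1
    if readerCount == 0:    # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer():
    global data
    wrt.acquire()
    data += 1               # write operation, mutually excluded
    wrt.release()

threads = [threading.Thread(target=reader) for _ in range(4)]
threads += [threading.Thread(target=writer) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(data)  # 3: each writer performed exactly one update
```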
Producer/ Consumer Problem
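A sketch of the bounded-buffer Producer-Consumer solution with three semaphores (names are ours): `empty` counts free slots, `full` counts filled slots, and `mutex` protects the buffer itself:

```python
import collections
import threading

N = 5                            # buffer capacity
buffer = collections.deque()
empty = threading.Semaphore(N)   # counts free slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Semaphore(1)   # protects the buffer

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot
        mutex.acquire()
        buffer.append(item)      # put the item into the buffer
        mutex.release()
        full.release()           # one more filled slot

consumed = []

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait for an item
        mutex.acquire()
        consumed.append(buffer.popleft())  # take the oldest item
        mutex.release()
        empty.release()          # one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With a single producer and a single consumer, items are consumed in the order they were produced; the semaphores prevent the producer from overfilling the buffer and the consumer from reading an empty one.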
The Dining Philosopher Problem
• The Dining Philosopher Problem states that K philosophers are seated
around a circular table with one chopstick between each pair of
philosophers. A philosopher may eat only if he can pick up the two
chopsticks adjacent to him. Each chopstick may be picked up by either of
its adjacent philosophers, but not by both at once.
The Dining Philosopher Problem
Solution with Semaphores
void philosopher(int i)
{
    while (1)
    {
        wait( chopstick[i] );            // pick up left chopstick
        wait( chopstick[(i + 1) % 5] );  // pick up right chopstick

        /* EATING THE NOODLE */

        signal( chopstick[i] );            // put down left chopstick
        signal( chopstick[(i + 1) % 5] );  // put down right chopstick

        /* THINKING */
    }
}
The drawback of the above solution of the dining philosopher
problem
• From the above solution of the dining philosopher problem, we have
proved that no two neighboring philosophers can eat at the same point
in time.
• The drawback of the above solution is that this solution can lead to a
deadlock condition.
• This situation happens if all the philosophers pick their left chopstick
at the same time, which leads to the condition of deadlock and none of
the philosophers can eat.
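One standard way to break the deadlock is to make the acquisition order asymmetric: the last philosopher picks up the right chopstick first. This imposes a global ordering on the chopsticks, so a circular wait can never form. A runnable sketch (names are ours):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N   # how many times each philosopher has eaten

def philosopher(i, rounds):
    # Asymmetry breaks the circular wait: philosopher N-1 picks up
    # the right chopstick first, everyone else the left one first.
    if i < N - 1:
        first, second = i, (i + 1) % N
    else:
        first, second = (i + 1) % N, i
    for _ in range(rounds):
        chopstick[first].acquire()
        chopstick[second].acquire()
        meals[i] += 1                  # eating
        chopstick[second].release()
        chopstick[first].release()
        # thinking

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [10, 10, 10, 10, 10] -- everyone eats, no deadlock
```

With this ordering every philosopher acquires the lower-numbered chopstick first, so the program always terminates; under the original symmetric code it could hang forever.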
Sleeping Barber problem
• There is a barber shop with one barber and a number of chairs for
waiting customers.
• Customers arrive at random times and if there is an available chair,
they take a seat and wait for the barber to become available.
• If there are no chairs available, the customer leaves. When the barber
finishes with a customer, he checks if there are any waiting customers.
• If there are, he begins cutting the hair of the next customer in the
queue. If there are no customers waiting, he goes to sleep.
Sleeping Barber problem
Solution of Sleeping Barber Problem using
Semaphores
chairs = N                      // number of waiting chairs
semaphore customers = 0         // number of waiting customers
semaphore barber = 0            // barber is idle (sleeping)
semaphore mutex = 1             // protects `waiting`
int waiting = 0                 // number of waiting customers
Barber Process
while (true)
{
    wait(customers);        // sleep until a customer arrives
    wait(mutex);
    waiting = waiting - 1;
    signal(barber);         // barber is ready to cut hair
    signal(mutex);
    cut_hair();
}
Customer Process
while (true)
{
    wait(mutex);
    if (waiting < chairs)
    {
        waiting = waiting + 1;
        signal(customers);  // wake the barber if necessary
        signal(mutex);
        wait(barber);       // wait for the barber to be free
        get_haircut();
    }
    else
        signal(mutex);      // no free chair: the customer leaves
}
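The same scheme runs as a Python sketch (names are ours; the barber serves a fixed number of customers instead of looping forever, and with more chairs than customers nobody is turned away):

```python
import threading

CHAIRS = 5
customers = threading.Semaphore(0)  # number of waiting customers
barber = threading.Semaphore(0)     # barber ready to cut hair
mutex = threading.Semaphore(1)      # protects `waiting`
waiting = 0
served = []                         # customers who got a haircut

def barber_proc(n):
    global waiting
    for _ in range(n):          # serve n customers, then close the shop
        customers.acquire()     # sleep until a customer arrives
        mutex.acquire()
        waiting -= 1
        mutex.release()
        barber.release()        # invite the next customer
        # cut hair ...

def customer_proc(name):
    global waiting
    mutex.acquire()
    if waiting < CHAIRS:
        waiting += 1
        customers.release()     # wake the barber if he is asleep
        mutex.release()
        barber.acquire()        # wait for the barber
        served.append(name)     # get haircut
    else:
        mutex.release()         # no free chair: leave

b = threading.Thread(target=barber_proc, args=(3,))
cs = [threading.Thread(target=customer_proc, args=(i,)) for i in range(3)]
b.start()
for c in cs: c.start()
for c in cs: c.join()
b.join()
print(sorted(served))  # [0, 1, 2] -- all three customers were served
```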
Interprocess Communication
• Interprocess communication takes place between the cooperating
processes. These processes communicate with each other by sharing
data and information.
Methods of Interprocess Communication
• There are two methods or two models that operating systems can
implement to achieve IPC.
1. Shared Memory
2. Message Passing
Shared Memory System
• An operating system implementing the shared memory model provides a shared
memory system. Here a cooperating process creates a shared memory segment in
its address space; any other process that wants to communicate with it must
attach itself to this shared memory segment.

The communicating processes share information by:

• Writing data to the shared region
• Reading data from the shared region

Normally the operating system does not allow one process to access the
memory of another. In a shared memory system, two or more processes agree to
remove this restriction so they can exchange data directly.
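A sketch of shared-memory IPC using Python’s `multiprocessing` (names are ours; the `fork` start method is assumed, which is POSIX-only). `Value` places a C integer in a shared memory segment that both child processes attach to, and its built-in lock synchronizes the updates:

```python
import multiprocessing as mp

ctx = mp.get_context("fork")       # POSIX-only start method, assumed here

def increment(counter, n):
    for _ in range(n):
        with counter.get_lock():   # Value carries its own lock
            counter.value += 1     # write into the shared segment

counter = ctx.Value("i", 0)        # a C int living in shared memory
workers = [ctx.Process(target=increment, args=(counter, 1000))
           for _ in range(2)]
for w in workers: w.start()
for w in workers: w.join()
print(counter.value)  # 2000
```

Without the lock, the two processes would race on the shared integer exactly as threads race on a shared variable.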
Message Passing System
• The message-passing system lets cooperating processes communicate
through a message-passing facility.
• The message-passing system is more useful in the distributed environment.
As in the distributed environment, the communicating processes may be
running on different systems. And these systems are connected via a
network.
• The message-passing system provides communication through at least two
operations: send(message) and receive(message). Messages can be of fixed or
variable size.
• With the fixed-size messages, the system-level implementation becomes
simple. But the programming of such a system becomes difficult.
• With the variable size messages, the system-level implementation is
difficult. But the programming of such a system becomes simpler.
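A sketch of message-passing IPC with a `multiprocessing` pipe (names are ours; the `fork` start method is assumed, which is POSIX-only). The two ends of the pipe give each process a send and a receive operation; no memory is shared:

```python
import multiprocessing as mp

ctx = mp.get_context("fork")       # POSIX-only start method, assumed here

def child(conn):
    msg = conn.recv()              # receive(message)
    conn.send(msg.upper())         # send(message): reply to the parent
    conn.close()

parent_end, child_end = ctx.Pipe() # a duplex message channel
p = ctx.Process(target=child, args=(child_end,))
p.start()
parent_end.send("hello")           # parent sends a message
reply = parent_end.recv()          # parent receives the reply
p.join()
print(reply)  # HELLO
```

The same pattern works across machines if the pipe is replaced by a network socket, which is why message passing suits distributed environments.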
