
UNIT 2

Operating Systems
Process
A process is an instance of a computer program that is being executed. A computer program is
passive: it is a set of instructions (code) stored in the file system. A process is active: the
program is loaded into memory and executed.

A process consists of the following resources:

 Text (machine code of the compiled program)

 Data (global, static, constant, and uninitialized variables)

 Heap (dynamic memory allocation)

 Stack (temporary data such as local variables, function calls, parameters, return addresses, etc.)

In the memory layout of a process, both the Heap and the Stack can grow, depending on
dynamic memory allocation and the depth of function calls.

What are the Process States in Operating System?


 From start to finish, a process goes through a number of stages. A minimum of five
states is required, although the names of the states are not standardised across
operating systems. Throughout its life cycle, each process goes through the following states:


New State
 When a program in secondary memory is started for execution, the process is said to be in a
new state.

Ready State
 After being loaded into the main memory and becoming ready for execution, a process transitions
from the new state to the ready state. The process now waits in the ready state for the
processor to execute it. Many processes may be in the ready state in a multiprogramming
environment.

Run State
 After being allotted the CPU for execution, a process passes from the ready state to the run
state.

Terminate State
 When a process’s execution is finished, it goes from the run state to the terminate state. The
operating system deletes the process control block (PCB) after the process enters the terminate state.

Block or Wait State


 If a process requires an Input/Output operation or a blocked resource during execution, it
changes from run to block or the wait state.
 The process advances to the ready state after the I/O operation is completed or the resource
becomes available.

Suspend Ready State


 If a process with a higher priority needs to be executed while the main memory is full, the
process goes from the ready state to the suspend ready state. Moving a lower-priority process
from the ready state to the suspend ready state frees up space in the ready state for a
higher-priority process.
 The process stays in the suspend ready state until the main memory becomes available; it is
then brought back to the ready state.

Suspend Wait State


 If a process with a higher priority needs to be executed while the main memory is full, the
process goes from the wait state to the suspend wait state. Moving a lower-priority process
from the wait state to the suspend wait state frees up space in the ready state for a
higher-priority process.
 The process gets moved to the suspend ready state once the resource becomes accessible.
The process is shifted to the ready state once the main memory is available.

PROCESS CONTROL BLOCK

Process Control Block is a data structure that contains information related to a process. The
process control block is also known as a task control block, an entry of the process table,
etc.
It is very important for process management, as the data structuring for processes is done in
terms of the PCB. It also defines the current state of the operating system.
Structure of the Process Control Block
The process control block stores many data items that are needed for efficient process
management. Some of these data items are explained below −

The following are the data items −


 Process State − This specifies the process state, i.e. new, ready, running, waiting or
terminated.
 Process Number − This shows the number (PID) of the particular process.
 Program Counter − This contains the address of the next instruction that needs to be
executed in the process.
 Registers − This specifies the registers that are used by the process. They may
include accumulators, index registers, stack pointers, general-purpose registers, etc.
 List of Open Files − These are the different files that are associated with the process.
 CPU Scheduling Information − The process priority, pointers to scheduling queues, etc.
is the CPU scheduling information that is contained in the PCB. This may also include
any other scheduling parameters.
 Memory Management Information − The memory management information includes
the page tables or the segment tables, depending on the memory system used. It
also contains the values of the base registers, limit registers, etc.
 I/O Status Information − This information includes the list of I/O devices used by the
process, the list of files, etc.
 Accounting Information − The time limits, account numbers, amount of CPU used,
process numbers, etc. are all part of the PCB accounting information.
 Location of the Process Control Block − The process control block is kept in a memory
area that is protected from normal user access, because it contains important
process information. Some operating systems place the PCB at the beginning of the
kernel stack of the process, as it is a safe location.
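
To make the PCB layout concrete, here is a minimal sketch of it as a C struct. The field names
and sizes are illustrative assumptions, not the layout of any particular kernel:

struct pcb {
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;  /* process state */
    int pid;                         /* process number */
    unsigned long program_counter;   /* address of the next instruction */
    unsigned long registers[16];     /* saved CPU registers */
    int priority;                    /* CPU scheduling information */
    struct pcb *next;                /* pointer into a scheduling queue */
    unsigned long page_table_base;   /* memory management information */
    unsigned long base, limit;       /* base and limit register values */
    int open_files[16];              /* I/O status: open file descriptors */
    unsigned long cpu_time_used;     /* accounting information */
};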

PROCESS SCHEDULING
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy. Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −

 Long-Term Scheduler

 Short-Term Scheduler

 Medium-Term Scheduler
Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are admitted
to the system for processing. It selects processes from the job queue and loads them into
memory for execution, making them available for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler comes into play when a
process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It handles the change of a process from the ready
state to the running state: the CPU scheduler selects one process from among the processes that
are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than that of the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and its execution can be continued.

What is CPU Scheduling Algorithm?

Now that we have covered most of the basics, let’s discuss what a CPU scheduling algorithm is. A
scheduling algorithm in OS is the algorithm that defines how much CPU time must be allotted to
which process and when. There are two types of scheduling algorithms:

 Non-preemptive scheduling algorithms: Under these algorithms, once a process starts running,
it is not stopped until completion. That is, under non-preemptive algorithms, a process
cannot be preempted in favor of another, higher-priority process before its runtime is over.

 Preemptive scheduling algorithms: These algorithms allow a low-priority process to be
preempted in favor of a high-priority process, even if it hasn’t run to completion.

The objectives of these process scheduling algorithms are:

 Maximize CPU utilization


 Maximize throughput (i.e. number of processes to complete execution per unit time)

 Minimize wait, turnaround, and response time

Different Types of CPU Scheduling Algorithms


1. First Come First Serve (FCFS) Scheduling Algorithm

The FCFS algorithm is the simplest of all CPU scheduling algorithms in OS. This is because the
deciding principle behind it is just as its name suggests: first come, first served. The job that
requests execution first gets the CPU allocated to it, then the second, and so on.

Characteristics of FCFS scheduling algorithm

 The algorithm is easy to understand and implement.

 Programs are executed on a first-come, first-served basis.

 It is a non-preemptive scheduling algorithm.

 In this case, the ready queue acts as the First-in-First-out (FIFO) queue, where the job that
gets ready for execution first also gets out first.

 This is used in most batch systems.

Advantages of FCFS scheduling algorithm

 The fact that it is simple to implement means it can easily be integrated into a pre-existing
system.

 It is especially useful when the processes have a large burst time since there is no need for
context switching.

 The absence of low or high-priority preferences makes it fairer.

 Every process gets its chance to execute.

Disadvantages of the FCFS scheduling algorithm

 Since it works on a first-come basis, small processes with very short execution times still have
to wait their turn.

 There is a high wait and turnaround time for this scheduling algorithm in OS.

 All in all, it leads to inefficient utilization of the CPU.


Example of FCFS scheduling algorithm in OS:

Consider 5 processes that arrive at the CPU at different times. The process with the
earliest arrival time goes first.

 Since the first process has a burst time of 3, the CPU will remain busy for 3 units of time,
which indicates that the second process will have to wait for 1 unit of time since it arrives at
T=2.

 In this way, the waiting and turnaround times for all processes can be calculated. This also
gives the average waiting time and the average turnaround time. We can contrast this with
other algorithms for the same set of processes.

Using a queue for the execution of processes is helpful in keeping track of which process comes
at what stage. Although this is one of the simplest CPU scheduling algorithms, it suffers from the
convoy effect. This occurs when multiple smaller processes get stuck behind a large process,
which leads to an extremely high average wait time. This is similar to multiple cars stuck behind
a slow-moving truck on a single-lane road.
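
To make the arithmetic concrete, here is a minimal C sketch of the FCFS waiting-time and
turnaround-time calculation. The arrival and burst values are assumed for the demo (they are
not the figures from the original table):

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 2, 3, 5, 6};   /* assumed arrival times, already sorted */
    int burst[]   = {3, 4, 2, 1, 3};   /* assumed CPU burst times */
    int n = 5, time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];  /* CPU idles until arrival */
        int wait = time - arrival[i];              /* waiting = start - arrival */
        time += burst[i];                          /* run to completion */
        int tat = time - arrival[i];               /* turnaround = finish - arrival */
        printf("P%d: wait=%d turnaround=%d\n", i + 1, wait, tat);
        total_wait += wait; total_tat += tat;
    }
    printf("avg wait=%.2f avg turnaround=%.2f\n", total_wait / n, total_tat / n);
    return 0;
}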

2. Shortest Job First (SJF) Scheduling Algorithm

The Shortest Job First (SJF) is a CPU scheduling algorithm that selects the shortest jobs on
priority and executes them. The idea is to quickly get done with jobs that have the shortest/lowest
CPU burst time, making the CPU available for other, longer jobs/processes. In other words, this is a
priority scheduling algorithm where priority is based on the shortest burst time.

Characteristics of SJF scheduling algorithm
 This CPU scheduling algorithm has a minimum average wait time since it prioritizes jobs with
the shortest burst time.

 If short jobs keep arriving, longer jobs may face starvation.

 This is a non-preemptive scheduling algorithm.

 It is easier to implement the SJF algorithm in a batch OS.

Advantages of SJF scheduling algorithm

 It minimizes the average waiting time and turnaround time.

 Beneficial in long-term scheduling.

 It is better than the FCFS scheduling algorithm.

 Useful for batch processes.

Disadvantages of the SJF scheduling algorithm

 As mentioned, if short time jobs keep on coming, it may lead to starvation for longer jobs.

 It is dependent upon burst time, but it is not always possible to know the burst time
beforehand.

 Does not work for interactive systems.

Example of SJF scheduling algorithm in OS

Here, the first 2 processes are executed as they come, but when the 5th process comes in, it
instantly jumps to the front of the queue since it has the shortest burst time. The turnaround
time and waiting time are calculated accordingly. It is visible that this is an improvement over FCFS,
as it has a smaller average waiting time as well as a smaller average turnaround time. This
algorithm is especially useful in cases where there are multiple incoming processes and their
burst times are known in advance. The average waiting time obtained is lower as compared to the
first-come-first-served scheduling algorithm.
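
As a sketch of the same calculation for non-preemptive SJF, the loop below repeatedly picks the
arrived, unfinished process with the smallest burst time and runs it to completion; the process
data is assumed:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3, 4};   /* assumed arrival times */
    int burst[]   = {7, 5, 3, 4, 1};   /* assumed burst times */
    int n = 5, done = 0, time = 0, finished[5] = {0};
    double total_wait = 0, total_tat = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)    /* shortest arrived, unfinished job */
            if (!finished[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }  /* CPU idle until next arrival */
        time += burst[pick];                   /* non-preemptive: run to completion */
        int tat = time - arrival[pick], wait = tat - burst[pick];
        printf("P%d: wait=%d turnaround=%d\n", pick + 1, wait, tat);
        total_wait += wait; total_tat += tat;
        finished[pick] = 1; done++;
    }
    printf("avg wait=%.2f avg turnaround=%.2f\n", total_wait / n, total_tat / n);
    return 0;
}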

3. Shortest Remaining Time First (SRTF) Scheduling Algorithm

The SRTF scheduling algorithm is the preemptive version of the SJF scheduling algorithm in OS.
This calls for the job with the shortest burst time remaining to be executed first, and it keeps
preempting jobs on the basis of burst time remaining in ascending order.

Characteristics of the SRTF Scheduling Algorithm

 The incoming processes are sorted on the basis of their CPU burst time.

 The process with the least remaining burst time is executed first, but if another process arrives
with an even smaller burst time, the running process gets preempted in favor of the new one.

 The flow of execution is: a process is executed for some specific unit of time, and then the
scheduler checks if any new processes with even shorter burst times have arrived.

Advantages of SRTF Scheduling Algorithm

 More efficient than SJF since it's the preemptive version of SJF.

 Efficient scheduling for batch processes.

 The average waiting time is lower in comparison to many other scheduling algorithms in OS.

Disadvantages of SRTF Scheduling Algorithm

 Longer processes may starve if short jobs keep getting the first shot.

 Can’t be implemented in interactive systems.

 The context switch happens too many times, leading to a rise in the overall completion time.

 The remaining burst time might not always be apparent before the execution.

Example of SRTF scheduling algorithm in OS


Here, the first process starts first and then the second process executes for 1 unit of time. It is
then preempted by the arrival of the third process which has a lower service time. This goes on
until the ready queue is empty and all processes are done executing.
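
A minimal sketch of SRTF simulates one time unit at a time and always runs the arrived process
with the least remaining burst time; the process data is assumed:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};      /* assumed arrival times */
    int burst[]   = {8, 4, 2, 1};      /* assumed burst times */
    int n = 4, remaining[4], done = 0, time = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)    /* least remaining time among arrived */
            if (remaining[i] > 0 && arrival[i] <= time &&
                (pick == -1 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }
        remaining[pick]--; time++;     /* run the chosen process for 1 unit */
        if (remaining[pick] == 0) {    /* process just completed */
            int tat = time - arrival[pick];
            printf("P%d: wait=%d turnaround=%d\n",
                   pick + 1, tat - burst[pick], tat);
            done++;
        }
    }
    return 0;
}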

4. Priority Scheduling Algorithm in OS

This CPU scheduling algorithm in OS first executes the jobs with higher priority. That is, the job
with the highest priority gets executed first, followed by the job with the second-highest priority,
and so on.

Characteristics of Priority Scheduling Algorithm

Jobs are scheduled on the basis of their priority level, in descending order.

 If a job with a higher priority than the currently running one arrives, the CPU preempts the
current job in favor of the one with higher priority.

 But for other purposes, it follows a non-preemptive scheduling approach.

 Between two jobs with the same priority, the FCFS principle decides which job gets
executed first.

 The priority of a process can be set depending on multiple factors like memory
requirements, required CPU time, etc.

Advantages of Priority Scheduling Algorithm

 This process is simpler than most other scheduling algorithms in OS.

 Priorities help in sorting the incoming processes.

 Works well for static and dynamic environments.


Disadvantages of Priority Scheduling Algorithm

 It may lead to the starvation problem in jobs with low priority.

 The average turnaround and waiting time might be higher in comparison to other CPU
scheduling algorithms.

Example of Priority Scheduling Algorithm in OS

Here, different priorities are assigned to the incoming processes. The lower the number, the
higher the priority. The 1st process to be executed is the second one, since it has a higher priority
than the first process. Then the fourth process gets its turn. This is known as priority scheduling.
The calculated times may not be the lowest, but it helps to prioritize important processes over
others.
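
A sketch of the preemptive variant in C follows; a lower priority number means higher priority,
matching the example above, and the process data is assumed:

#include <stdio.h>

int main(void) {
    int arrival[]  = {0, 0, 1, 2};     /* assumed arrival times */
    int burst[]    = {5, 3, 4, 2};     /* assumed burst times */
    int priority[] = {2, 1, 3, 1};     /* lower value = higher priority */
    int n = 4, remaining[4], done = 0, time = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)    /* highest-priority arrived process */
            if (remaining[i] > 0 && arrival[i] <= time &&
                (pick == -1 || priority[i] < priority[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }
        remaining[pick]--; time++;     /* run for one unit, then re-evaluate */
        if (remaining[pick] == 0) {
            printf("P%d finished at t=%d (turnaround=%d)\n",
                   pick + 1, time, time - arrival[pick]);
            done++;
        }
    }
    return 0;
}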

5. Round Robin Scheduling Algorithm in OS

In this scheduling algorithm, the OS defines a quantum time or a fixed time period. And every
job is run cyclically for this predefined period of time, before being preempted for the next job in
the ready queue. The jobs that are preempted before completion go back to the ready queue to
wait their turn. It is also referred to as the preemptive version of the FCFS scheduling algorithm
in OS.

Characteristics of RR Scheduling Algorithm

 Once a job begins running, it is executed for a predetermined time and gets preempted after
the time quantum is over.

 It is easy and simple to use or implement.

 The RR scheduling algorithm is one of the most commonly used CPU scheduling algorithms
in OS.

 It is a preemptive algorithm.

Advantages of RR Scheduling Algorithm

 This seems like a fair algorithm since all jobs get equal CPU time.

 Does not lead to any starvation problems.

 New jobs are added at the end of the ready queue and do not interrupt the ongoing process.

 Leads to efficient utilization of the CPU.

Disadvantages of RR Scheduling Algorithm

 Every time a job runs for the course of the quantum time, a context switch happens. This adds
to the overhead time, and ultimately the overall execution time.

 A very small time slice may lead to low CPU output because of frequent context switches.

 Important tasks aren’t given priority.

 Choosing the correct time quantum is a difficult job.

Example of RR Scheduling Algorithm in OS

Let's take a quantum time of 4 units. The first process will execute and get completed. After a
gap of 1 unit, the second process executes for 4 units. Then the third one executes, since it has
also arrived in the ready queue. After 4 units, the fourth process executes. This keeps
going until all processes are done. It is worth noting that the minimum average waiting time is
higher than in some of the other algorithms. While this approach does result in a higher
turnaround time, it is much more efficient in multitasking operating environments in comparison
to most other scheduling algorithms in OS.
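
A minimal round-robin sketch in C follows; for simplicity it assumes all processes arrive at t=0
and uses a circular scan in place of the ready queue:

#include <stdio.h>

int main(void) {
    int burst[] = {5, 4, 3, 6};        /* assumed bursts, all arriving at t=0 */
    int n = 4, quantum = 4, remaining[4], done = 0, time = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {  /* cycle through the ready processes */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;             /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {   /* turnaround = completion time here */
                printf("P%d finished at t=%d (turnaround=%d)\n",
                       i + 1, time, time);
                done++;
            }
        }
    }
    return 0;
}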

What is Process Synchronization in OS?


An operating system is software that manages all applications on a device and basically helps in
the smooth functioning of our computer. Because of this reason, the operating system has to
perform many tasks, sometimes simultaneously. This isn't usually a problem unless these
simultaneously occurring processes use a common resource.

For example, consider a bank that stores the account balance of each customer in the same
database. Suppose you initially have x rupees in your account. Now, you take out some
amount of money from your bank account, and at the same time, someone tries to look at the
amount of money stored in your account. As you are taking out some money from your account,
after the transaction the total balance left will be lower than x. But the transaction takes time,
and hence the other person reads x as your account balance, which leads to inconsistent data. If, in
some way, we could make sure that only one process occurs at a time, we could ensure
consistent data.

If the withdrawal (Process1) and the balance inquiry (Process2) happen at the same time, the
second user will get the wrong account balance, because Process1 is still being transacted while
the balance reads X.

Inconsistency of data can occur when various processes share a common resource in a system,
which is why there is a need for process synchronization in the operating system.

How Process Synchronization in OS Works?

Let us take a look at why exactly we need process synchronization. For example, if process1 is
trying to read the data present in a memory location while process2 is trying to change
the data present at the same location, there is a high chance that the data read by
process1 will be incorrect.

Let us look at different elements/sections of a program:

 Entry Section: The entry section decides the entry of a process.

 Critical Section: The critical section allows and makes sure that only one process is
modifying the shared data.

 Exit Section: The exit section handles the entry of other processes to the shared data after
the execution of one process.

 Remainder Section: The remaining part of the code, which is not categorized as above, is
contained in the remainder section.

Race Condition

When more than one process is either running the same code or modifying the same memory or
any shared data, there is a risk that the result or value of the shared data may be incorrect
because all processes try to access and modify this shared resource. Thus, all the processes race
to say that their result is correct. This condition is called the race condition. Since many processes
use the same data, the results of the processes may depend on the order of their execution.

This is mostly a situation that can arise within the critical section. In the critical section, a race
condition occurs when the end result of multiple thread executions varies depending on the
sequence in which the threads execute.

But how to avoid this race condition? There is a simple solution:

 by treating the critical section as a section that can be accessed by only a single process at a
time. This kind of section is called an atomic section.
What is the Critical Section Problem?

Why do we need to have a critical section? What problems occur if we remove it?
A part of code that can only be accessed by a single process at any moment is known as a critical
section. This means that when a lot of programs want to access and change a single shared data,
only one process will be allowed to change at any given moment. The other processes have to
wait until the data is free to be used.

The wait() function mainly handles the entry to the critical section, while the signal() function
handles the exit from the critical section. If we remove the critical section, we cannot
guarantee the consistency of the end outcome after all the processes finish executing
simultaneously.

We'll look at some solutions to the Critical Section Problem but before we move on to that, let
us take a look at what conditions are necessary for a solution to Critical Section Problem.

Requirements of Synchronization

The following three requirements must be met by a solution to the critical section problem:

 Mutual exclusion: If a process is running in the critical section, no other process should be
allowed to run in that section at that time.

 Progress: If no process is still in the critical section and other processes are waiting outside
the critical section to execute, then any one of the threads must be permitted to enter the
critical section. The decision of which process will enter the critical section will be taken by
only those processes that are not executing in the remaining section.

 No starvation: Starvation means a process that keeps waiting forever to access the critical
section but never gets a chance. No starvation is also known as Bounded Waiting.

o A process should not wait forever to enter inside the critical section.

o When a process submits a request to access its critical section, there should be a
limit or bound, which is the number of other processes that are allowed to access
the critical section before it.

o After this bound is reached, the process should be allowed to access the critical
section.

SOLUTIONS FOR CRITICAL SECTION PROBLEM


Lock Variable
This is the simplest synchronization mechanism. This is a Software Mechanism implemented in
User mode. This is a busy waiting solution which can be used for more than two processes.

In this mechanism, a lock variable, lock, is used. Two values of lock are possible, either 0 or 1.
Lock value 0 means that the critical section is vacant, while lock value 1 means that it is
occupied.

A process which wants to get into the critical section first checks the value of the lock variable. If
it is 0 then it sets the value of lock as 1 and enters into the critical section, otherwise it waits.

The pseudo code of the mechanism looks like the following:

Entry Section:
    while (lock != 0);   // busy wait while the section is occupied
    lock = 1;            // mark the section as occupied

// Critical Section

Exit Section:
    lock = 0;            // mark the section as vacant

If we look at the Pseudo Code, we find that there are three sections in the code. Entry Section,
Critical Section and the exit section.

Initially the value of the lock variable is 0. A process that needs to get into the critical section
enters the entry section and checks the condition provided in the while loop.

The process will keep waiting as long as the value of lock is 1 (that is what the while loop
implies). Since the critical section is vacant the very first time, the first process will enter the
critical section, setting the lock variable to 1.

When the process exits from the critical section, then in the exit section it reassigns the value
of lock as 0.

Test and Set Mechanism

In the lock variable mechanism, a process sometimes reads the old value of the lock variable and
enters the critical section. Due to this, more than one process might get into the critical section.
However, the code shown above can be replaced with an updated version. This doesn't change the
overall algorithm but, by doing this, we can manage to provide mutual exclusion to some extent,
though not completely.

In the updated version of the code, the value of lock is loaded into a local register R0 and then the
value of lock is set to 1.

Then the previous value of lock (now stored in R0) is compared with 0. If it is 0, the process
simply enters the critical section; otherwise it waits by executing the loop continuously.

The benefit of setting the lock immediately to 1 is that the process which enters the critical
section carries the updated value of the lock variable, which is 1.

The test-and-set algorithm uses a boolean variable 'lock', initialized to false. This lock
variable determines the entry of a process into the critical section of the code. Let's first see
the algorithm and then try to understand what it is doing.

boolean lock = false;

boolean TestAndSet(boolean &target) {
    boolean returnValue = target;   // remember the old value
    target = true;                  // set the lock
    return returnValue;             // report what it was before
}

while (1) {
    while (TestAndSet(lock));       // spin until the old value was false

    // CRITICAL SECTION CODE

    lock = false;                   // release the lock

    // REMAINDER SECTION CODE
}

In the above algorithm, the TestAndSet() function takes a boolean variable, returns its old value,
and sets it to true.
When the lock variable is initially false, the condition TestAndSet(lock) evaluates TestAndSet(false).
As the TestAndSet function returns the old value of its argument, TestAndSet(false) returns false.
Now the loop while(TestAndSet(lock)) breaks and the process enters the critical section.

As one process is inside the critical section and lock value is now 'true', if any other process tries
to enter the critical section then the new process checks for while(TestAndSet(true)) which will
return true inside while loop and as a result the other process keeps executing the while loop.

while(true); // this keeps executing until lock becomes false.

As no queue is maintained for the processes stuck in the while loop, bounded waiting is not
ensured. Bounded waiting means there is a bound on the number of times other processes can
enter the critical section ahead of a waiting process.

In test and set algorithm the incoming process trying to enter the critical section does not wait in
a queue so any process may get the chance to enter the critical section as soon as the process
finds the lock variable to be false. It may be possible that a particular process never gets the
chance to enter the critical section and that process waits indefinitely.
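
On real hardware, the read and the set must happen as one indivisible instruction. As a sketch of
what that looks like in practice, the spinlock below uses GCC/Clang's __atomic_test_and_set
builtin (an assumption about the toolchain, not part of the original algorithm):

#include <stdbool.h>

static volatile bool lock = false;

void acquire(void) {
    /* Atomically set the flag and return its previous value;
       spin while it was already true (someone holds the lock). */
    while (__atomic_test_and_set((void *)&lock, __ATOMIC_ACQUIRE))
        ;  /* busy wait */
}

void release(void) {
    __atomic_clear((void *)&lock, __ATOMIC_RELEASE);  /* lock = false, atomically */
}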

Turn Variable or Strict Alternation Approach

Turn Variable or Strict Alternation is a software mechanism implemented at user
mode. It is a busy-waiting solution which can be implemented only for two processes. In this
approach, a turn variable is used, which is actually a lock.

In general, let the two processes be Pi and Pj. They share a variable called turn. The pseudo code
of the program can be given as following.

For Process Pi

    Non-CS
    while (turn != i);    // wait until it is Pi's turn
    Critical Section
    turn = j;             // hand the turn over to Pj
    Non-CS

For Process Pj

    Non-CS
    while (turn != j);    // wait until it is Pj's turn
    Critical Section
    turn = i;             // hand the turn over to Pi
    Non-CS

The actual problem of the lock variable approach was that a process entered the critical section
when the lock variable was 0, but more than one process could see the lock variable as 0 at the
same time, hence mutual exclusion was not guaranteed there.

This problem is addressed in the turn variable approach. Now, a process can enter the critical
section only when the value of the turn variable is equal to the PID of the process.

There are only two values possible for the turn variable, i or j. If its value is not i then it will
definitely be j, and vice versa.

In the entry section, the process Pi will not enter the critical section while the value of turn is j,
and the process Pj will not enter while the value is i.

Initially, the two processes Pi and Pj are available and want to execute in the critical section.

The turn variable is equal to i, hence Pi will get the chance to enter the critical section first. The
value of turn remains i until Pi finishes its critical section.
Pi finishes its critical section and assigns j to the turn variable. Pj will then get the chance to enter
the critical section. The value of turn remains j until Pj finishes its critical section.

Petersons Algorithm

 Set turn to either 0 or 1, indicating which process can enter its critical section first.

 Repeat indefinitely−

 Set flag[i] to true, indicating that process i wants to enter its critical section.

 Set turn to j, the other process index.

 While flag[j] is true and turn equals j, wait.

 Enter the critical section.

 Set flag[i] to false, indicating that process i is done with its critical section.

 Remainder section.
Description of the Algorithm

The mutual exclusion issue has a software-based solution known as Peterson's Algorithm, which
seeks to guarantee that only one process is ever present in its critical section. Two shared
variables, a flag array and a turn variable, constitute the foundation of the algorithm. One flag is
assigned to each process in the flag array, which contains Boolean values representing whether
or not a process is interested in entering its critical section. The turn variable is a number that
indicates which process, in the event of a dispute, should go first.

The lock() step indicates that the calling process is interested in entering its critical section
by first setting the flag of the calling process to true. It then sets the turn variable to the index
of the other process (j), signaling that the other process should proceed first if both processes
want to enter their critical sections simultaneously. After that, the process enters a busy-waiting
loop where it repeatedly checks whether the other process's flag is true and whether it is the
other process's turn. The loop continues as long as both of these conditions hold. Once either
condition fails, the loop ends and the calling process moves on to its critical section.

The calling process leaves its critical section, and signals that it is no longer interested in entering
it, by the unlock() step, which simply changes its flag to false.

int turn = 0;                    // shared variable
bool flag[2] = {false, false};   // shared variable

Process 0:

while (true) {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1) {}  // busy wait
    // critical section
    flag[0] = false;
    // remainder section
}

Process 1:

while (true) {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0) {}  // busy wait
    // critical section
    flag[1] = false;
    // remainder section
}
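
As a runnable sketch, the same algorithm can be written with C11 atomics and POSIX threads;
sequentially consistent atomics stand in for the plain shared variables, since modern compilers
and CPUs may otherwise reorder the flag and turn accesses (compile with: cc peterson.c -lpthread):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int turn;
long counter = 0;                    /* shared data protected by the lock */

void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);    /* I want to enter */
        atomic_store(&turn, j);          /* the other process goes first if contended */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                            /* busy wait */
        counter++;                       /* critical section */
        atomic_store(&flag[i], false);   /* done */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0);
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}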

Semaphores

Semaphores refer to the integer variables that are primarily used to solve the critical section
problem via combining two of the atomic procedures, wait and signal, for the process
synchronization.

The definitions of signal and wait are given below:

Wait

It waits as long as the value of its argument A is zero or negative, and then decrements it.

wait(A) {
    while (A <= 0);   // busy wait until A becomes positive
    A--;
}

Signal

This operation increments the value of its argument A.

signal(A) {
    A++;
}

Types of Semaphores

Semaphores are of the following types:

Binary Semaphore

The value of a semaphore variable in binary semaphores is either 0 or 1. The value of the semaphore
variable is initially set to 1, but if a process requests a resource, the wait() method is invoked, and
the value of this semaphore is changed from 1 to 0. When the process has finished using the
resource, the signal() method is invoked, and the value of this semaphore variable is raised to 1. If
the value of this semaphore variable is 0 at a given point in time, and another process wants to
access the same resource, it must wait for the prior process to release the resource.

Counting Semaphore

The semaphore variable is first initialized with the total number of resources available in counting
semaphores. The wait() method is then executed anytime a process requires a resource, and the
value of a semaphore variable gets decreased by one. The process then uses the resource, after
which it calls the signal() function, which increases the value of a semaphore variable by one. When
the value of this semaphore variable reaches 0, that is, when the process has utilised all of the
resources and there are none left to be used, any other process that wishes to consume the
resources must wait for its own turn.
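
As a concrete sketch, POSIX semaphores map directly onto these ideas: sem_init with an initial
value of 1 gives a binary semaphore, sem_wait plays the role of wait(), and sem_post plays the
role of signal() (compile with: cc sem.c -lpthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;
int shared = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);   /* wait(): decrement, or block while the value is 0 */
        shared++;       /* critical section */
        sem_post(&s);   /* signal(): increment and wake a waiter */
    }
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);  /* initial value 1 => binary semaphore */
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&s);
    return 0;
}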

CLASSICAL PROBLEMS OF SYNCHRONISATION

The Bounded-Buffer (Producer-Consumer or Vendor-Customer) Problem

The bounded-buffer (also called producer-consumer or vendor-customer) problem describes two
processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue.
The producer's job is to generate data, put it into the buffer, and start again. At the same time, the
consumer is consuming the data (i.e., removing data from the buffer), one piece at a time. The
challenge is to make sure the producer does not try to add data to the buffer if it is full, and the
consumer does not try to remove data from an empty buffer.

The solution to this problem is to create two counting semaphores (full and empty) to keep track of
the current number of full and empty buffer slots, respectively. Producers produce items while
consumers consume them, and each operation uses one slot of the buffer at a time.
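
A sketch of this solution with POSIX semaphores follows; the buffer size and item counts are
assumptions for the demo, and a mutex protects the buffer indices themselves:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8                            /* assumed buffer size */
int buffer[N], in = 0, out = 0;
sem_t empty, full;                     /* counts of empty and full slots */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&empty);              /* block if there is no empty slot */
        pthread_mutex_lock(&m);
        buffer[in] = i; in = (in + 1) % N;
        pthread_mutex_unlock(&m);
        sem_post(&full);               /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);               /* block if there is nothing to consume */
        pthread_mutex_lock(&m);
        int item = buffer[out]; out = (out + 1) % N;
        pthread_mutex_unlock(&m);
        sem_post(&empty);              /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);            /* all slots start empty */
    sem_init(&full, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}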

The Dining Philosophers Problem

In this computer systems analogy, the dining philosophers problem describes a situation where a
certain number of diners (philosophers) are seated around a circular table with one chopstick
between each pair of philosophers. A diner may eat if they can pick up the two chopsticks adjacent
to them. One chopstick may be picked up by either of its adjacent philosophers, but not both.

In computing, this challenge pertains to the allocation of limited resources to a group of processes in
a deadlock-free and starvation-free manner.
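
One classic deadlock-avoidance strategy is resource ordering: every philosopher picks up the
lower-numbered chopstick first, so a circular wait cannot form. A sketch with POSIX semaphores
(one per chopstick):

#include <semaphore.h>
#include <pthread.h>

#define P 5
sem_t chopstick[P];                    /* one binary semaphore per chopstick */

void *philosopher(void *arg) {
    int i = (int)(long)arg;
    int left = i, right = (i + 1) % P;
    int first = left < right ? left : right;    /* lower-numbered chopstick first */
    int second = left < right ? right : left;
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[P];
    for (int i = 0; i < P; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < P; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < P; i++) pthread_join(t[i], NULL);
    return 0;
}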
The Readers and Writers Problem

Suppose a database needs to be shared among several concurrent processes. Some of these
processes may only want to read the database, whereas others may want to update (that is, to read
and write) the database. We distinguish between these two types of processes by referring to the
former as readers and to the latter as writers. In OS we call this situation the readers-writers
problem.

READERS

Here rc counts the active readers, mutex protects rc, and wrt gives writers exclusive access:

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);      // the first reader locks writers out
signal(mutex);

... READ THE OBJECT ...

wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);    // the last reader lets writers back in
signal(mutex);

WRITERS

wait(wrt);

... WRITE INTO THE OBJECT ...

signal(wrt);
