Process Synchronization Notes

The document discusses operating systems and the critical-section problem in multi-process systems where processes need exclusive access to shared resources. It describes how critical sections work, with entry/exit sections and a remainder section. It outlines the requirements for a solution: mutual exclusion, progress, and bounded waiting. Semaphores are then introduced as a synchronization method, where processes can wait() and signal() a semaphore. Binary semaphores can enforce mutual exclusion for critical sections. Counting semaphores can also solve other synchronization problems. The implementation is improved by allowing waiting processes to block instead of busy waiting.

CS4TH3 OPERATING SYSTEMS

BACKGROUND


THE CRITICAL-SECTION PROBLEM

Consider a system consisting of n processes {P0, P1,..., Pn-1}. Each process has a
segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system is
that, when one process is executing in its critical section, no other process is to be allowed
to execute in its critical section.


That is, no two processes are executing in their critical sections at the same time.
The critical-section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section. The critical section may be followed by an
exit section. The remaining code is the remainder section. The general structure of a
typical process Pi is shown in the following figure:
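In outline, that structure is (a sketch of the standard figure):

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (TRUE);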

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in the decision on which will enter its critical section
next, and this selection cannot be postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to
enter its critical section and before that request is granted.
We assume that each process is executing at a nonzero speed. However, we can make
no assumption concerning the relative speed of the n processes.
Two general approaches are used to handle critical sections in operating systems:
(1) preemptive kernels and (2) nonpreemptive kernels.
A preemptive kernel allows a process to be preempted while it is running in kernel
mode.
A nonpreemptive kernel does not allow a process running in kernel mode to be
preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily


yields control of the CPU. Obviously, a nonpreemptive kernel is essentially free from race
conditions on kernel data structures, as only one process is active in the kernel at a time.
We cannot say the same about preemptive kernels, so they must be carefully
designed to ensure that shared kernel data are free from race conditions. Preemptive kernels
are especially difficult to design for SMP architectures, since in these environments it is
possible for two kernel-mode processes to run simultaneously on different processors.
A preemptive kernel is more suitable for real-time programming, as it will allow a
real-time process to preempt a process currently running in the kernel. Furthermore, a
preemptive kernel may be more responsive, since there is less risk that a kernel-mode
process will run for an arbitrarily long period before relinquishing the processor to waiting
processes.


To prove that this solution is correct, we need to show that:


1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.


SYNCHRONIZATION HARDWARE


SEMAPHORES

A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait () and signal (). The wait () operation was
originally termed P; signal() was originally called V. The definition of wait () is as follows:


wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}

The definition of signal () is as follows:

signal(S) {
    S++;
}

All the modifications to the integer value of the semaphore in the wait () and signal
() operations must be executed indivisibly. That is, when one process modifies the
semaphore value, no other process can simultaneously modify that same semaphore value.
In addition, in the case of wait(S), the testing of the integer value of S (S <= 0), as well as its
possible modification (S--), must also be executed without interruption.

Operating systems often distinguish between counting and binary semaphores. The
value of a counting semaphore can range over an unrestricted domain. The value of a binary
semaphore can range only between 0 and 1. On some systems, binary semaphores are
known as mutex locks, as they are locks that provide mutual exclusion.

We can use binary semaphores to deal with the critical-section problem for multiple
processes. The n processes share a semaphore, mutex, initialized to 1. Each process Pi is
organized as shown:
do {
    wait(mutex);
        // critical section
    signal(mutex);
        // remainder section
} while (TRUE);


We can also use semaphores to solve various synchronization problems. For example,
consider two concurrently running processes: P1 with a statement S1 and P2 with a
statement S2. Suppose we require that P2 be executed only after S1 has completed. We can
implement this scheme readily by letting P1 and P2 share a common semaphore synch,
initialized to 0, and by inserting the statements.

S1;
signal(synch);

in process P1, and the statements

wait(synch);
S2;

in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has
invoked signal (synch), which is after statement S1 has been executed.
Semaphore Implementation:

The main disadvantage of the mutual-exclusion solutions and of the semaphore definition given
here is that they all require busy waiting. While a process is in its critical section, any other process
that tries to enter its critical section must loop continuously in the entry code. This continual
looping is clearly a problem in a real multiprogramming system, where a single CPU is shared
among many processes. Busy waiting wastes CPU cycles that some other process might be able to
use productively. This type of semaphore is also called a spinlock (because the process "spins"
while waiting for the lock). Spinlocks are useful in multiprocessor systems. The advantage of a
spinlock is that no context switch is required when a process must wait on a lock, and a context
switch may take considerable time. Thus, when locks are expected to be held for short times,
spinlocks are useful.
To overcome the need for busy waiting, we can modify the definition of the wait and signal
semaphore operations. When a process executes the wait operation and finds that the semaphore
value is not positive, it must wait. However, rather than busy waiting, the process can block itself.
The block operation places a process into a waiting queue associated with the semaphore, and the
state of the process is switched to the waiting state. Then, control is transferred to the CPU
scheduler, which selects another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other
process executes a signal operation. The process is restarted by a wakeup operation, which changes
the process from the waiting state to the ready state. The process is then placed in the ready queue.
(The CPU may or may not be switched from the running process to the newly ready process,
depending on the CPU-scheduling algorithm.)
To implement semaphores under this definition, we define a semaphore as a C struct:

typedef struct {
    int value;
    struct process *L;
} semaphore;

Each semaphore has an integer value and a list of processes. When a process must wait on a
semaphore, it is added to the list of processes. A signal operation removes one process from the
list of waiting processes and awakens that process.
The wait semaphore operation can now be defined as

void wait(semaphore S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}
The signal semaphore operation can now be defined as

void signal(semaphore S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}

The block operation suspends the process that invokes it. The wakeup(P) operation resumes the
execution of a blocked process P. These two operations are provided by the operating system as
basic system calls.

Binary Semaphores
The semaphore construct described in the previous sections is commonly known as a counting
semaphore, since its integer value can range over an unrestricted domain. A binary semaphore is
a semaphore with an integer value that can range only between 0 and 1. A binary semaphore can
be simpler to implement than a counting semaphore, depending on the underlying hardware
architecture. We will now show how a counting semaphore can be implemented using binary
semaphores. Let S be a counting semaphore.
To implement it in terms of binary semaphores we need the following data structures:
binary-semaphore S1, S2;
int C;


Initially S1 = 1, S2 = 0, and the value of the integer C is set to the initial value of the counting
semaphore S. The wait operation on the counting semaphore S can be implemented as follows:

wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);
The signal operation on the counting semaphore S can be implemented as follows:

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);
Classic Problems of Synchronization
We present a number of different synchronization problems as examples for a large class of
concurrency-control problems. These problems are used for testing nearly every newly proposed
synchronization scheme. Semaphores are used for synchronization in our solutions.

i. THE BOUNDED-BUFFER PROBLEM

The bounded-buffer problem is commonly used to illustrate the power of synchronization
primitives.
We assume that the pool consists of n buffers, each capable of holding one item.
The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is
initialized to the value 1. The empty and full semaphores count the number of empty and
full buffers. The semaphore empty is initialized to the value n; the semaphore full is
initialized to the value 0.
The code for the producer process is shown:
do {
    // produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    // add nextp to buffer
    ...
    signal(mutex);
    signal(full);
} while (TRUE);

The code for the consumer process is shown:

do {
    wait(full);
    wait(mutex);
    ...
    // remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    // consume the item in nextc
    ...
} while (TRUE);

ii. THE READERS-WRITERS PROBLEM

A database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is,
to read and write) the database. We distinguish between these two types of processes by
referring to the former as readers and to the latter as writers. Obviously, if two readers
access the shared data simultaneously, no adverse effects will result. However, if a writer
and some other thread (either a reader or a writer) access the database simultaneously,
chaos may ensue.
To ensure that these difficulties do not arise, we require that the writers have
exclusive access to the shared database. This synchronization problem is referred to as the
readers-writers problem.
In the solution to the first readers-writers problem, the reader processes share the
following data structures:
semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0.


The semaphore wrt is common to both reader and writer processes. The mutex semaphore
is used to ensure mutual exclusion when the variable readcount is updated. The readcount

variable keeps track of how many processes are currently reading the object. The
semaphore wrt functions as a mutual-exclusion semaphore for the writers.
The code for a writer process is shown:
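A sketch of the standard structure, using the wrt semaphore defined above:

    do {
        wait(wrt);
            // writing is performed
        signal(wrt);
    } while (TRUE);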

The code for a reader process is shown:
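A sketch of the standard structure, using mutex, wrt, and readcount as defined above:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);          // first reader locks out writers
        signal(mutex);
            // reading is performed
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);        // last reader lets writers back in
        signal(mutex);
    } while (TRUE);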

iii. THE DINING-PHILOSOPHERS PROBLEM

Consider five philosophers who spend their lives thinking and eating. The
philosophers share a circular table surrounded by five chairs, each belonging to one
philosopher. In the center of the table is a bowl of rice, and the table is laid with five single
chopsticks.
When a philosopher thinks, she does not interact with her colleagues. From time to
time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to
her (the chopsticks that are between her and her left and right neighbors).


A philosopher may pick up only one chopstick at a time. Obviously, she cannot
pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher
has both her chopsticks at the same time, she eats without releasing her chopsticks. When
she is finished eating, she puts down both of her chopsticks and starts thinking again.

One simple solution is to represent each chopstick with a semaphore. A
philosopher tries to grab a chopstick by executing a wait () operation on that semaphore;
she releases her chopsticks by executing the signal () operation on the appropriate
semaphores. Thus, the shared data are
semaphore chopstick[5];

where all the elements of chopstick are initialized to 1. The structure of philosopher i is
shown
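In outline, the standard structure of philosopher i is:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
            // eat
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
            // think
    } while (TRUE);

This guarantees that no two neighbors eat simultaneously, but it can deadlock if all five philosophers pick up their left chopsticks at the same moment.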


MONITORS

Incorrect use of semaphore operations:


• Suppose that a process interchanges the order in which the wait() and signal() operations
on the semaphore mutex are executed, resulting in the following execution:
signal(mutex);
...
critical section
...
wait(mutex);
In this case, several processes may be executing in their critical sections simultaneously,
violating the mutual-exclusion requirement.
• Suppose that a process replaces signal(mutex) with wait(mutex). That is, it executes

wait(mutex);
...
critical section
...
wait(mutex);
In this case, a deadlock will occur.

• Suppose that a process omits the wait(mutex), or the signal(mutex), or both. In this case,
either mutual exclusion is violated or a deadlock will occur.
A type, or abstract data type, encapsulates private data with public methods to
operate on that data. A monitor type presents a set of programmer-defined operations that
are provided mutual exclusion within the monitor.
Monitor type also contains the declaration of variables whose values define the state
of an instance of that type, along with the bodies of procedures or functions that operate on
those variables. The syntax of a monitor is shown:
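In outline, the monitor syntax is (a sketch of the standard figure):

    monitor monitor_name
    {
        // shared variable declarations

        procedure P1 ( ... ) { ... }
        procedure P2 ( ... ) { ... }
        ...
        procedure Pn ( ... ) { ... }

        initialization code ( ... ) { ... }
    }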


i. DINING-PHILOSOPHERS SOLUTION USING MONITORS

This solution imposes the restriction that a philosopher may pick up her chopsticks
only if both of them are available. To code this solution, we need to distinguish among
three states in which we may find a philosopher. For this purpose, we introduce the
following data structure:

enum {thinking, hungry, eating} state[5];


Philosopher i can set the variable state[i] = eating only if her two neighbors are
not eating: (state[(i+4) % 5] != eating) and (state[(i+1) % 5] != eating). We also need
to declare

condition self [5];

where philosopher i can delay herself when she is hungry but is unable to obtain
the chopsticks she needs.
The distribution of the chopsticks is controlled by the monitor dp, whose definition
is shown below.
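A sketch of the standard dp monitor, matching the state and self declarations above:

    monitor dp
    {
        enum {thinking, hungry, eating} state[5];
        condition self[5];

        void pickup(int i) {
            state[i] = hungry;
            test(i);
            if (state[i] != eating)
                self[i].wait();
        }

        void putdown(int i) {
            state[i] = thinking;
            test((i + 4) % 5);   // test left and right neighbors
            test((i + 1) % 5);
        }

        void test(int i) {
            if ((state[(i + 4) % 5] != eating) &&
                (state[i] == hungry) &&
                (state[(i + 1) % 5] != eating)) {
                state[i] = eating;
                self[i].signal();
            }
        }

        initialization_code() {
            for (int i = 0; i < 5; i++)
                state[i] = thinking;
        }
    }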
Each philosopher, before starting to eat, must invoke the operation pickup (). This
may result in the suspension of the philosopher process. After the successful completion of
the operation, the philosopher may eat. Following this, the philosopher invokes the
putdown() operation. Thus, philosopher i must invoke the operations pickup() and
putdown() in the following sequence:
dp.pickup(i);
...
eat
...
dp.putdown(i);


ii. IMPLEMENTING A MONITOR USING SEMAPHORES

We now consider a possible implementation of the monitor mechanism using
semaphores. For each monitor, a semaphore mutex (initialized to 1) is provided. A process
must execute wait(mutex) before entering the monitor and must execute signal(mutex)
after leaving the monitor. Since a signaling process must wait until the resumed process
either leaves or waits, an additional semaphore, next, is introduced, initialized to 0, on
which the signaling processes may suspend themselves.
An integer variable next_count is also provided to count the number of processes
suspended on next. Thus, each external procedure F is replaced by


wait(mutex);
...
    body of F
...
if (next_count > 0)
    signal(next);
else
    signal(mutex);

Mutual exclusion within a monitor is ensured.


We can now describe how condition variables are implemented. For each condition
x, we introduce a semaphore x_sem and an integer variable x_count, both initialized to 0.
The operation x.wait() can now be implemented as

x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;

The operation x.signal() can be implemented as

if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
