Thread Synchronisation Problems

The document describes three classical synchronization problems: the bounded buffer problem, readers-writers problem, and priority inversion. It then focuses on explaining the bounded buffer problem in detail. It describes how a bounded buffer with capacity N can be implemented using an array and indexes to track the next write and read locations. Mutexes and semaphores are used to provide mutual exclusion on buffer access and synchronize producers and consumers to prevent overfilling or emptying the buffer.


Synchronisation problems
Module 4 self study material

★ Bounded buffer
★ Readers and writers
★ Priority inversion

Operating systems 2018 (1DT044 and 1DT096)
February 2018, karl.marklund@it.uu.se, Uppsala University


Classical problems of synchronization

★ The bounded buffer problem

★ The readers and writers problem

★ Priority inversion
Bounded buffer
A bounded buffer lets multiple producers and multiple
consumers share a single buffer. Producers write data to
the buffer and consumers read data from the buffer.
★ Producers must block if the buffer is full.
★ Consumers must block if the buffer is empty.

[Figure: a bounded buffer with capacity N, shared by multiple producers and multiple consumers]
Implementation
Use an array of size N to store the data items in the buffer.

★ Keep track of where to produce the next data item using the index next_in.
★ Keep track of from where to consume the next data item using the index next_out.

[Figure: an empty bounded buffer with capacity N (slots 0 to N-1), next_in = 0, next_out = 0]
The next_in index must be incremented after every write to the buffer.

  Write A:  [A][ ][ ][ ][ ]…[ ]   next_in = 1, next_out = 0
  Write B:  [A][B][ ][ ][ ]…[ ]   next_in = 2, next_out = 0
  Write C:  [A][B][C][ ][ ]…[ ]   next_in = 3, next_out = 0

The next_out index must be incremented after every read from the buffer.

  Read A:   [ ][B][C][ ][ ]…[ ]   next_in = 3, next_out = 1
  Read B:   [ ][ ][C][ ][ ]…[ ]   next_in = 3, next_out = 2

Let's make an additional write to the buffer.

  Write D:  [ ][ ][C][D][ ]…[ ]   next_in = 4, next_out = 2
The buffer wraps around in a circular manner.

Assume the buffer is in the following state; the next write will be to the last element of the array:

  [ ][ ][ ]…[X][Y][ ]   next_in = N-1, next_out = N-3

After writing Z, the next write will be to the first element of the array:

  [ ][ ][ ]…[X][Y][Z]   next_in = 0, next_out = N-3

After writing a, the next write will be to the second element of the array:

  [a][ ][ ]…[X][Y][Z]   next_in = 1, next_out = N-3
Wrap around
Use the modulo operator % to make the index next_in
wrap around after N writes and the index next_out wrap
around after N reads.

Producer
next_in = (next_in + 1) % N

Consumer
next_out = (next_out + 1) % N
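As a minimal sketch of the wrap-around rule (Python used for illustration; the capacity N = 4 is an arbitrary example), the modulo update makes the index cycle through 0 … N-1:

```python
N = 4          # example capacity; any N > 0 behaves the same way

next_in = 0
order = []
for _ in range(6):                   # six writes into a buffer of capacity 4
    order.append(next_in)
    next_in = (next_in + 1) % N      # wrap around after N writes

print(order)  # → [0, 1, 2, 3, 0, 1]
```

The same update applies symmetrically to next_out after each read.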
Mutual exclusion
All updates to the buffer state must be done in a critical section. More specifically, mutual exclusion must be enforced between the following critical sections:

★ A producer writing to a buffer slot and updating next_in.
★ A consumer reading from a buffer slot and updating next_out.

A binary semaphore or a mutex lock can be used to protect access to the critical sections.
Synchronisation
Producers must block if the buffer is full. Consumers must block if the
buffer is empty.

Use one semaphore named empty to count the empty slots in the buffer.

‣ Initialise this semaphore to N.


‣ A producer must wait on this semaphore before writing to the buffer.
‣ A consumer will signal this semaphore after reading from the buffer.

Use one semaphore named data to count the number of data items in the
buffer.
‣ Initialise this semaphore to 0.
‣ A consumer must wait on this semaphore before reading from the
buffer.
‣ A producer will signal this semaphore after writing to the buffer.
P producers and C consumers use a shared bounded buffer of size N. Producers write to the buffer at index next_in and consumers read at index next_out.

Shared resources:

★ Shared buffer with N slots, with indexes next_in and next_out (both initialised to 0). Updates of next_in and next_out must be mutually exclusive.
★ Semaphore mutex (initialised to 1): provides mutually exclusive updates of next_in and next_out.
★ Semaphore data (initialised to 0): atomically counts the number of data items in the buffer.
★ Semaphore empty (initialised to N): atomically counts the empty slots in the buffer.

[Figure: producer tasks 0 … P-1 and consumer tasks 0 … C-1 sharing the buffer and the three semaphores]
produce(buffer, *data) {
1   wait(empty)
2   wait(mutex)
3   buffer[next_in] = copy(data)
4   next_in = (next_in + 1) % N
5   signal(mutex)
6   signal(data)
}

1. Block if the buffer is full; otherwise atomically decrement the empty counter.
2. Enter the critical section, i.e., make sure no other producer or consumer updates the buffer at the same time.
3. Copy data to a slot in the buffer.
4. Update next_in.
5. Leave the critical section.
6. Atomically increment the data counter.
consume(buffer, *data) {
1   wait(data)
2   wait(mutex)
3   data = copy(buffer[next_out])
4   next_out = (next_out + 1) % N
5   signal(mutex)
6   signal(empty)
}

1. Block if the buffer is empty; otherwise atomically decrement the data counter.
2. Enter the critical section, i.e., make sure no other producer or consumer updates the buffer at the same time.
3. Copy data from a slot in the buffer.
4. Update next_out.
5. Leave the critical section.
6. Atomically increment the empty counter.
Producer and consumer side by side:

produce(buffer, *data) {            consume(buffer, *data) {
  wait(empty)                         wait(data)
  wait(mutex)                         wait(mutex)

  buffer[next_in] = copy(data)        data = copy(buffer[next_out])
  next_in = (next_in + 1) % N         next_out = (next_out + 1) % N

  signal(mutex)                       signal(mutex)
  signal(data)                        signal(empty)
}                                   }
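As a hedged sketch, the produce/consume pseudocode above can be realised with Python's threading primitives. The class and method names (BoundedBuffer, produce, consume) are mine, not from the slides:

```python
import threading

class BoundedBuffer:
    """Bounded buffer guarded by one mutex and two counting semaphores,
    following the produce/consume pseudocode above."""

    def __init__(self, capacity):
        self.buffer = [None] * capacity
        self.capacity = capacity
        self.next_in = 0
        self.next_out = 0
        self.mutex = threading.Semaphore(1)          # mutual exclusion on indexes
        self.data = threading.Semaphore(0)           # counts data items
        self.empty = threading.Semaphore(capacity)   # counts empty slots

    def produce(self, item):
        self.empty.acquire()               # block if the buffer is full
        with self.mutex:                   # critical section
            self.buffer[self.next_in] = item
            self.next_in = (self.next_in + 1) % self.capacity
        self.data.release()                # one more data item

    def consume(self):
        self.data.acquire()                # block if the buffer is empty
        with self.mutex:                   # critical section
            item = self.buffer[self.next_out]
            self.next_out = (self.next_out + 1) % self.capacity
        self.empty.release()               # one more empty slot
        return item

# One producer thread and one consumer thread sharing a buffer of size 3.
buf = BoundedBuffer(3)
results = []
producer = threading.Thread(target=lambda: [buf.produce(i) for i in range(10)])
consumer = threading.Thread(target=lambda: results.extend(buf.consume() for _ in range(10)))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # → [0, 1, 2, ..., 9]: with one producer and one consumer, FIFO order holds
```

Note that with several producers or consumers the items still transfer safely, but their global order depends on scheduling.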
A pipe is a bounded buffer

ls | grep .txt | wc
Concurrent writes to a pipe
Is a single write to a pipe atomic, i.e., is the whole amount written in a single
write operation not interleaved with data written by any other process?

POSIX.1-2001:
• Using write() to write less than PIPE_BUF bytes must be atomic: the output data is written to the pipe as a contiguous sequence.
• Writes of more than PIPE_BUF bytes may be nonatomic: the kernel may interleave the data, on arbitrary boundaries, with data written by other processes.

The value of PIPE_BUF is defined by each implementation, but the minimum is 512 bytes (see limits.h).

On Linux:
• PIPE_BUF = 4096.
• The value of PIPE_BUF is a consequence of other logic in the kernel; it is not a configuration parameter.
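On a POSIX system the limit can be inspected from Python. This is a small sketch; select.PIPE_BUF and os.fpathconf are standard-library calls, and the printed values are platform-dependent:

```python
import os
import select

# POSIX guarantees that writes of up to PIPE_BUF bytes to a pipe are atomic.
# select.PIPE_BUF is the compile-time value, at least 512 (4096 on Linux).
print(select.PIPE_BUF)

# The same limit can be queried for a concrete pipe at run time.
r, w = os.pipe()
print(os.fpathconf(r, "PC_PIPE_BUF"))
os.close(r)
os.close(w)
```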
Readers and writers

Readers-Writers Problem
A data set is shared among a number of concurrent processes. Readers only read the data set; they do not perform any updates. Writers can both read and write.

★ Only one single writer can access the shared data at the same time; any other writers or readers must be blocked.
★ Allow multiple readers to read at the same time; any writers must be blocked.

[Figure: R readers and W writers access the same shared data set. Left: one writer has exclusive access while all other writers and readers are blocked. Right: multiple readers read concurrently while all writers are blocked.]
Shared resources:

★ Shared data accessed by readers and writers.
★ Semaphore wrt (initialised to 1): mutual exclusion among writers.
★ Semaphore mutex (initialised to 1): mutually exclusive updates of readcount.
★ Integer readcount (initialised to 0): counts the number of active readers.

[Figure: reader tasks 0 … R-1 and writer tasks 0 … W-1 sharing the data set and the synchronisation variables]
write(buffer, *data) {
1   wait(wrt);
2   // Write shared data
3   signal(wrt);
}

1. Enter the critical section; block if another task is writing.
2. Inside the critical section, write to the shared data structure.
3. Leave the critical section.


read(buffer, *data) {
  wait(mutex);
  readcount++;
  if readcount == 1:
    wait(wrt);
  signal(mutex);

  // Read shared data

  wait(mutex);
  readcount--;
  if readcount == 0:
    signal(wrt);
  signal(mutex);
}

Entering: all readers need to mutually exclusively increment readcount when entering. The first reader also needs to block if a writer is active.

Leaving: all readers need to mutually exclusively decrement readcount when leaving. The last reader also needs to unblock any waiting writer.

Uses the semaphores mutex and wrt and the integer counter readcount.
Readers-Writers Problem (summary)
A data set is shared among a number of concurrent processes.

• Only one single writer can access the shared data at the same time; any other writers or readers must be blocked.
• Allow multiple readers to read at the same time; any writers must be blocked.

Semaphores mutex and wrt, both initialized to 1. Integer readcount initialized to 0.

write(buffer, *data) {
  wait(wrt);

  // Write shared data

  signal(wrt);
}

read(buffer, *data) {
  wait(mutex);
  readcount++;
  if readcount == 1:
    wait(wrt);
  signal(mutex);

  // Read shared data

  wait(mutex);
  readcount--;
  if readcount == 0:
    signal(wrt);
  signal(mutex);
}
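A sketch of the same protocol with Python's threading.Semaphore. The class name RWLock and the method names are mine; the try-acquire at the end only probes whether a writer would block, it is not part of the protocol:

```python
import threading

class RWLock:
    """First-readers-preference lock following the slide's pseudocode:
    wrt gives writers exclusive access, mutex protects readcount."""

    def __init__(self):
        self.wrt = threading.Semaphore(1)    # writer exclusion
        self.mutex = threading.Semaphore(1)  # protects readcount
        self.readcount = 0

    def start_read(self):
        with self.mutex:
            self.readcount += 1
            if self.readcount == 1:          # first reader locks out writers
                self.wrt.acquire()

    def end_read(self):
        with self.mutex:
            self.readcount -= 1
            if self.readcount == 0:          # last reader lets writers in
                self.wrt.release()

    def start_write(self):
        self.wrt.acquire()

    def end_write(self):
        self.wrt.release()

lock = RWLock()
lock.start_read()
lock.start_read()                            # two readers read concurrently
writer_blocked = not lock.wrt.acquire(blocking=False)
print(writer_blocked)                        # → True: a writer would block now
lock.end_read()
lock.end_read()
writer_allowed = lock.wrt.acquire(blocking=False)
print(writer_allowed)                        # → True: last reader released wrt
lock.wrt.release()
```

This variant lets readers starve writers: as long as readers keep arriving, readcount never drops to 0 and wrt is never released.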
Priority inversion
Scenario (1)
A high priority task H is blocked because a low priority task L holds a shared resource R (for example a binary semaphore) that H wants to acquire.

1) Consider two tasks H and L, of high and low priority respectively, either of which can acquire exclusive use of a shared resource R.
2) L acquires R.
3) If H attempts to acquire R after L has acquired it, then H becomes blocked until L relinquishes the resource.

[Figure: H (high priority, blocked) requests R while L (low priority, ready) holds R]

4) Sharing an exclusive-use resource (R in this case) in a well-designed system typically involves L relinquishing R promptly so that H (a higher priority task) does not stay blocked for excessive periods of time.

Source: https://en.wikipedia.org/wiki/Priority_inversion 2016-02-11
Scenario (2)
Let's introduce a third task M with medium priority, i.e., a priority between high priority task H and low priority task L.

Task M becomes ready (to run) during L's use of R.

1) M, being higher in priority than L, preempts L, causing L to not be able to relinquish R promptly.
2) H becomes ready to run.
3) H requests to acquire R.
4) H (the highest priority process) becomes blocked, since H cannot acquire R, which is held by L, but L is not running (preempted by M).

[Figure: L holds R; H (high priority, blocked) requests R; M (medium priority) is running; L (low priority) is ready]

Source: https://en.wikipedia.org/wiki/Priority_inversion 2016-02-11


Priority inversion
A higher priority task is effectively "preempted" by a lower priority one.

A medium priority task M preempts a low priority task L holding a shared resource R. A high priority task H is not able to run, although it has higher priority than M, and H and M do not compete for R.

Solution to the priority inversion problem?

[Figure: L holds R; H (high priority, blocked) requests R; M (medium priority) is running; L (low priority) is ready]

Source: https://en.wikipedia.org/wiki/Priority_inversion 2016-02-11


Priority inheritance protocol
When a task blocks one or more high-priority tasks, it ignores its original priority assignment and executes its critical section at an elevated priority level. After executing its critical section and releasing its locks, the process returns to its original priority level.

★ Suppose H is blocked by L for some shared resource R.
★ The priority inheritance protocol requires that L executes its critical section at H's (high) priority.
★ As a result, M will be unable to preempt L and M will be blocked.
★ That is, the higher-priority job M must wait for the critical section of the lower priority job L to be executed, because L has inherited H's priority.
★ When L exits its critical section, it regains its original (low) priority and awakens H (which was blocked by L).
★ H, having high priority, preempts L and runs to completion. This enables M and L to resume in succession and run to completion.

Source: https://en.wikipedia.org/wiki/Priority_inheritance 2016-02-11
Priority inheritance and mutexes
What if a higher priority task is blocked on a mutex held (owned) by a lower priority task?

★ By default, if a task with a higher priority than the mutex owner attempts to lock the mutex, then the effective priority of the current owner is increased to that of the higher-priority blocked thread waiting for the mutex.

★ The current owner's effective priority is again adjusted when it unlocks the mutex; its new priority is the maximum of its own priority and the priorities of those threads it still blocks, either directly or indirectly.
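The recomputation rule in the second bullet can be modelled with a toy calculation. This is an illustration of the rule only, not a real scheduler; the function name and the numeric priorities are mine:

```python
def effective_priority(own_priority, blocked_priorities):
    """Effective priority of a lock owner under priority inheritance:
    the maximum of its own priority and the priorities of all threads
    it still blocks. Higher number = higher priority in this toy model."""
    return max([own_priority] + blocked_priorities)

# L (priority 1) owns a mutex; H (priority 9) and M (priority 5) wait on it.
print(effective_priority(1, [9, 5]))  # → 9: L runs at H's inherited priority

# After H is no longer blocked, L still blocks only M.
print(effective_priority(1, [5]))     # → 5

# Once L blocks nobody, it runs at its original priority again.
print(effective_priority(1, []))      # → 1
```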
