08-syncproblems

The document discusses classical synchronization problems in computer science, focusing on the Producer-Consumer and Readers-Writers problems. It explains the use of semaphores and monitors for managing concurrency and ensuring mutual exclusion among threads. Additionally, it highlights common mistakes in synchronization and provides guidelines for implementing solutions effectively.


Classical Synchronization Problems
Announcements
• CS 4410 #1 grades and solutions available in CMS soon.
  – Average 54, stddev 9.7
  – High of 70
  – Score out of 70 pts
  – Regrading requests, policies, etc.
• Submit written regrade requests to lead TA, Nazrul Alam
  – He will assign another grader to the problem
  – If necessary, submit another written request to Nazrul to grade it directly
  – Finally, if still unhappy, submit another written request to Nazrul for the Prof
• Advice: Come to all lectures. Go to office hours for clarifications.
  Start early on HW.

Announcements
• CS 4410 #2 due today, Tuesday, Sept 23rd
• CS 4411 project due next Wednesday, Oct 1st
Homework: Processes
• Can a process transition from waiting for an I/O
operation to the terminated state?
– No. A process waiting for I/O must first transition to the ready
queue and then to the running state before it may terminate.

[Process state diagram: New →(admitted)→ Ready →(dispatch)→ Running;
 Running →(interrupt/yield)→ Ready; Running →(I/O or event wait)→ Waiting;
 Waiting →(I/O or event completion)→ Ready; Running →(exit/done)→ Terminated]
Review: Paradigms for Threads to Share Data
• We’ve looked at critical sections
– Really, a form of locking
– When one thread will access shared data, first
it gets a kind of lock
– This prevents other threads from accessing
that data until the first one has finished
– We saw that semaphores make it easy to
implement critical sections
Reminder: Critical Section
• Classic notation due to Dijkstra:

Semaphore mutex = 1;
CSEnter() { P(mutex); }
CSExit() { V(mutex); }

• Other notation (more familiar in Java):


CSEnter() { mutex.wait(); }
CSExit() { mutex.signal(); }
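
For instance, the Dijkstra-style notation above maps onto POSIX semaphores roughly as follows (a minimal sketch of my own, not from the slides):

    #include <semaphore.h>

    static sem_t mutex;            /* initialize once: sem_init(&mutex, 0, 1); */

    void CSEnter(void) { sem_wait(&mutex); }   /* P(mutex) */
    void CSExit(void)  { sem_post(&mutex); }   /* V(mutex) */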
Review: Common errors

    Process i        Process j        Process k

      P(S)             V(S)             P(S)
      CS               CS               CS
      P(S)             V(S)

• A typo. Process i will get stuck (forever) the second time it does the
  P() operation. Moreover, every other process will freeze up too when
  trying to enter the critical section!

• A typo. Process j won’t respect mutual exclusion even if the other
  processes follow the rules correctly. Worse still, once we’ve done two
  “extra” V() operations this way, other processes might get into the CS
  inappropriately!

• Process k forgets to exit the critical section with V(). Whoever next
  calls P() will freeze up. The bug might be confusing because that other
  process could be perfectly correct code, yet that’s the one you’ll see
  hung when you use the debugger to look at its state!
Review: More common mistakes
• Conditional code that can break the normal top-to-bottom flow of code
  in the critical section
• Often a result of someone trying to maintain a program, e.g. to fix a
  bug or add functionality in code written by someone else

    P(S)
    if(something or other)
        return;
    CS
    V(S)
Goals for Today
• Classic Synchronization problems
– Producer/Consumer with bounded buffer
– Reader/Writer
• Solutions to classic problems via semaphores
• Solutions to classic problems via monitors
Bounded Buffer
• Arises when two or more threads communicate
– some threads “produce” data that others “consume”.

• Example: the preprocessor for a compiler “produces” a preprocessed
  source file that the parser of the compiler “consumes”
Readers and Writers
• In this model, threads share data that
– some threads “read” and other threads “write”.
• Instead of CSEnter and CSExit we want
– StartRead…EndRead; StartWrite…EndWrite

• Goal: allow multiple concurrent readers but only a single writer at a
  time, and if a writer is active, readers wait for it to finish
Producer-Consumer Problem
• Start by imagining an unbounded (infinite) buffer
• Producer process writes data to buffer
– Writes to In and moves rightwards
• Consumer process reads data from buffer
– Reads from Out and moves rightwards
– Should not try to consume if there is no data

[Diagram: unbounded buffer; the consumer reads at Out, the producer writes
 at In, and both move rightwards, so this would need an infinite buffer]


Producer-Consumer Problem
• Bounded buffer: size ‘N’
– Access entry 0… N-1, then “wrap around” to 0 again
• Producer process writes data to buffer
– Must not write more than ‘N’ items more than consumer “ate”
• Consumer process reads data from buffer
– Should not try to consume if there is no data

[Diagram: bounded circular buffer with slots 0 .. N-1; the In and Out
 indices wrap around to 0]
Producer-Consumer Problem
• A number of applications:
– Data from bar-code reader consumed by device driver
– Data in a file you want to print consumed by printer spooler,
which produces data consumed by line printer device driver
– Web server produces data consumed by client’s web browser
• Example: so-called “pipe” ( | ) in Unix
> cat file | sort | uniq | more
> prog | sort
• Thought questions: where’s the bounded buffer?
• How “big” should the buffer be, in an ideal world?
Producer-Consumer Problem
• Solving with semaphores
– We’ll use two kinds of semaphores
– We’ll use counters to track how much data is in the
buffer
• One counter counts as we add data and stops the producer if
there are N objects in the buffer
• A second counter counts as we remove data and stops a
consumer if there are 0 in the buffer
– Idea: since general semaphores can count for us, we
don’t need a separate counter variable
• Why do we need a second kind of semaphore?
– We’ll also need a mutex semaphore
Producer-Consumer with a Bounded Buffer
• Problem Definition
– Producer puts things into a shared buffer (wait if full)
– Consumer takes them out (wait if empty)
– Use a fixed-size buffer between them to avoid lockstep
• Need to synchronize access to this buffer
• Correctness Constraints:
– Consumer must wait for producer to fill buffers, if none full
• scheduling constraint
– Producer must wait for consumer to empty buffers, if all full
• scheduling constraint
– Only one thread can manipulate buffer queue at a time
• mutual exclusion
• Remember why we need mutual exclusion
• General rule of thumb: use a separate semaphore for each constraint
  – Semaphore not_empty;  // consumer’s constraint
  – Semaphore not_full;   // producer’s constraint
  – Semaphore mutex;      // mutual exclusion
Producer-Consumer Problem

Init: Semaphore mutex = 1;      /* for mutual exclusion */
      Semaphore not_full = N;   /* number of empty buf entries */
      Semaphore not_empty = 0;  /* number of full buf entries */
      any_t buf[N];
      int tail = 0, head = 0;

Producer:
    void put(char ch) {
        P(not_full);
        P(mutex);
        // add ch to buffer
        buf[head%N] = ch;
        head++;
        V(mutex);
        V(not_empty);
    }

Consumer:
    char get() {
        P(not_empty);
        P(mutex);
        // remove ch from buffer
        ch = buf[tail%N];
        tail++;
        V(mutex);
        V(not_full);
        return ch;
    }
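
For reference, a minimal runnable sketch of the same solution using POSIX semaphores (my own translation, not from the slides; the buffer size N and the put/get names mirror the pseudocode above):

    #include <semaphore.h>

    #define N 16

    static char buf[N];
    static int head = 0, tail = 0;
    static sem_t mutex, not_full, not_empty;

    /* call once before starting any threads:
       sem_init(&mutex, 0, 1);
       sem_init(&not_full, 0, N);
       sem_init(&not_empty, 0, 0); */

    void put(char ch) {
        sem_wait(&not_full);      /* P(not_full): wait for an empty slot */
        sem_wait(&mutex);         /* P(mutex): enter critical section */
        buf[head % N] = ch;
        head++;
        sem_post(&mutex);         /* V(mutex) */
        sem_post(&not_empty);     /* V(not_empty): one more full slot */
    }

    char get(void) {
        sem_wait(&not_empty);     /* P(not_empty): wait for data */
        sem_wait(&mutex);
        char ch = buf[tail % N];
        tail++;
        sem_post(&mutex);
        sem_post(&not_full);      /* V(not_full): one more empty slot */
        return ch;
    }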
Discussion about Bounded Buffer Solution
• Why asymmetry?
– Producer does: P(not_full), V(not_empty)
– Consumer does: P(not_empty), V(not_full)
• Is order of P’s important?
– Yes! Can cause deadlock
• Is order of V’s important?
– No, except that it might affect scheduling efficiency
• What if we have 2 producers or 2 consumers?
– Do we need to change anything?

What’s wrong?

Init: Semaphore mutex = 1;      /* for mutual exclusion */
      Semaphore not_full = N;   /* number of empty buf entries */
      Semaphore not_empty = 0;  /* number of full buf entries */
      any_t buf[N];
      int tail = 0, head = 0;

Producer:
    void put(char ch) {
        P(mutex);          // What if the buffer is full?
        P(not_full);
        // add ch to buffer
        buf[head%N] = ch;
        head++;
        V(mutex);
        V(not_empty);
    }

Consumer:
    char get() {
        P(not_empty);
        P(mutex);
        // remove ch from buffer
        ch = buf[tail%N];
        tail++;
        V(mutex);
        V(not_full);
        return ch;
    }

Oops! Even if you do the correct operations, the order in which you do
semaphore operations can have an incredible impact on correctness.
Readers/Writers Problem

[Diagram: one writer (W) and several readers (R) accessing a shared database]

• Motivation: Consider a shared database
  – Two classes of users:
    • Readers – never modify database
    • Writers – read and modify database
  – Is using a single lock on the whole database sufficient?
    • Like to have many readers at the same time
    • Only one writer at a time
Readers-Writers Problem
• Courtois et al 1971
• Models access to a database
– A reader is a thread that needs to look at the database but won’t
change it.
– A writer is a thread that modifies the database
• Example: making an airline reservation
– When you browse to look at flight schedules the web site is
acting as a reader on your behalf
– When you reserve a seat, the web site has to write into the
database to make the reservation
Readers-Writers Problem
• Many threads share an object in memory
– Some write to it, some only read it
– Only one writer can be active at a time
– Any number of readers can be active simultaneously
• Key insight: generalizes the critical section concept

• One issue we need to settle, to clarify problem statement.


– Suppose that a writer is active and a mixture of readers and writers
now shows up. Who should get in next?
– Or suppose that a writer is waiting and an endless stream of readers
keeps showing up. Is it fair for them to become active?

• Ideally we would like a kind of back-and-forth form of fairness:


– Once a reader is waiting, readers will get in next.
– If a writer is waiting, one writer will get in next.
Readers-Writers (Take 1)

Shared variables: Semaphore mutex, wrl;
                  integer rcount;
Init: mutex = 1, wrl = 1, rcount = 0;

Writer:
    do {
        P(wrl);
        ...
        /* writing is performed */
        ...
        V(wrl);
    } while(TRUE);

Reader:
    do {
        P(mutex);
        rcount++;
        if (rcount == 1)
            P(wrl);
        V(mutex);
        ...
        /* reading is performed */
        ...
        P(mutex);
        rcount--;
        if (rcount == 0)
            V(wrl);
        V(mutex);
    } while(TRUE);
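
A minimal sketch of this “Take 1” protocol with POSIX semaphores (my own translation, not from the slides; start_read/end_read/start_write/end_write correspond to the StartRead…EndRead, StartWrite…EndWrite calls from the earlier slide):

    #include <semaphore.h>

    static sem_t mutex, wrl;      /* initialize both to 1 */
    static int rcount = 0;        /* number of active readers */

    void start_read(void) {
        sem_wait(&mutex);
        rcount++;
        if (rcount == 1)          /* first reader locks out writers */
            sem_wait(&wrl);
        sem_post(&mutex);
    }

    void end_read(void) {
        sem_wait(&mutex);
        rcount--;
        if (rcount == 0)          /* last reader lets writers back in */
            sem_post(&wrl);
        sem_post(&mutex);
    }

    void start_write(void) { sem_wait(&wrl); }
    void end_write(void)   { sem_post(&wrl); }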
Readers-Writers Notes
• If there is a writer
– First reader blocks on wrl
– Other readers block on mutex
• Once a reader is active, all readers get to go through
– Trick question: Which reader gets in first?
• The last reader to exit signals a writer
– If no writer, then readers can continue
• If readers and writers waiting on wrl, and writer exits
– Who gets to go in first?
• Why doesn’t a writer need to use mutex?
Does this work as we hoped?
• If readers are active, no writer can enter
– The writers wait doing a P(wrl)
• While writer is active, nobody can enter
– Any other reader or writer will wait
• But back-and-forth switching is buggy:
– Any number of readers can enter in a row
– Readers can “starve” writers
• With semaphores, building a solution that has the
desired back-and-forth behavior is really, really tricky!
– We recommend that you try, but not too hard…
Postscript: semaphores
• Semaphores are very “low-level” primitives
– Users could easily make small errors
– Similar to programming in assembly language
• Small error brings system to grinding halt
– Very difficult to debug

• Also, we seem to be using them in two ways


– For mutual exclusion, the “real” abstraction is a critical section
– But the bounded buffer example illustrates something different, where
threads “communicate” using semaphores

• Simplification: Provide concurrency support in compiler


– Monitors
Monitors
• Hoare 1974
• Abstract Data Type for handling/defining shared resources
• Comprises:
– Shared Private Data
• The resource
• Cannot be accessed from outside
– Procedures that operate on the data
• Gateway to the resource
• Can only act on data local to the monitor
– Synchronization primitives
• Among threads that access the procedures
Monitor Semantics
• Monitors guarantee mutual exclusion
– Only one thread can execute monitor procedure at any time
• “in the monitor”
– If second thread invokes monitor procedure at that time
• It will block and wait for entry to the monitor
⇒ Need for a wait queue
– If thread within a monitor blocks, another can enter
• Effect on parallelism?

Structure of a Monitor

Monitor monitor_name
{
    // shared variable declarations

    procedure P1(. . .) {
        . . .
    }
    procedure P2(. . .) {
        . . .
    }
    .
    .
    procedure PN(. . .) {
        . . .
    }
    initialization_code(. . .) {
        . . .
    }
}

For example:

Monitor stack
{
    int top;
    void push(any_t *) {
        . . .
    }
    any_t * pop() {
        . . .
    }
    initialization_code() {
        . . .
    }
}
only one instance of stack can be modified at a time
Schematic view of a Monitor

[Figure: schematic view of a monitor]
Synchronization Using Monitors
• Defines Condition Variables:
– condition x;
– Provides a mechanism to wait for events
• Resources available, any writers
• 3 atomic operations on Condition Variables
– x.wait(): release monitor lock, sleep until woken up
⇒ condition variables have waiting queues too
– x.notify(): wake one process waiting on condition (if there is one)
• No history associated with signal
– x.broadcast(): wake all processes waiting on condition
• Useful for resource manager
• Condition variables are not Boolean
– If(x) then { } does not make sense
Monitor with Condition Variables

[Figure: monitor with condition variable wait queues]
Producer Consumer using Monitors

Monitor Producer_Consumer {
    any_t buf[N];
    int n = 0, tail = 0, head = 0;
    condition not_empty, not_full;

    void put(char ch) {
        if(n == N)
            wait(not_full);
        buf[head%N] = ch;
        head++;
        n++;
        signal(not_empty);
    }

    char get() {
        if(n == 0)
            wait(not_empty);
        ch = buf[tail%N];
        tail++;
        n--;
        signal(not_full);
        return ch;
    }
}

What if no thread is waiting when signal is called?
Signal is a “no-op” if nobody is waiting. This is very different from what
happens when you call V() on a semaphore – semaphores have a “memory” of
how many times V() was called!
Types of wait queues
• Monitors have several kinds of “wait” queues
– Condition variable: has a queue of threads waiting on
the associated condition
• Thread goes to the end of the queue
– Entry to the monitor: has a queue of threads waiting
to obtain mutual exclusion so they can enter
• Again, a new arrival goes to the end of the queue
– So-called “urgent” queue: threads that were just
woken up using signal().
• New arrival normally goes to the front of this queue

Producer Consumer using Monitors

Monitor Producer_Consumer {
    condition not_full;
    /* other vars */
    condition not_empty;

    void put(char ch) {
        wait(not_full);
        ...
        signal(not_empty);
    }
    char get() {
        ...
    }
}
Types of Monitors
What happens on signal():
• Hoare: signaler immediately gives lock to waiter (theory)
– Condition definitely holds when waiter returns
– Easy to reason about the program
– Calling thread goes on the urgent queue
• Mesa: signaler keeps lock and processor (practice)
– Condition might not hold when waiter returns
– Fewer context switches, easy to support broadcast
– Considered harder to work with this style of monitor
• Brinch Hansen: signaller must immediately exit monitor
– So, notify should be last statement of monitor procedure
– We recommend this approach!

Mesa-style monitor subtleties

// producer/consumer with monitors
char buf[N];
int n = 0, tail = 0, head = 0;
condition not_empty, not_full;

void put(char ch) {
    if(n == N)
        wait(not_full);
    buf[head%N] = ch;
    head++;
    n++;
    signal(not_empty);
}

char get() {
    if(n == 0)
        wait(not_empty);
    ch = buf[tail%N];
    tail++;
    n--;
    signal(not_full);
    return ch;
}

Consider the following time line:
0. initial condition: n = 0
1. c0 tries to take a char, blocks on not_empty (releasing the monitor lock)
2. p0 puts a char (n = 1), signals not_empty
3. c0 is put on the run queue
4. Before c0 runs, another consumer thread c1 enters and takes the character (n = 0)
5. c0 runs.

Possible fixes?
Mesa-style subtleties

// producer/consumer with monitors
char buf[N];
int n = 0, tail = 0, head = 0;
condition not_empty, not_full;

void put(char ch) {
    while(n == N)
        wait(not_full);
    buf[head] = ch;
    head = (head+1) % N;
    n++;
    signal(not_empty);
}

char get() {
    while(n == 0)
        wait(not_empty);
    ch = buf[tail];
    tail = (tail+1) % N;
    n--;
    signal(not_full);
    return ch;
}

When can we replace “while” with “if”?
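
Pthreads condition variables behave Mesa-style (a signaled waiter must re-check its condition), so the while-loop version above maps directly onto them. A minimal sketch of my own, not from the slides:

    #include <pthread.h>

    #define N 16

    static char buf[N];
    static int n = 0, head = 0, tail = 0;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;  /* the "monitor" lock */
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    void put(char ch) {
        pthread_mutex_lock(&lock);
        while (n == N)                             /* Mesa style: re-check after waking */
            pthread_cond_wait(&not_full, &lock);
        buf[head] = ch;
        head = (head + 1) % N;
        n++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    char get(void) {
        pthread_mutex_lock(&lock);
        while (n == 0)
            pthread_cond_wait(&not_empty, &lock);
        char ch = buf[tail];
        tail = (tail + 1) % N;
        n--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        return ch;
    }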
Condition Variables & Semaphores
• Condition Variables != semaphores
• Access to monitor is controlled by a lock
– Wait: blocks on thread and gives up the lock
• To call wait, thread has to be in monitor, hence the lock
• Semaphore P() blocks thread only if value less than 0
– Signal: causes waiting thread to wake up
• If there is no waiting thread, the signal is lost
• V() increments value, so future threads need not wait on P()
• Condition variables have no history
• However they can be used to implement each other

Hoare Monitors using Semaphores

For each procedure F:
    P(mutex);
    /* body of F */
    if(next_count > 0)
        V(next);
    else
        V(mutex);

Condition Var Wait, x.wait:
    x_count++;
    if(next_count > 0)
        V(next);
    else
        V(mutex);
    P(x_sem);
    x_count--;

Condition Var Notify, x.notify:
    if(x_count > 0) {
        next_count++;
        V(x_sem);
        P(next);
        next_count--;
    }
Language Support
• Can be embedded in programming language:
– Synchronization code added by compiler, enforced at runtime
– Mesa/Cedar from Xerox PARC
– Java: synchronized, wait, notify, notifyall
– C#: lock, wait (with timeouts) , pulse, pulseall
• Monitors easier and safer than semaphores
– Compiler can check, lock implicit (cannot be forgotten)
• Why not put everything in the monitor?

Eliminating Locking Overhead
• Remove locks by duplicating state
– Each instance only has one writer
– Assumption: assignment is atomic
• Non-blocking/Wait free Synchronization
– Do not use locks
– Optimistically do the transaction
– If commit fails, then retry

Optimistic Concurrency Control
• Example: hits = hits + 1;
A) Read hits into register R1
B) Add 1 to R1 and store it in R2
C) Atomically store R2 in hits only if hits==R1 (i.e. CAS)
• If store didn’t write goto A
• Can be extended to any data structure:
A) Make copy of data structure, modify copy.
B) Use atomic word compare-and-swap to update pointer.
C) Goto A if some other thread beat you to the update.
• Pro: less overhead, deals with failures better
• Con: lots of retrying under heavy load
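
Here is a minimal sketch of the hits = hits + 1 example using C11 atomics (my own illustration; the slide does not prescribe a particular API):

    #include <stdatomic.h>

    static _Atomic int hits = 0;

    void increment_hits(void) {
        int expected, desired;
        do {
            expected = atomic_load(&hits);   /* A: read hits into a "register" */
            desired  = expected + 1;         /* B: add 1 */
            /* C: atomically store desired only if hits still equals expected
               (compare-and-swap); if another thread got there first, retry */
        } while (!atomic_compare_exchange_weak(&hits, &expected, desired));
    }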

To conclude
• Race conditions are a pain!
• We studied five ways to handle them
– Each has its own pros and cons
• Support in Java, C# has simplified writing multithreaded
applications
• Some new program analysis tools automate checking to
make sure your code is using synchronization correctly
– The hard part for these is to figure out what “correct” means!
– None of these tools would make sense of the bounded buffer
(those in the business sometimes call it the “unbounded
bugger”)

