Mutual Exclusion Algorithms


Non-token based:
  A site/process can enter the critical section when an
  assertion (condition) becomes true.
  The algorithm must ensure that the assertion is true at
  only one site/process at a time.
Token based:
  A unique token (a known, unique message) is shared
  among the cooperating sites/processes.
  The possessor of the token has access to the critical section.
  Must handle conditions such as loss of the token, crash
  of the token holder, possibility of multiple tokens, etc.

General System Model

At any instant, a site may have several requests for the
critical section (CS) queued up; they are serviced one at a time.
Site states: requesting the CS, executing the CS, idle
(neither requesting nor executing the CS).
  Requesting CS: blocked until granted access; the site cannot
  make additional requests for the CS.
  Executing CS: using the CS.
  Idle: the site's activity is outside the CS. In token-based
  approaches, an idle site can hold the token.

Mutual Exclusion: Requirements

Freedom from deadlock: two or more sites should not
endlessly wait on conditions/messages that never become
true/arrive.
Freedom from starvation: no indefinite waiting.
Fairness: the order of execution of the CS follows the order
of the requests for the CS (assuming equal priority).
Fault tolerance: recognize faults, reorganize, and continue
(e.g., after loss of the token).

Performance

Number of messages per CS invocation: should be minimized.
Synchronization delay, i.e., the time between one site leaving
the CS and the next site entering it: should be minimized.
Response time: the time interval between the transmission of
the request messages and the site's exit from the CS.
System throughput, i.e., the rate at which the system executes
requests for the CS: should be maximized.
If sd is the synchronization delay and E the average CS
execution time: system throughput = 1 / (sd + E).
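As a quick numeric sanity check of the throughput formula (the values of T and E below are illustrative assumptions, not from the lecture):

```python
# Assumed illustrative values: message delay T = 10 ms, CS execution E = 5 ms.
T, E = 0.010, 0.005
sd = T                       # e.g., a synchronization delay of one message hop
throughput = 1 / (sd + E)    # CS executions per second
print(round(throughput, 1))  # -> 66.7
```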

Performance metrics

[Figure: timeline of the two metrics. Synchronization delay:
from the last site exiting the CS to the next site entering it.
Response time: from the arrival of the CS request (messages
sent) to the site's exit from the CS.]

Performance ...

Low and high load:
  Low load: no more than one pending request at a given point in time.
  High load: always a pending mutual exclusion request at a site.
Best and worst case:
  Best case (low load): round-trip message delay + execution
  time = 2T + E.
  Worst case (high load).
Message traffic: low at low loads, high at high loads.
Average performance: the relevant measure when load
conditions fluctuate widely.

Simple Solution

Control site: grants permission for CS execution.
  A site sends a REQUEST message to the control site.
  The controller grants access one request at a time.
Synchronization delay: 2T. A site releases the CS by sending
a message to the controller, and the controller then sends
permission to the next site.
System throughput: 1/(2T + E). If the synchronization delay
is reduced to T, the throughput doubles.
The controller becomes a bottleneck; congestion can occur.
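A minimal Python sketch of this centralized scheme (all names are illustrative; message passing is abstracted into direct method calls):

```python
from collections import deque

class ControlSite:
    """Central controller: grants CS access to one site at a time."""
    def __init__(self):
        self.queue = deque()   # pending REQUESTs (site ids), FIFO
        self.holder = None     # site currently executing its CS

    def request(self, site):
        """A site's REQUEST message. Returns True if access is granted now."""
        if self.holder is None:
            self.holder = site
            return True
        self.queue.append(site)
        return False

    def release(self, site):
        """A site's release message. Returns the next site granted access, if any."""
        assert site == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder
```

With this scheme, every CS entry costs three messages (REQUEST, grant, release), and all traffic funnels through the single controller.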

Non-token Based Algorithms

Notations:
  Si: site i.
  Ri: request set, containing the ids of all sites from which
  Si must receive permission before accessing the CS.
Non-token based approaches use time stamps to order requests
for the CS. Smaller time stamps get priority over larger ones.

Lamport's Algorithm

Ri = {S1, S2, ..., Sn}, i.e., all sites.
Request queue: maintained at each Si, ordered by time stamps.
Assumption: messages are delivered in FIFO order.

Lamport's Algorithm

Requesting CS:
  Si sends REQUEST(tsi, i) to all sites. (tsi, i) is the request
  time stamp. Si places the REQUEST in request_queue_i.
  On receiving the message, Sj sends a time-stamped REPLY
  message to Si and places Si's request in request_queue_j.
Executing CS: Si enters the CS when both conditions hold:
  Si has received a message with a time stamp larger than
  (tsi, i) from every other site.
  Si's request is at the top of request_queue_i.
Releasing CS:
  On exiting the CS, Si sends a time-stamped RELEASE message
  to all sites in its request set.
  On receiving the RELEASE message, Sj removes Si's request
  from its queue.
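The per-site bookkeeping above can be sketched in Python (a sketch only: the message transport is abstracted away, and the class and method names are my own):

```python
import heapq

class LamportSite:
    """One site in Lamport's algorithm (sketch; reliable FIFO transport assumed)."""
    def __init__(self, sid, n):
        self.sid, self.n = sid, n
        self.clock = 0
        self.queue = []         # request queue: a heap ordered by (ts, site id)
        self.last_seen = {}     # highest timestamp received from each other site

    def request_cs(self):
        self.clock += 1
        heapq.heappush(self.queue, (self.clock, self.sid))
        return ('REQUEST', self.clock, self.sid)      # broadcast to all others

    def on_request(self, ts, j):
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.queue, (ts, j))
        return ('REPLY', self.clock, self.sid)        # time-stamped REPLY to j

    def on_reply(self, ts, j):
        self.last_seen[j] = max(self.last_seen.get(j, 0), ts)

    def on_release(self, j):
        # RELEASE from Sj: drop Sj's request from the local queue
        self.queue = [(ts, k) for (ts, k) in self.queue if k != j]
        heapq.heapify(self.queue)

    def can_enter(self):
        # enter iff own request heads the queue AND a later-stamped
        # message has arrived from every other site
        if not self.queue or self.queue[0][1] != self.sid:
            return False
        my_ts = self.queue[0][0]
        return all(self.last_seen.get(j, 0) > my_ts
                   for j in range(self.n) if j != self.sid)
```

For simplicity this sketch counts only REPLY messages toward the "larger time stamp from every site" condition; in the full algorithm any later-stamped message qualifies.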

Lamport's Algorithm

Performance:
  3(N-1) messages per CS invocation: (N-1) REQUEST, (N-1)
  REPLY, and (N-1) RELEASE messages.
  Synchronization delay: T.
Optimization:
  Suppress REPLY messages. E.g., if Sj receives a REQUEST
  message from Si after sending its own REQUEST message with
  a time stamp higher than Si's, Sj need not send a REPLY:
  Sj's own REQUEST already carries a larger time stamp.
  Messages reduced to between 2(N-1) and 3(N-1).

Lamport's Algorithm: Example

[Figure, Steps 1-2: S1 requests with time stamp (2,1) and S2
with (1,2); after the REQUEST messages are exchanged, every
site's queue holds (1,2) (2,1). S2, whose request has the
smaller time stamp, enters the CS.
Steps 3-4: S2 leaves the CS and sends RELEASE messages;
(1,2) is removed from every queue, leaving (2,1) at the top,
and S1 enters the CS.]

Ricart-Agrawala Algorithm

Requesting critical section:
  Si sends a time-stamped REQUEST message to the sites in
  its request set.
  Sj sends a REPLY to Si if:
    Sj is neither requesting nor executing the CS, or
    Sj is requesting the CS and Si's time stamp is smaller
    than that of Sj's own request.
  Otherwise, the request is deferred.
Executing CS: after Si has received a REPLY from all sites in
its request set.
Releasing CS: send a REPLY to all deferred requests. I.e., a
site's REPLY messages are withheld only by sites with
smaller time stamps.
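A Python sketch of the per-site rules (illustrative names; message delivery is abstracted away, and timestamp ties are broken by site id as usual):

```python
class RASite:
    """Ricart-Agrawala site (sketch)."""
    def __init__(self, sid, n):
        self.sid, self.n = sid, n
        self.clock = 0
        self.requesting = False
        self.my_ts = None
        self.deferred = []      # sites whose REQUESTs we answer on release
        self.replies = set()

    def request_cs(self):
        self.clock += 1
        self.requesting, self.my_ts = True, self.clock
        self.replies.clear()
        return ('REQUEST', self.my_ts, self.sid)   # to every site in the request set

    def on_request(self, ts, j):
        self.clock = max(self.clock, ts) + 1
        # defer iff our own pending request has priority (smaller (ts, id))
        if self.requesting and (self.my_ts, self.sid) < (ts, j):
            self.deferred.append(j)
            return None
        return ('REPLY', self.sid)

    def on_reply(self, j):
        self.replies.add(j)
        return len(self.replies) == self.n - 1     # True -> may enter the CS

    def release_cs(self):
        self.requesting = False
        out, self.deferred = self.deferred, []
        return out                                 # send REPLY to each deferred site
```

Note that deferring, rather than replying and later revoking, is what merges Lamport's REPLY and RELEASE into a single message and yields the 2(N-1) count.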


Ricart-Agrawala: Performance

Performance:
  2(N-1) messages per CS execution: (N-1) REQUEST + (N-1)
  REPLY messages.
  Synchronization delay: T.
Optimization:
  When Si receives a REPLY message from Sj, Si is authorized
  to access the CS until Sj sends a REQUEST message and Si
  sends a REPLY message back.
  Si can access the CS repeatedly until then.
  A site requests permission from a dynamically varying set of
  sites: 0 to 2(N-1) messages per CS execution.

Ricart-Agrawala: Example

[Figure, Steps 1-2: S1 requests with time stamp (2,1) and S2
with (1,2). S2's request has the smaller time stamp, so S1 and
S3 send REPLY messages to S2 while S2 defers S1's request;
S2 enters the CS.
Step 3: on leaving the CS, S2 sends the deferred REPLY to S1,
and S1 enters the CS.]

Maekawa's Algorithm

A site requests permission only from a subset of sites.
  Request sets of sites Si and Sj: Ri and Rj are chosen so that
  they have at least one common site Sk. Sk mediates conflicts
  between Ri and Rj.
  A site can send only one REPLY message at a time, i.e., a site
  can send a REPLY message only after receiving a RELEASE
  message for the previous REPLY message.
Request set rules:
  Sets Ri and Rj have at least one common site.
  Si is always in Ri.
  The cardinality of Ri, i.e., the number of sites in Ri, is K.
  Any site Si is contained in K request sets. N = K(K-1) + 1,
  so K is approximately the square root of N.

Maekawa's Algorithm ...

Requesting CS:
  Si sends REQUEST(i) to the sites in Ri.
  Sj sends a REPLY to Si if Sj has NOT sent a REPLY message
  to any site since it received the last RELEASE message.
  Otherwise, Sj queues up Si's request.
Executing CS: after getting a REPLY from all sites in Ri.
Releasing CS:
  Si sends RELEASE(i) to all sites in Ri.
  Any Sj, after receiving the RELEASE message, sends a REPLY
  message to the next request in its queue.
  If the queue is empty, Sj updates its status to indicate
  receipt of the RELEASE.

Request Subsets

Example K = 2 (N = 3): R1 = {1, 2}; R2 = {2, 3}; R3 = {1, 3}.
Example K = 3 (N = 7): R1 = {1, 2, 3}; R2 = {2, 4, 6};
R3 = {3, 5, 6}; R4 = {1, 4, 5}; R5 = {2, 5, 7}; R6 = {1, 6, 7};
R7 = {3, 4, 7}.

The full algorithm is in Maekawa's paper (uploaded on the
Lecture Notes web page).

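The K = 3, N = 7 sets above can be checked mechanically; the short Python script below (written for this example) verifies the four request-set rules:

```python
K = 3
R = {1: {1, 2, 3}, 2: {2, 4, 6}, 3: {3, 5, 6}, 4: {1, 4, 5},
     5: {2, 5, 7}, 6: {1, 6, 7}, 7: {3, 4, 7}}

# Rule 1: every pair of request sets intersects.
assert all(R[i] & R[j] for i in R for j in R)
# Rule 2: Si is always in Ri.
assert all(i in R[i] for i in R)
# Rule 3: every set has exactly K members.
assert all(len(R[i]) == K for i in R)
# Rule 4: every site appears in exactly K sets, and N = K(K-1) + 1.
assert all(sum(i in R[j] for j in R) == K for i in R)
assert len(R) == K * (K - 1) + 1
print("all request-set rules hold")
```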

Maekawa's Algorithm ...

Performance:
  Synchronization delay: 2T.
  Messages: 3 * sqrt(N) per CS invocation (one each of the
  REQUEST, REPLY, and RELEASE messages per site in the
  request set).
Deadlocks:
  Message deliveries are not ordered.
  Assume Si, Sj, Sk concurrently request the CS, with
  Ri intersect Rj = {Sij}, Rj intersect Rk = {Sjk},
  Rk intersect Ri = {Ski}.
  It is possible that:
    Sij is locked by Si (forcing Sj to wait at Sij),
    Sjk by Sj (forcing Sk to wait at Sjk),
    Ski by Sk (forcing Si to wait at Ski)
  -> deadlock among Si, Sj, and Sk.

Handling Deadlocks

Si yields to a request if that request has a smaller time stamp.
A site suspects a deadlock when it is locked by a request with
a higher time stamp (lower priority).
Deadlock handling messages:
  FAILED: from Si to Sj -> Si has granted permission to a
  higher priority request.
  INQUIRE: from Si to Sj -> Si would like to know whether Sj
  has succeeded in locking all the sites in Sj's request set.
  YIELD: from Si to Sj -> Si is returning the permission to Sj
  so that Sj can yield to a higher priority request.

Handling Deadlocks

REQUEST(tsi, i) to Sj:
  If Sj is locked by Sk, Sj sends FAILED to Si if Si's request
  has a higher time stamp. Otherwise, Sj sends INQUIRE(j) to Sk.
INQUIRE(j) to Sk:
  Sk sends YIELD(k) to Sj if Sk has received a FAILED message
  from a site in Sk's request set, or if Sk sent a YIELD and has
  not received a new REPLY.
YIELD(k) to Sj:
  Sj assumes it has been released by Sk, places Sk's request in
  its queue appropriately, and sends REPLY(j) to the top request
  in its queue.

Sites may exchange these messages even if there is no real
deadlock. Maximum number of messages per CS request:
5 * sqrt(N).

Token-based Algorithms

A unique token circulates among the participating sites.
  A site can enter the CS if it holds the token.
  Token-based approaches use sequence numbers instead of
  time stamps.
    A request for the token contains a sequence number.
    The sequence numbers of sites advance independently.
  Correctness is trivial since only one token is present
  -> only one site can be in the CS at a time.
  Deadlock and starvation issues still need to be addressed.

Suzuki-Kasami Algorithm

If a site that does not hold the token needs to enter the CS, it
broadcasts a REQUEST-for-token message to all other sites.
Token: (a) a queue Q of requesting sites, (b) an array LN[1..N],
where LN[j] is the sequence number of the most recent
execution by site j.
The token holder sends the token to a requestor if it is not
inside the CS; otherwise, it sends the token after exiting the CS.
The token holder can make multiple CS accesses.
Design issues:
  Distinguishing outdated REQUEST messages:
    Format: REQUEST(j, n) -> site j making its nth request.
    Each site keeps RNi[1..N], where RNi[j] is the largest
    sequence number of a request from j seen so far.
  Determining which site has an outstanding token request:
    If LN[j] = RNi[j] - 1, then Sj has an outstanding request.

Suzuki-Kasami Algorithm ...

Passing the token:
  The token consists of Q and LN, where Q is a queue of
  requesting sites.
  After finishing the CS (assuming Si has the token):
  LN[i] := RNi[i].
  The token holder checks, for each j, whether
  RNi[j] = LN[j] + 1. If so, it places j in Q.
  It then sends the token to the site at the head of Q.
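The release step can be sketched as a single Python function (0-indexed arrays; the function name and in-place mutation are my own choices):

```python
from collections import deque

def release_token(i, RN_i, LN, Q):
    """Suzuki-Kasami release at token-holding site i (sketch).

    RN_i: site i's request-number array; LN: the token's LN array;
    Q: the token's queue of waiting site ids. Returns the site the
    token should be sent to, or None if site i keeps it."""
    LN[i] = RN_i[i]                        # record i's completed execution
    for j in range(len(RN_i)):             # enqueue sites with outstanding requests
        if j != i and j not in Q and RN_i[j] == LN[j] + 1:
            Q.append(j)
    return Q.popleft() if Q else None
```

With the numbers from the example slide (RN1 = [10, 15, 9], token LN = [10, 15, 8], queue <3>), the function hands the token to S3.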

Performance:
  0 to N messages per CS invocation.
  Synchronization delay: 0 (if the token holder repeats the CS)
  or T.


Suzuki-Kasami: Example

Step 1: S1 has the token; S3 is in the token queue.
  RN1 = [10, 15, 9]   RN2 = [10, 16, 9]   RN3 = [10, 15, 9]
  Token LN = [10, 15, 8]   Token queue = <3>
Step 2: S3 gets the token; S2 is in the queue.
  RN1 = [10, 16, 9]   RN2 = [10, 16, 9]   RN3 = [10, 16, 9]
  Token LN = [10, 15, 9]   Token queue = <2>
Step 3: S2 gets the token; the queue is empty.
  RN1 = [10, 16, 9]   RN2 = [10, 16, 9]   RN3 = [10, 16, 9]
  Token LN = [10, 16, 9]   Token queue = <empty>

Singhal's Heuristic Algorithm

Instead of broadcasting, each site maintains information on the
other sites and guesses which sites are likely to have the token.
Data structures:
  Si maintains SVi[1..N] and SNi[1..N], storing, for each site,
  its state and the highest sequence number seen from it.
  The token contains two arrays: TSV[1..N] and TSN[1..N].
  States of a site:
    R: requesting the CS
    E: executing the CS
    H: holding the token, idle
    N: none of the above
Initialization:
  SVi[j] := N for j = N .. i; SVi[j] := R for j = i-1 .. 1;
  SNi[j] := 0 for j = 1 .. N. S1 (site 1) starts in state H.
  Token: TSV[j] := N and TSN[j] := 0 for j = 1 .. N.

Singhal's Heuristic Algorithm

Requesting CS:
  If Si has no token and requests the CS:
    SVi[i] := R; SNi[i] := SNi[i] + 1.
    Send REQUEST(i, sn) to all sites Sj for which SVi[j] = R
    (sn: sequence number, the updated value of SNi[i]).
  On receiving REQUEST(i, sn): if sn <= SNj[i], ignore it
  (outdated). Otherwise, update SNj[i] and act on Sj's state:
    SVj[j] = N -> SVj[i] := R.
    SVj[j] = R -> if SVj[i] != R, set it to R and send
    REQUEST(j, SNj[j]) to Si; else do nothing.
    SVj[j] = E -> SVj[i] := R.
    SVj[j] = H -> SVj[i] := R; TSV[i] := R; TSN[i] := sn;
    SVj[j] := N. Send the token to Si.
Executing CS: after getting the token, set SVi[i] := E.

Singhal's Heuristic Algorithm

Releasing CS:
  SVi[i] := N; TSV[i] := N. Then, for every other Sj:
    if SNi[j] > TSN[j], then {TSV[j] := SVi[j]; TSN[j] := SNi[j]}
    else {SVi[j] := TSV[j]; SNi[j] := TSN[j]}
  If SVi[j] = N for all j, set SVi[i] := H. Otherwise, send the
  token to some site Sj for which SVi[j] = R.
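The update rule above, as a Python sketch (0-indexed arrays; state letters as strings; the function name is illustrative):

```python
def singhal_release(i, SV, SN, TSV, TSN):
    """Release step at site i, which holds the token (sketch).

    Reconciles the site's arrays (SV, SN) with the token's (TSV, TSN)
    using whichever has the fresher sequence number, then picks a
    requesting site to send the token to (None = keep it, state H)."""
    SV[i], TSV[i] = 'N', 'N'
    for j in range(len(SV)):
        if j == i:
            continue
        if SN[j] > TSN[j]:               # site i has fresher info about j
            TSV[j], TSN[j] = SV[j], SN[j]
        else:                            # token has fresher info about j
            SV[j], SN[j] = TSV[j], TSN[j]
    if all(s == 'N' for s in SV):
        SV[i] = 'H'                      # nobody requesting: keep token, go idle
        return None
    return next(j for j, s in enumerate(SV) if s == 'R')
```

Picking the first requesting index, as done here, is an arbitrary choice; this is exactly where the arbitration rules mentioned below come in.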

The fairness of the algorithm depends on the choice of Sj,
since no queue is maintained in the token.
Arbitration rules are used to ensure fairness.
Performance:
  Low to moderate loads: an average of N/2 messages.
  High loads: N messages (all sites request the CS).
  Synchronization delay: T.


Singhal: Example

[Figure: (a) Initial staircase pattern: each row of the state
matrix has an increasing number of R entries; S1 holds the
token (state H). (b) Pattern after S3 gets the token from S1.
The staircase pattern can be identified by noting that S1 has
one R, S2 has two Rs, and so on; the order of occurrence of R
within a row does not matter.]

Singhal: Example

Assume there are 3 sites in the system. Initially:
  Site 1: SV1[1] = H, SV1[2] = N, SV1[3] = N; all SNs are 0.
  Site 2: SV2[1] = R, SV2[2] = N, SV2[3] = N; all SNs are 0.
  Site 3: SV3[1] = R, SV3[2] = R, SV3[3] = N; all SNs are 0.
  Token: all TSVs are N; all TSNs are 0.
Assume site 2 requests the token:
  S2 sets SV2[2] = R, SN2[2] = 1.
  S2 sends REQUEST(2,1) to S1 (since only S1 is set to R in SV2).
  S1 receives the REQUEST and accepts it, since SN1[2] is
  smaller than the message's sequence number.
  Since SV1[1] is H: SV1[2] = R, TSV[2] = R, TSN[2] = 1,
  SV1[1] = N. S1 sends the token to S2.
  S2 receives the token and sets SV2[2] = E. After exiting the
  CS, SV2[2] = TSV[2] = N, and S2 updates SN, SV, TSN, TSV.
  Since nobody is requesting, SV2[2] = H.
Assume S3 makes a REQUEST now. It is sent to both S1 and S2.
Only S2 responds, since only SV2[2] is H (SV1[1] is N now).

Raymond's Algorithm

Sites are arranged in a logical directed tree. Root: the token
holder. Edges: directed towards the root.
Every site has a variable holder that points to an immediate
neighbor on the directed path towards the root (the root's
holder points to itself).
Requesting CS:
  If Si does not hold the token and requests the CS, it sends a
  REQUEST upwards provided its request_q is empty. It then
  adds its request to request_q.
  Non-empty request_q -> a REQUEST message has already been
  sent for the top entry (so none is sent again).
  A site on the path to the root receiving a REQUEST propagates
  it upwards if its request_q is empty, and adds the request to
  its request_q.
  The root, on receiving a REQUEST, sends the token to the site
  that forwarded the message and sets its holder to that
  forwarding site.
  Any Si receiving the token deletes the top entry from its
  request_q, sends the token to that site, and sets holder to
  point to it. If request_q is non-empty now, Si sends a
  REQUEST message to the holder site.
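A Python sketch of one tree node (message sending is modeled by returning (message, destination) pairs; names are illustrative, and for simplicity the sketch assumes the root is idle when a REQUEST reaches it):

```python
from collections import deque

class RaymondNode:
    """One node in Raymond's tree algorithm (sketch)."""
    def __init__(self, nid, holder):
        self.nid = nid
        self.holder = holder            # neighbor toward the token; self if root
        self.request_q = deque()

    def request_cs(self):
        msgs = []
        if self.holder != self.nid and not self.request_q:
            msgs.append(('REQUEST', self.holder))   # forward toward the root
        self.request_q.append(self.nid)
        return msgs

    def on_request(self, frm):
        msgs = []
        if self.holder == self.nid and not self.request_q:
            self.holder = frm                       # idle root: pass token down
            msgs.append(('TOKEN', frm))
        else:
            if not self.request_q and self.holder != self.nid:
                msgs.append(('REQUEST', self.holder))
            self.request_q.append(frm)
        return msgs

    def on_token(self):
        msgs = []
        nxt = self.request_q.popleft()
        if nxt == self.nid:
            self.holder = self.nid                  # our own request: enter CS
        else:
            self.holder = nxt                       # forward token down the tree
            msgs.append(('TOKEN', nxt))
        if self.request_q:                          # more waiters: ask for it back
            msgs.append(('REQUEST', self.holder))
        return msgs
```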


Raymond's Algorithm

Executing CS: a site enters the CS on getting the token when
its own entry is at the top of request_q; it deletes the top of
request_q and enters the CS.
Releasing CS:
  If request_q is non-empty, delete the top entry, send the
  token to that site, and set holder to that site.
  If request_q is still non-empty, send a REQUEST message to
  the holder site.
Performance:
  Average messages: O(log N), as the average distance between
  two nodes in the tree is O(log N).
  Synchronization delay: (T log N) / 2, as the average distance
  between two sites that successively execute the CS is
  (log N) / 2.
  Greedy variant: an intermediate site receiving the token may
  enter the CS instead of forwarding it down. This affects
  fairness and may cause starvation.

Raymond's Algorithm: Example

[Figure: sites S1-S7 arranged in a tree, with S1 the initial
token holder. Step 1: a token request propagates up the tree
towards S1. Step 2: the token travels down, with the holder
pointers reversing along the path. Step 3: the token reaches
the requesting site, which becomes the new token holder.]

Comparison
(ll = low load, hl = high load)

Non-Token        Resp. Time (ll)  Sync. Delay   Messages (ll)  Messages (hl)
Lamport          2T+E             T             3(N-1)         3(N-1)
Ricart-Agrawala  2T+E             T             2(N-1)         2(N-1)
Maekawa          2T+E             2T            3*sqrt(N)      5*sqrt(N)

Token            Resp. Time (ll)  Sync. Delay   Messages (ll)  Messages (hl)
Suzuki-Kasami    2T+E             T             N              N
Singhal          2T+E             T             N/2            N
Raymond          T(log N)+E       T(log N)/2    log(N)         4
