
COMPUTER NETWORKS

UNIT - 2: DATA LINK LAYER


UNIT II: The Data Link Layer, Access Networks, and LANs
Data Link Layer Design Issues, Error Detection and Correction, Elementary Data Link Protocols, Sliding Window Protocols (Textbook 1); Introduction to the Link Layer, Error-Detection and -Correction Techniques, Multiple Access Links and Protocols, Switched Local Area Networks, Link Virtualization: A Network as a Link Layer, Data Center Networking, Retrospective: A Day in the Life of a Web Page Request (Textbook 2)

DATA LINK LAYER


The data link layer transforms the physical layer, a raw transmission facility, to a link responsible for
node-to-node (hop-to-hop) communication. Specific responsibilities of the data link layer include framing,
addressing, flow control, error control, and media access control.

DATA LINK LAYER DESIGN ISSUES


The data link layer uses the services of the physical layer to send and receive bits over communication
channels. It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Framing.
3. Dealing with transmission errors (error control).
4. Regulating the flow of data so that slow receivers are not swamped by fast senders (flow control).

To accomplish these goals, the data link layer takes the packets it gets from the network layer and
encapsulates them into frames for transmission. Each frame contains a frame header, a payload field for holding
the packet, and a frame trailer, as illustrated in Fig.

Downloaded by PERALA BHAGYASRI


SERVICES PROVIDED TO THE NETWORK LAYER
The function of the data link layer is to provide services to the network layer. The principal service is
transferring data from the network layer on the source machine to the network layer on the destination machine.
The data link layer can be designed to offer various services. The actual services that are offered vary
from protocol to protocol. Three reasonable possibilities that we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
1. Unacknowledged connectionless service:
Unacknowledged connectionless service consists of having the source machine send independent frames
to the destination machine without having the destination machine acknowledge them. Ethernet is a good
example of a data link layer that provides this class of service. No logical connection is established beforehand
or released afterward. If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover from it in the data link layer. This class of service is appropriate when the error rate is very low, so recovery is
left to higher layers.
2. Acknowledged connectionless service:
When this service is offered, there are still no logical connections used, but each frame sent is
individually acknowledged. In this way, the sender knows whether a frame has arrived correctly or been lost. If
it has not arrived within a specified time interval, it can be sent again. This service is useful over unreliable
channels, such as wireless systems. 802.11 (WiFi) is a good example of this class of service.
3. Acknowledged connection-oriented service:
The most sophisticated service the data link layer can provide to the network layer is connection-
oriented service. With this service, the source and destination machines establish a connection before any data
are transferred. Each frame sent over the connection is numbered, and the data link layer guarantees that each
frame sent is indeed received. Furthermore, it guarantees that each frame is received exactly once and that all
frames are received in the right order.
When connection-oriented service is used, transfers go through three distinct phases. In the first phase,
the connection is established by having both sides initialize variables and counters needed to keep track of
which frames have been received and which ones have not. In the second phase, one or more frames are
actually transmitted. In the third and final phase, the connection is released, freeing up the variables, buffers,
and other resources used to maintain the connection.

FRAMING
To provide service to the network layer, the data link layer must use the service provided to it by the
physical layer. What the physical layer does is accept a raw bit stream and attempt to deliver it to the
destination. If the channel is noisy, as it is for most wireless and some wired links, the physical layer will add
some redundancy to its signals to reduce the bit error rate to a tolerable level. However, the bit stream received
by the data link layer is not guaranteed to be error free. Some bits may have different values and the number of
bits received may be less than, equal to, or more than the number of bits transmitted. It is up to the data link
layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break up the bit stream into discrete frames, compute a short token called a checksum for each frame, and include the checksum in the frame when it is transmitted.
When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum
is different from the one contained in the frame, the data link layer knows that an error has occurred and takes
steps to deal with it.
In short, the data link layer translates the physical layer's raw bit stream into discrete units (messages) called frames.
A good design must make it easy for a receiver to find the start of new frames while using little of
the channel bandwidth. We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.



1. Byte count (Character Count) :
This framing method uses a field in the header to specify the number of bytes in the frame. When the
data link layer at the destination sees the byte count, it knows how many bytes follow and hence where the end
of the frame is. This technique is shown in Fig. (a) for four small example frames of sizes 5, 5, 8, and 8 bytes,
respectively.
The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if
the byte count of 5 in the second frame of Fig.(b) becomes a 7 due to a single bit flip, the destination will get
out of synchronization. It will then be unable to locate the correct start of the next frame.
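A minimal sketch of byte-count framing in Python (the function names are mine, not from the text); the count field covers itself plus the payload, so the 5-byte frames above carry 4 data bytes each:

```python
def frame_with_count(payloads):
    """Sender side: prefix each payload with a one-byte count that includes itself."""
    out = bytearray()
    for p in payloads:
        out.append(len(p) + 1)  # count = payload bytes + the count byte itself
        out.extend(p)
    return bytes(out)

def parse_frames(stream):
    """Receiver side: split the byte stream back into payloads using the counts."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count  # one garbled count here desynchronizes every later frame
    return frames
```

If a count byte is flipped in transit, parse_frames keeps consuming from the wrong offsets, which is exactly the resynchronization failure described above.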

2. Flag bytes with byte stuffing:


This framing method gets around the problem of resynchronization after an error by having each frame
start and end with special bytes. Often the same byte, called a flag byte, is used as both the starting and ending
delimiter. This byte is shown in Fig.(a) as FLAG. Two consecutive flag bytes indicate the end of one frame and
the start of the next. Thus, if the receiver ever loses synchronization it can just search for two flag bytes to find
the end of the current frame and the start of the next frame.
However, there is still a problem we have to solve. It may happen that the flag byte occurs in the data,
especially when binary data such as photographs or songs are being transmitted. This situation would interfere
with the framing. One way to solve this problem is to have the sender’s data link layer insert a special escape
byte (ESC) just before each ‘‘accidental’’ flag byte in the data.
The data link layer on the receiving end removes the escape bytes before giving the data to the network
layer. This technique is called byte stuffing.

Four examples of byte sequences before and after byte stuffing.
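A byte-stuffing sketch along these lines; the FLAG and ESC values are illustrative (they are the delimiters used by HDLC-style framing), and the function names are mine:

```python
FLAG, ESC = 0x7E, 0x7D  # illustrative delimiter values

def byte_stuff(payload):
    """Sender side: escape accidental FLAG/ESC bytes, then add framing flags."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # stuff an escape byte before the accidental flag/escape
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_destuff(frame):
    """Receiver side: strip the framing flags and remove the escape bytes."""
    body = frame[1:-1]
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1  # skip the escape; keep the byte it protects
        out.append(body[i])
        i += 1
    return bytes(out)
```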



3. Flag bits with bit stuffing:
Framing can also be done at the bit level, so frames can contain an arbitrary number of bits made up
of units of any size. It was developed for the once very popular HDLC (High-level Data Link Control)
protocol. Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal. This
pattern is a flag byte. Whenever the sender’s data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous to byte stuffing, in
which an escape byte is stuffed into the outgoing character stream before a flag byte in the data. It also ensures
a minimum density of transitions that help the physical layer maintain synchronization. USB (Universal Serial
Bus) uses bit stuffing for this reason.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically destuffs
(i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so
is bit stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but
stored in the receiver’s memory as 01111110. Figure gives an example of bit stuffing.
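The stuffing rule can be sketched over bit strings (helper names are mine); it reproduces the 01111110 -> 011111010 example above:

```python
def bit_stuff(bits):
    """Sender side: after five consecutive 1s, stuff a 0 into the stream."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # the stuffed bit
            run = 0
    return ''.join(out)

def bit_destuff(bits):
    """Receiver side: delete the 0 that follows any five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1  # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)
```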

4. Physical layer coding violations:


• This framing method is used only in networks in which the encoding on the physical medium contains some redundancy.
• Some LANs encode each bit of data by using two physical bits, i.e., Manchester coding is used. Here, bit 1 is encoded into a high-low (10) pair and bit 0 into a low-high (01) pair.
• This scheme means that every data bit has a transition in the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not used for data but are used for delimiting frames in some protocols.
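The Manchester mapping above is small enough to sketch directly (function names are mine):

```python
def manchester_encode(bits):
    """Bit 1 -> high-low (10), bit 0 -> low-high (01); every bit gets a mid-bit transition."""
    return ''.join('10' if b == '1' else '01' for b in bits)

def manchester_decode(signal):
    """Invert the mapping two physical bits at a time."""
    pairs = (signal[i:i + 2] for i in range(0, len(signal), 2))
    return ''.join('1' if p == '10' else '0' for p in pairs)
```

Because 11 and 00 never appear inside encoded data, those pairs are free to act as frame delimiters.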

ERROR CONTROL:
Error control is concerned with ensuring that all frames are eventually delivered (possibly in order) to a
destination. How? Three items are required.
• Acknowledgements: Typically, reliable delivery is achieved using the “acknowledgments with retransmission” paradigm, whereby the receiver returns a special acknowledgment (ACK) frame to the sender indicating the correct receipt of a frame. In some systems, the receiver also returns a negative acknowledgment (NACK) for incorrectly received frames. This is nothing more than a hint to the sender so that it can retransmit a frame right away without waiting for a timer to expire.



• Timers: One problem that simple ACK/NACK schemes fail to address is recovering from a frame that is lost entirely. Retransmission timers are used to resend frames that don't produce an ACK. When sending a frame, schedule a timer to expire at some time after the ACK should have been returned. If the timer goes off, retransmit the frame.
• Sequence Numbers: Retransmissions introduce the possibility of duplicate frames. To suppress duplicates, add sequence numbers to each frame, so that a receiver can distinguish between new frames and old copies.

FLOW CONTROL:
Flow control deals with controlling the speed of the sender to match that of the receiver.
Two approaches:
 In feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing.
 In rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.

TYPES OF ERRORS:
There are two main types of errors in transmissions:
1. Single-bit error: Only one bit of the data unit is changed from 1 to 0 or from 0 to 1.

2. Burst error: Two or more bits in the data unit are changed from 1 to 0 or from 0 to 1. In a burst error, it is not necessary that only consecutive bits are changed. The length of a burst error is measured from the first changed bit to the last changed bit.

ERROR DETECTION AND CORRECTION:


Network designers have developed two basic strategies for dealing with errors. Both add redundant
information to the data that is sent. One strategy is to include enough redundant information to enable the
receiver to deduce what the transmitted data must have been. The other is to include only enough redundancy to
allow the receiver to deduce that an error has occurred and have it request a retransmission. The former strategy
uses error-correcting codes and the latter uses error-detecting codes.
Error Detecting Codes: Include enough redundancy bits to detect errors and use ACKs and
retransmissions to recover from the errors.

Error Correcting Codes: Include enough redundancy to detect and correct errors.

ERROR-DETECTING CODES:
Error detection means to decide whether the received data is correct or not without having a copy of the
original message. Error detection uses the concept of redundancy, which means adding extra bits for detecting
errors at the destination.

1. Vertical Redundancy Check(VRC):


Append a single bit at the end of the data block such that the number of 1s is even: even parity (odd parity is similar). For example:
0110011 -> 01100110
0110001 -> 01100011
VRC is also known as a parity check. It detects all odd-number errors in a data block.

The problem with parity is that it can only detect odd numbers of bit-substitution errors, i.e., 1-bit, 3-bit, 5-bit, etc., errors. If two, four, six, etc. bits are transmitted in error, VRC will not be able to detect the error.
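An even-parity sketch (function names are mine) that reproduces the two examples above and shows why a two-bit error slips through:

```python
def add_even_parity(bits):
    """Append one parity bit so the total number of 1s is even."""
    parity = '1' if bits.count('1') % 2 else '0'
    return bits + parity

def parity_ok(codeword):
    """Receiver check: an even number of 1s means no odd-bit error was detected."""
    return codeword.count('1') % 2 == 0
```

For 0110011 the appended bit is 0 (four 1s already), giving 01100110; flipping any two bits of that codeword leaves the count of 1s even, so the error goes undetected.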



2. Longitudinal Redundancy Check(LRC):

In LRC, a block of bits is organized as a table of rows and columns. A parity bit is computed for each column, and this new row of parity bits is appended to the data block. LRC improves the chance of detecting burst errors, but two bits damaged in the same positions of two different rows still go undetected.

3. Cyclic Redundancy Check(CRC):


The cyclic redundancy check, or CRC, also known as a polynomial code, is a technique for detecting errors in digital data, but not for making corrections when errors are detected.
Polynomial codes are based upon treating bit strings as representations of polynomials with coefficients of 0 and 1 only. For example, 110001 has 6 bits and thus represents a six-term polynomial with coefficients 1, 1, 0, 0, 0, and 1: 1·x^5 + 1·x^4 + 0·x^3 + 0·x^2 + 0·x^1 + 1·x^0.
Polynomial arithmetic is done modulo 2, according to the rules of algebraic field theory. It does
not have carries for addition or borrows for subtraction. Both addition and subtraction are identical
to exclusive OR. For example:
10011011 00110011 11110000 01010101
+ 11001010 + 11001101 − 10100110 − 10101111
01010001 11111110 01010110 11111010

When the polynomial code method is employed, the sender and receiver must agree upon a generator
polynomial, G(x), in advance. Both the high- and low-order bits of the generator must be 1. To compute the
CRC for some frame with m bits corresponding to the polynomial M(x), the frame must be longer than the
generator polynomial. The idea is to append a CRC to the end of the frame in such a way that the polynomial
represented by the checksummed frame is divisible by G(x). When the receiver gets the checksummed frame,
it tries dividing it by G(x). If there is a remainder, there has been a transmission error.
The algorithm for computing the CRC is as follows:
1. Let r be the degree of G(x). Append r zero bits to the low-order end of the frame so it now contains m + r bits and corresponds to the polynomial x^r·M(x).
2. Divide the bit string corresponding to G(x) into the bit string corresponding to x^r·M(x), using modulo-2 division.
3. Subtract the remainder (which is always r or fewer bits) from the bit string corresponding to x^r·M(x) using modulo-2 subtraction. The result is the checksummed frame to be transmitted. Call its polynomial T(x).
Below figure illustrates the calculation for a frame 1101011111 using the generator G(x) = x^4 + x + 1.

Example calculation of the CRC.

It should be clear that T(x) is divisible (modulo 2) by G(x). In any division problem, if you diminish the
dividend by the remainder, what is left over is divisible by the divisor.
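The division can be sketched in Python (function names are mine); run on the frame 1101011111 with G(x) = x^4 + x + 1 (bit string 10011) it yields the 4-bit remainder 0010, so the transmitted frame is 11010111110010:

```python
def crc_remainder(frame_bits, generator_bits):
    """Append r zeros and do modulo-2 long division; return the r-bit remainder."""
    r = len(generator_bits) - 1
    bits = list(frame_bits + '0' * r)          # x^r * M(x)
    for i in range(len(frame_bits)):
        if bits[i] == '1':                     # only divide where the leading bit is 1
            for j, g in enumerate(generator_bits):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))  # modulo-2 subtraction = XOR
    return ''.join(bits[-r:])

def crc_check(codeword_bits, generator_bits):
    """Receiver side: divide the received frame; a zero remainder means no error detected."""
    r = len(generator_bits) - 1
    bits = list(codeword_bits)
    for i in range(len(codeword_bits) - r):
        if bits[i] == '1':
            for j, g in enumerate(generator_bits):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return ''.join(bits[-r:]) == '0' * r
```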
Example of CRC:



At sender side calculation of CRC:

At receiver side calculation of CRC:

4. CHECKSUM:
• Checksum is the error detection scheme used in IP, TCP, and UDP.
• Here, the data is divided into k segments each of n bits. At the sender’s end the segments are added using 1’s complement arithmetic to get the sum. The sum is complemented to get the checksum. The checksum segment is sent along with the data segments.
• At the receiver’s end, all received segments are added using 1’s complement arithmetic to get the sum. The sum is complemented. If the result is zero, the received data is accepted; otherwise it is discarded.
• The checksum detects all errors involving an odd number of bits. It also detects most errors involving an even number of bits.
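The procedure can be sketched with 16-bit segments (function names are mine):

```python
MASK = 0xFFFF  # 16-bit segments

def ones_complement_sum(segments):
    """Add segments with end-around carry (1's complement arithmetic)."""
    total = 0
    for s in segments:
        total += s
        total = (total & MASK) + (total >> 16)  # wrap any carry back into the sum
    return total

def make_checksum(segments):
    """Sender: complement the 1's-complement sum."""
    return ones_complement_sum(segments) ^ MASK

def checksum_ok(segments_plus_checksum):
    """Receiver: the sum of data + checksum, complemented, must be zero."""
    return (ones_complement_sum(segments_plus_checksum) ^ MASK) == 0
```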



Checksum procedure at sender and receiver end:

Diagrammatic approach:

Example for Checksum:



Elementary Data Link Layer protocols:
Now let us see how the data link layer can combine framing, flow control, and error control to achieve
the delivery of data from one node to another. The protocols are normally implemented in software by using
one of the common programming languages.
We divide the discussion of protocols into those that can be used for noiseless (error-free) channels and
those that can be used for noisy (error-creating) channels. The protocols in the first category cannot be used in real
life, but they serve as a basis for understanding the protocols of noisy channels.

NOISELESS CHANNELS:

Let us first assume we have an ideal channel in which no frames are lost, duplicated, or corrupted. We
introduce two protocols for this type of channel. The first is a protocol that does not use flow control; the second is
the one that does. Of course, neither has error control because we have assumed that the channel is a perfect
noiseless channel.

Simplest Protocol:
Our first protocol, which we call the Simplest Protocol for lack of any other name, is one that has no flow or
error control. Like other protocols we will discuss in this chapter, it is a unidirectional protocol in which data
frames are traveling in only one direction, from the sender to the receiver. We assume that the receiver can
immediately handle any frame it receives with a processing time that is small enough to be negligible. The data
link layer of the receiver immediately removes the header from the frame and hands the data packet to its
network layer, which can also accept the packet immediately. In other words, the receiver can never be
overwhelmed with incoming frames.

Design
There is no need for flow control in this scheme. The data link layer at the sender site gets data from its network
layer, makes a frame out of the data, and sends it. The data link layer at the receiver site receives a frame from
its physical layer, extracts data from the frame, and delivers the data to its network layer. The data link layers of
the sender and receiver provide transmission services for their network layers.



Algorithms
Sender-site algorithm for the simplest protocol

Analysis The algorithm has an infinite loop, which means lines 3 to 9 are repeated forever once the program
starts. The algorithm is event-driven, which means that it sleeps (line 3) until an event wakes it up (line 4).
This means that there may be an undefined span of time between the execution of line 3 and line 4; there is a
gap between these actions. When the event, a request from the network layer, occurs, lines 6 through 8 are
executed. The program then repeats the loop and again sleeps at line 3 until the next occurrence of the event.
We have written pseudocode for the main process. We do not show any details for the modules GetData,
MakeFrame, and SendFrame.

Receiver-site algorithm for the simplest protocol

Analysis This algorithm has the same format as above Algorithm except that the direction of the frames and
data is upward. The event here is the arrival of a data frame. After the event occurs, the data link layer receives
the frame from the physical layer using the ReceiveFrame() process, extracts the data from the frame using the
ExtractData() process, and delivers the data to the network layer using the DeliverData() process. Here, we also
have an event-driven algorithm because the algorithm never knows when the data frame will arrive.

Example:
Below Figure shows an example of communication using this protocol. It is very simple. The sender sends a
sequence of frames without even thinking about the receiver. To send three frames, three events occur at the
sender site and three events at the receiver site. Note that the data frames are shown by tilted boxes; the height
of the box defines the transmission time difference between the first bit and the last bit in the frame.
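The sender and receiver sites above can be sketched as event loops over a shared channel (a minimal Python stand-in with my own names; the pseudocode's GetData/MakeFrame/SendFrame modules collapse to one line each):

```python
from collections import deque

channel = deque()  # stands in for the physical layer; assumed perfect (noiseless)

def sender_site(packets):
    """Simplest protocol sender: frame each network-layer packet and send it."""
    for packet in packets:       # each request from the network layer is an event
        frame = ('HDR', packet)  # MakeFrame: attach a header
        channel.append(frame)    # SendFrame

def receiver_site():
    """Simplest protocol receiver: extract the data and deliver it upward."""
    delivered = []
    while channel:                        # each frame arrival is an event
        header, data = channel.popleft()  # ReceiveFrame
        delivered.append(data)            # ExtractData + DeliverData
    return delivered
```

No flow or error control appears anywhere: the sender transmits without ever consulting the receiver.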



Stop-and-Wait Protocol:
If data frames arrive at the receiver site faster than they can be processed, the frames must be stored
until their use. Normally, the receiver does not have enough storage space, especially if it is receiving data from
many sources. This may result in either the discarding of frames or denial of service. To prevent the receiver
from becoming overwhelmed with frames, we somehow need to tell the sender to slow down. There must be
feedback from the receiver to the sender.
The protocol we discuss now is called the Stop-and-Wait Protocol because the sender sends one frame,
stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We still
have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of
acknowledgment) travel from the other direction. We add flow control to our previous protocol.

Design
We can see the traffic on the forward channel (from sender to receiver) and the reverse channel. At any time,
there is either one data frame on the forward channel or one ACK frame on the reverse channel. We
therefore need a half-duplex link.

Algorithms
Sender-site algorithm for Stop-and- Wait Protocol



Analysis Here two events can occur: a request from the network layer or an arrival notification from the
physical layer. The responses to these events must alternate. In other words, after a frame is sent, the algorithm
must ignore another network layer request until that frame is acknowledged. We know that two arrival events
cannot happen one after another because the channel is error-free and does not duplicate the frames. The
requests from the network layer, however, may happen one after another without an arrival event in between.
We need somehow to prevent the immediate sending of the data frame. Although there are several methods, we
have used a simple canSend variable that can either be true or false. When a frame is sent, the variable is set to
false to indicate that a new network request cannot be sent until canSend is true. When an ACK is received,
canSend is set to true to allow the sending of the next frame.

Receiver-site algorithm for Stop-and-Wait Protocol

Analysis This is very similar to above Algorithm with one exception. After the data frame arrives, the receiver
sends an ACK frame (line 9) to acknowledge the receipt and allow the sender to send the next frame.
Example:
Below Figure shows an example of communication using this protocol. It is still very simple. The sender sends
one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame.
Note that sending two frames in the protocol involves the sender in four events and the receiver in two events.
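The canSend gate that drives this behavior can be sketched as follows (class and attribute names are mine); a new frame goes out only after the previous one has been acknowledged:

```python
class StopAndWaitSender:
    """Stop-and-Wait sender sketch: at most one frame in flight, gated by canSend."""
    def __init__(self, channel):
        self.channel = channel  # list standing in for the forward link
        self.can_send = True
        self.waiting = []       # network-layer requests that arrived too early

    def network_request(self, packet):
        if self.can_send:
            self.channel.append(packet)  # send the frame
            self.can_send = False        # block further sends until the ACK event
        else:
            self.waiting.append(packet)

    def ack_arrival(self):
        self.can_send = True
        if self.waiting:                 # serve the next queued request, if any
            self.network_request(self.waiting.pop(0))
```

Two frames therefore cost the sender four events (two requests, two ACK arrivals), matching the count in the example.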



NOISY CHANNELS:

Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its predecessor,
noiseless channels are nonexistent. We can ignore the error (as we sometimes do), or we need to add error
control to our protocols. We discuss three protocols in this section that use error control.

Stop-and-Wait Automatic Repeat Request:


Our first protocol, called the Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ), adds a
simple error control mechanism to the Stop-and-Wait Protocol. Let us see how this protocol detects and
corrects errors.
To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the
frame arrives at the receiver site, it is checked and if it is corrupted, it is silently discarded. The detection of
errors in this protocol is manifested by the silence of the receiver.
Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no
way to identify a frame. The received frame could be the correct one, or a duplicate, or a frame out of order.
The solution is to number the frames. When the receiver receives a data frame that is out of order, this means
that frames were either lost or duplicated.
The lost frames need to be resent in this protocol. If the receiver does not respond when there is an error,
how can the sender know which frame to resend? To remedy this problem, the sender keeps a copy of the sent
frame. At the same time, it starts a timer. If the timer expires and there is no ACK for the sent frame, the frame
is resent, the copy is held, and the timer is restarted. Since the protocol uses the stop-and-wait mechanism, there
is only one specific frame that needs an ACK even though several copies of the same frame can be in the
network.

Sequence Numbers
As we discussed, the protocol specifies that frames need to be numbered. This is done by using
sequence numbers. A field is added to the data frame to hold the sequence number of that frame.
One important consideration is the range of the sequence numbers. Since we want to minimize the frame
size, we look for the smallest range that provides unambiguous communication. The sequence numbers of
course can wrap around. For example, if we decide that the field is m bits long, the sequence numbers start from 0, go to 2^m - 1, and then are repeated.
Acknowledgment Numbers
Since the sequence numbers must be suitable for both data frames and ACK frames, we use this
convention: The acknowledgment numbers always announce the sequence number of the next frame expected
by the receiver. For example, if frame 0 has arrived safe and sound, the receiver sends an ACK frame with
acknowledgment 1 (meaning frame 1 is expected next). If frame 1 has arrived safe and sound, the receiver
sends an ACK frame with acknowledgment 0 (meaning frame 0 is expected).

Design
Below Figure shows the design of the Stop-and-Wait ARQ Protocol. The sending device keeps a copy of
the last frame transmitted until it receives an acknowledgment for that frame. A data frame uses a seqNo
(sequence number); an ACK frame uses an ackNo (acknowledgment number). The sender has a control
variable, which we call Sn (sender, next frame to send), that holds the sequence number for the next frame to be
sent (0 or 1).
The receiver has a control variable, which we call Rn (receiver, next frame expected), that holds the
number of the next frame expected. When a frame is sent, the value of Sn is incremented (modulo-2), which
means if it is 0, it becomes 1 and vice versa. When a frame is received, the value of Rn is incremented (modulo-
2), which means if it is 0, it becomes 1 and vice versa. Three events can happen at the sender site; one event can
happen at the receiver site. Variable Sn points to the slot that matches the sequence number of the frame that
has been sent, but not acknowledged; Rn points to the slot that matches the sequence number of the expected
frame.



Algorithms
Sender-site algorithm for Stop-and- Wait ARQ



Analysis We first notice the presence of Sn, the sequence number of the next frame to be sent. This variable is
initialized once (line 1), but it is incremented every time a frame is sent (line 13) in preparation for the next
frame. However, since this is modulo-2 arithmetic, the sequence numbers are 0, 1, 0, 1, and so on. Note that the
processes in the first event (SendFrame, StoreFrame, and PurgeFrame) use an Sn defining the frame sent out.
We need at least one buffer to hold this frame until we are sure that it is received safe and sound. Line 10 shows
that before the frame is sent, it is stored. The copy is used for resending a corrupt or lost frame. We are still
using the canSend variable to prevent the network layer from making a request before the previous frame is
received safe and sound. If the frame is not corrupted and the ackNo of the ACK frame matches the sequence
number of the next frame to send, we stop the timer and purge the copy of the data frame we saved. Otherwise,
we just ignore this event and wait for the next event to happen. After each frame is sent, a timer is started.
When the timer expires (line 28), the frame is resent and the timer is restarted.

Receiver-site algorithm for Stop-and-Wait ARQ Protocol

Analysis This is noticeably different from Algorithm 11.4. First, all arrived data frames that are corrupted are
ignored. If the seqNo of the frame is the one that is expected (Rn ), the frame is accepted, the data are delivered
to the network layer, and the value of Rn is incremented. However, there is one subtle point here. Even if the
sequence number of the data frame does not match the next frame expected, an ACK is sent to the sender. This
ACK, however, just reconfirms the previous ACK instead of confirming the frame received. This is done
because the receiver assumes that the previous ACK might have been lost; the sender is resending a duplicate
frame. The resent ACK may solve the problem before the time-out does.

Efficiency
The Stop-and-Wait ARQ discussed in the previous section is very inefficient if our channel is thick and
long. By thick, we mean that our channel has a large bandwidth; by long, we mean the round-trip delay is long.
The product of these two is called the bandwidth delay product, as we discussed in Chapter 3. We can think of
the channel as a pipe. The bandwidth-delay product then is the volume of the pipe in bits. The pipe is always
there. If we do not use it, we are inefficient. The bandwidth-delay product is a measure of the number of bits we
can send out of our system while waiting for news from the receiver.
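A quick utilization sketch (the numbers are illustrative, not from the text): on a 1 Mbps link with a 20 ms round-trip time, a 1000-bit frame occupies the sender for only 1 ms out of every 21 ms:

```python
def stop_and_wait_utilization(bandwidth_bps, rtt_s, frame_bits):
    """Fraction of the link actually used: one frame per (transmit time + RTT)."""
    tx_time = frame_bits / bandwidth_bps
    return tx_time / (tx_time + rtt_s)

def bandwidth_delay_product(bandwidth_bps, rtt_s):
    """Bits the 'pipe' could hold while we wait for news from the receiver."""
    return bandwidth_bps * rtt_s
```

Here utilization is about 4.8%, while the pipe could hold 20,000 bits; pipelined protocols such as Go-Back-N exist to claim that wasted capacity.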

Example
Below Figure shows an example of Stop-and-Wait ARQ. Frame 0 is sent and acknowledged. Frame 1 is
lost and resent after the time-out. The resent frame 1 is acknowledged and the timer stops. Frame 0 is sent and



acknowledged, but the acknowledgment is lost. The sender has no idea if the frame or the acknowledgment is
lost, so after the time-out, it resends frame 0, which is acknowledged.

Pipelining
In networking and in other areas, a task is often begun before the previous task has ended. This is
known as pipelining. There is no pipelining in Stop-and-Wait ARQ because we need to wait for a frame to
reach the destination and be acknowledged before the next frame can be sent. However, pipelining does apply
to our next two protocols because several frames can be sent before we receive news about the previous frames.
Pipelining improves the efficiency of the transmission if the number of bits in transit is large with respect to
the bandwidth-delay product.

Go-Back-N Automatic Repeat Request:


To improve the efficiency of transmission (filling the pipe), multiple frames must be in transit while
waiting for acknowledgment. In other words, we need to let more than one frame be outstanding to keep the
channel busy while the sender is waiting for acknowledgment.
In the Go-Back-N Automatic Repeat Request protocol, we can send several frames before receiving
acknowledgments; we keep a copy of these frames until the acknowledgments arrive.

Sequence Numbers
Frames from a sending station are numbered sequentially. However, because we need to include the
sequence number of each frame in the header, we need to set a limit. If the header of the frame allows m bits for
the sequence number, the sequence numbers range from 0 to 2^m - 1. For example, if m is 4, the only sequence
numbers are 0 through 15 inclusive. However, we can repeat the sequence, so the sequence numbers are
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, ...
In other words, the sequence numbers are modulo-2^m.
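As a tiny illustration, the wrap-around of sequence numbers is just a modulo operation:

```python
# With m bits for the sequence-number field, numbers wrap modulo 2^m.
m = 4
seq = [i % 2**m for i in range(20)]
print(seq)   # 0 through 15, then 0, 1, 2, 3
```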

Sliding Window
In this protocol the sliding window is an abstract concept that defines the range of sequence numbers
that is the concern of the sender and receiver. In other words, the sender and receiver need to deal with only
part of the possible sequence numbers. The range which is the concern of the sender is called the send sliding
window; the range that is the concern of the receiver is called the receive sliding window.
The send window is an imaginary box covering the sequence numbers of the data frames that can be
in transit. In each window position, some of these sequence numbers define the frames that have been sent;
others define those that can be sent. The maximum size of the window is 2^m - 1; we let the size be fixed and set
to the maximum value. Below figure a shows a sliding window of size 15 (m = 4).
The window at any time divides the possible sequence numbers into four regions. The first region, from
the far left to the left wall of the window, defines the sequence numbers belonging to frames that are already
acknowledged. The sender does not worry about these frames and keeps no copies of them. The second region,
colored in Figure a, defines the range of sequence numbers belonging to the frames that are sent and have an
unknown status. The sender needs to wait to find out if these frames have been received or were lost. We call
these outstanding frames. The third range, white in the figure, defines the range of sequence numbers for frames
that can be sent; however, the corresponding data packets have not yet been received from the network layer.
Finally, the fourth region defines sequence numbers that cannot be used until the window slides, as we see next.
Below Figure b shows how a send window can slide one or more slots to the right when an
acknowledgment arrives from the other end. As we will see shortly, the acknowledgments in this protocol are
cumulative, meaning that more than one frame can be acknowledged by an ACK frame. In Figure b, frames 0,
1, and 2 are acknowledged, so the window has slid to the right three slots. Note that the value of Sf is 3 because
frame 3 is now the first outstanding frame.
The window itself is an abstraction; three variables define its size and location at any time. We call
these variables Sf(send window, the first outstanding frame), Sn (send window, the next frame to be sent), and
Ssize (send window, size). The variable Sf defines the sequence number of the first (oldest) outstanding frame.
The variable Sn holds the sequence number that will be assigned to the next frame to be sent. Finally, the
variable Ssize defines the size of the window, which is fixed in our protocol.
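The bookkeeping with Sf, Sn, and Ssize can be sketched in a few lines. This is an illustrative sketch, not the textbook's algorithm; the class and method names are invented:

```python
# Minimal Go-Back-N send-window bookkeeping, mirroring the variables
# Sf (first outstanding), Sn (next to send), and Ssize from the text.

class SendWindow:
    def __init__(self, m):
        self.modulus = 2 ** m
        self.size = 2 ** m - 1        # maximum Go-Back-N window size
        self.s_f = 0                  # first (oldest) outstanding frame
        self.s_n = 0                  # next frame to be sent

    def outstanding(self):
        return (self.s_n - self.s_f) % self.modulus

    def can_send(self):
        return self.outstanding() < self.size

    def send(self):
        assert self.can_send()
        seq, self.s_n = self.s_n, (self.s_n + 1) % self.modulus
        return seq

    def ack(self, ack_no):
        # Cumulative ACK: slide Sf forward until it reaches ack_no.
        while self.s_f != ack_no:
            self.s_f = (self.s_f + 1) % self.modulus

w = SendWindow(m=4)
sent = [w.send() for _ in range(4)]   # frames 0, 1, 2, 3 outstanding
w.ack(3)                              # ACK 3 confirms frames 0, 1, and 2
print(sent, w.s_f, w.outstanding())
```

After the cumulative ACK, Sf is 3 and only frame 3 remains outstanding, matching the sliding shown in Figure b.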

The receive window makes sure that the correct data frames are received and that the correct
acknowledgments are sent. The size of the receive window is always 1. The receiver is always looking for the
arrival of a specific frame. Any frame arriving out of order is discarded and needs to be resent. Below figure
shows the receive window.
We need only one variable Rn (receive window, next frame expected) to define this abstraction. The
sequence numbers to the left of the window belong to the frames already received and acknowledged; the
sequence numbers to the right of this window define the frames that cannot be received. Any received frame
with a sequence number in these two regions is discarded. Only a frame with a sequence number matching the
value of Rn is accepted and acknowledged.

Timers
Although there can be a timer for each frame that is sent, in our protocol we use only one. The reason is that the
timer for the first outstanding frame always expires first; we send all outstanding frames when this timer
expires.

Acknowledgment
The receiver sends a positive acknowledgment if a frame has arrived safe and sound and in order. If a frame is
damaged or is received out of order, the receiver is silent and will discard all subsequent frames until it receives
the one it is expecting. The silence of the receiver causes the timer of the unacknowledged frame at the sender
site to expire. This, in turn, causes the sender to go back and resend all frames, beginning with the one with the
expired timer. The receiver does not have to acknowledge each frame received. It can send one cumulative
acknowledgment for several frames.

Resending a Frame
When the timer expires, the sender resends all outstanding frames. For example, suppose the sender has already
sent frame 6, but the timer for frame 3 expires. This means that frame 3 has not been acknowledged; the sender
goes back and sends frames 3, 4, 5, and 6 again. That is why the protocol is called Go-Back-N ARQ.
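The set of frames to resend is simply everything from the first outstanding frame up to (but not including) the next one to be sent, computed modulo 2^m. A small sketch (function name and modulus are illustrative assumptions):

```python
# When the timer expires, Go-Back-N resends every outstanding frame:
# from the first unacknowledged one up to the last one sent.

MODULUS = 16   # 2^m with m = 4 (assumed)

def frames_to_resend(s_f, s_n):
    """Sequence numbers of all outstanding frames in [s_f, s_n)."""
    out = []
    i = s_f
    while i != s_n:
        out.append(i)
        i = (i + 1) % MODULUS
    return out

# Sender has sent up to frame 6 when frame 3's timer expires:
print(frames_to_resend(s_f=3, s_n=7))   # [3, 4, 5, 6]
```

The modulo arithmetic also handles the wrap-around case, e.g. outstanding frames 14, 15, 0, 1.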

Design
Below Figure shows the design for this protocol. As we can see, multiple frames can be in transit in the forward
direction, and multiple acknowledgments in the reverse direction. The idea is similar to Stop-and-Wait ARQ;
the difference is that the send window allows us to have as many frames in transit as there are slots in the
send window.

Algorithms
Go-Back-N sender algorithm

Analysis This algorithm first initializes three variables. Unlike Stop-and-Wait ARQ, this protocol allows several
requests from the network layer without the need for other events to occur; we just need to be sure that the
window is not full (line 12). In our approach, if the window is full, the request is just ignored and the network
layer needs to try again. Some implementations use other methods such as enabling or disabling the network
layer. The handling of the arrival event is more complex than in the previous protocol. If we receive a corrupted
ACK, we ignore it.

Analysis This algorithm is simple. We ignore a corrupt or out-of-order frame. If a frame arrives with the
expected sequence number, we deliver the data, update the value of Rn, and send an ACK with the ackNo
showing the next frame expected.

Example
Below Figure shows an example of Go-Back-N. This is an example of a case where the forward channel
is reliable, but the reverse is not. No data frames are lost, but some ACKs are delayed and one is lost. The
example also shows how cumulative acknowledgments can help if acknowledgments are delayed or lost.
After initialization, there are seven sender events. Request events are triggered by data from the network
layer; arrival events are triggered by acknowledgments from the physical layer. There is no time-out event here
because all outstanding frames are acknowledged before the timer expires. Note that although ACK 2 is lost,
ACK 3 serves as both ACK 2 and ACK 3.

Selective Repeat Automatic Repeat Request:
Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps track of only one
variable, and there is no need to buffer out-of-order frames; they are simply discarded. However, this protocol
is very inefficient for a noisy link. In a noisy link a frame has a higher probability of damage, which means the
resending of multiple frames. This resending uses up the bandwidth and slows down the transmission. For noisy
links, there is another mechanism that does not resend N frames when just one frame is damaged; only the
damaged frame is resent. This mechanism is called Selective Repeat ARQ.
Windows
The Selective Repeat Protocol also uses two windows: a send window and a receive window. However, there
are differences between the windows in this protocol and the ones in Go-Back-N. First, the size of the send window
is much smaller; it is 2^(m-1). The reason for this will be discussed later. Second, the receive window
is the same size as the send window. The send window maximum size can be 2^(m-1). For example, if m = 4, the
sequence numbers go from 0 to 15, but the size of the window is just 8.

The receive window in Selective Repeat is totally different from the one in Go-Back-N. First, the size of
the receive window is the same as the size of the send window (2^(m-1)). The Selective Repeat Protocol allows as
many frames as the size of the receive window to arrive out of order and be kept until there is a set of in-order
frames to be delivered to the network layer. Because the sizes of the send window and receive window are the
same, all the frames in the send window can arrive out of order and be stored until they can be delivered.

Design

Algorithms
Sender-side Selective Repeat algorithm

Analysis The handling of the request event is similar to that of the previous protocol except that one timer is
started for each frame sent. The arrival event is more complicated here. An ACK or a NAK frame may arrive. If
a valid NAK frame arrives, we just resend the corresponding frame. If a valid ACK arrives, we use a loop to
purge the buffers, stop the corresponding timer, and move the left wall of the window. The time-out event is
simpler here; only the frame which times out is resent.

Receiver-site Selective Repeat algorithm

Analysis Here we need more initialization. In order not to overwhelm the other side with NAKs, we use a
variable called NakSent. To know when we need to send an ACK, we use a variable called AckNeeded. Both
of these are initialized to false. We also use a set of variables to mark the slots in the receive window once the
corresponding frame has arrived and is stored. If we receive a corrupted frame and a NAK has not yet been
sent, we send a NAK to tell the other site that we have not received the frame we expected. If the frame is not
corrupted and the sequence number is in the window, we store the frame and mark the slot. If contiguous
frames, starting from Rn have been marked, we deliver their data to the network layer and slide the window.
Below Figure shows this situation.
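The mark-and-deliver behavior of the Selective Repeat receiver can be sketched as below. This is an illustrative sketch under assumed parameters (m = 4, so a window of 8), not the textbook's receiver algorithm; NAK handling is omitted:

```python
# Selective Repeat receiver sketch: out-of-order frames inside the window
# are buffered (the slot is "marked"); once contiguous frames starting at
# Rn exist, they are delivered and the window slides.

MODULUS = 16        # 2^m with m = 4 (assumed)
WINDOW = 8          # receive window size 2^(m-1)

buffer = {}         # seq_no -> data for marked slots
r_n = 0             # next frame expected
delivered = []      # data handed to the network layer, in order

def in_window(seq):
    return (seq - r_n) % MODULUS < WINDOW

def receive(seq, data):
    global r_n
    if not in_window(seq):
        return                      # outside the window: discarded
    buffer[seq] = data              # store the frame, mark the slot
    while r_n in buffer:            # deliver the contiguous run from Rn
        delivered.append(buffer.pop(r_n))
        r_n = (r_n + 1) % MODULUS

receive(1, "b")     # out of order: buffered, nothing delivered yet
receive(0, "a")     # fills the gap: frames 0 and 1 delivered together
print(delivered, r_n)
```

Unlike Go-Back-N, frame 1 is not discarded while frame 0 is missing; it waits in the buffer and both are delivered in order once frame 0 arrives.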

Example
This example is similar to the Go-Back-N example in which frame 1 is lost. We show how Selective Repeat behaves
in this case. Below Figure shows the situation.

Piggybacking:
The three protocols we discussed in this section are all unidirectional: data frames flow in
only one direction although control information such as ACK and NAK frames can travel in the
other direction. In real life, data frames are normally flowing in both directions: from node A to
node B and from node B to node A. This means that the control information also needs to flow in
both directions. A technique called piggybacking is used to improve the efficiency of the
bidirectional protocols. When a frame is carrying data from A to B, it can also carry control
information about arrived (or lost) frames from B; when a frame is carrying data from B to A, it
can also carry control information about the arrived (or lost) frames from A.
We show the design for a Go-Back-N ARQ using piggybacking in below Figure. Note
that each node now has two windows: one send window and one receive window. Both also need
to use a timer. Both are involved in three types of events: request, arrival, and time-out.
However, the arrival event here is complicated; when a frame arrives, the site needs to handle
control information as well as the frame itself. Both of these concerns must be taken care of in
one event, the arrival event. The request event uses only the send window at each site; the arrival
event needs to use both windows.

Multiple access protocol- ALOHA, CSMA,
CSMA/CA and CSMA/CD
The data link layer is used in a computer network to transmit data between two devices or nodes. It is
divided into two sublayers: data link control and multiple access resolution/protocol. The upper
sublayer is responsible for flow control and error control in the data link layer, and hence it is termed
logical link control. The lower sublayer is used to handle and reduce collisions from multiple access on a
channel; hence it is termed media access control or multiple access resolution.

Data Link Control


A data link control is a reliable channel for transmitting data over a dedicated link using various
techniques such as framing, error control and flow control of data packets in the computer network.

What is a multiple access protocol?


When a sender and receiver have a dedicated link to transmit data
packets, data link control is enough to handle the channel. But when
there is no dedicated path between two devices, multiple stations
access the channel and may transmit data over it simultaneously,
which can create collisions and crosstalk. Hence, a multiple access
protocol is required to reduce collisions and avoid crosstalk
between the channels.

For example, suppose that there is a classroom full of students. When a


teacher asks a question, all the students (small channels) in the class
start answering at the same time (transferring data simultaneously).
Because they all respond at once, the answers overlap and data is
lost. Therefore it is the responsibility of the teacher (the multiple
access protocol) to manage the students and make them answer one
at a time.

Following are the types of multiple access protocols, subdivided into
different processes:
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over
the channel. In a random access protocol, no station depends on
another station, and no station controls another. Depending on the
channel's state (idle or busy), each station transmits its data frame.
However, if more than one station sends data over the channel at the
same time, there may be a collision or data conflict. Due to the
collision, the data frame packets may be lost or corrupted, and hence
they are not received by the receiver.

Following are the different methods of random-access protocols for


broadcasting frames on the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also
be used in a shared medium to transmit data. Using this method, any
station can transmit data across the network at any time a data
frame is available for transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data
through multiple stations.
4. Aloha relies on acknowledgment of the frames; there is no
collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

Whenever data is available for sending over a channel at stations, we


use Pure Aloha. In pure Aloha, each station transmits data to the
channel without checking whether the channel is idle or not, so
collisions may occur and the data frame can be lost. After
transmitting a data frame, the station waits for the receiver's
acknowledgment. If no acknowledgment arrives within the specified
time, the station assumes the frame has been lost or destroyed, waits
for a random amount of time, called the backoff time (Tb), and
retransmits the frame. It repeats this until the data is successfully
delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.

2. Maximum throughput occurs when G = 1/2; it is 18.4%.
3. The throughput (probability of successful transmission) is S = G * e^(-2G).
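The maximum in rule 2 follows from rule 3: S = G * e^(-2G) peaks at G = 1/2. A quick check:

```python
import math

# Pure Aloha throughput S = G * e^(-2G); the maximum is at G = 1/2.
def pure_aloha_S(G):
    return G * math.exp(-2 * G)

S_max = pure_aloha_S(0.5)
print(f"{S_max:.3f}")   # 0.184, i.e. 18.4%
```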

As we can see in the figure above, there are four stations accessing a
shared channel and transmitting data frames. Some frames collide
because most stations send their frames at the same time. Only two
frames, frame 1.1 and frame 2.2, are successfully transmitted to the
receiver end; the other frames are lost or destroyed. Whenever two
frames overlap on the shared channel, even partially, a collision
occurs and both suffer damage: if the first bit of a new frame enters
the channel before the last bit of an almost-finished frame has left it,
both frames are destroyed, and both stations must retransmit their
data frames.

Slotted Aloha

Slotted Aloha was designed to overcome the inefficiency of pure
Aloha, because pure Aloha has a very high probability of frame
collision. In slotted Aloha, the shared channel is divided into fixed
time intervals called slots. If a station wants to send a frame on the
shared channel, it can send the frame only at the beginning of a slot,
and only one frame may be sent in each slot. If a station misses the
beginning of a slot, it must wait until the beginning of the next slot.
However, a collision can still occur if two or more stations try to
send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1; it is about 37% (36.8%).

2. The probability of successfully transmitting a data frame in
slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
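Halving the vulnerable time changes the exponent from -2G to -G, which doubles the peak throughput relative to pure Aloha:

```python
import math

# Slotted Aloha throughput S = G * e^(-G); the maximum is at G = 1.
def slotted_aloha_S(G):
    return G * math.exp(-G)

print(f"{slotted_aloha_S(1.0):.3f}")   # 0.368, i.e. about 37%
```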

CSMA (Carrier Sense Multiple Access)

Carrier sense multiple access (CSMA) is a media access protocol that
senses the traffic on a channel (idle or busy) before transmitting
data. If the channel is idle, the station can send data on the channel;
otherwise, it must wait until the channel becomes idle. Hence, it
reduces the chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first
senses the shared channel; if the channel is idle, it immediately
sends the data. Otherwise, it keeps sensing the channel continuously
until it becomes idle, and then broadcasts the frame unconditionally
as soon as the channel is idle.

Non-Persistent: In this mode of CSMA, each node senses the channel before transmitting.
If the channel is idle, it immediately sends the data; otherwise, the station waits a
random amount of time (it does not sense continuously), and when the channel is then
found idle, it transmits the frame.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In
p-persistent mode, each node senses the channel, and if the channel is idle, it sends
a frame with probability p. With probability q = 1 - p, it does not transmit; instead
it waits for the next time slot and repeats the process.
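The transmit-with-probability-p decision can be sketched as below. The slot and channel model here is an illustrative assumption (the channel is taken to stay idle), not a full CSMA simulation:

```python
import random

# p-persistent decision sketch: on each idle slot, transmit with
# probability p; with probability q = 1 - p, wait for the next slot
# and sense again.

def slots_waited_before_transmit(p, rng):
    waited = 0
    while rng.random() >= p:   # with probability q = 1 - p: defer one slot
        waited += 1
    return waited

rng = random.Random(42)
waits = [slots_waited_before_transmit(0.5, rng) for _ in range(1000)]
print(sum(waits) / len(waits))   # mean wait approaches q/p slots
```

With p = 0.5 the average deferral is about one slot; smaller p spreads stations out more and lowers the collision risk at the cost of delay.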

O-Persistent: In the O-persistent method, a transmission order
(priority) is assigned to each station before transmission begins on
the shared channel. If the channel is found inactive, each station
waits for its assigned turn to transmit the data.

CSMA/ CD

CSMA/CD (carrier sense multiple access with collision detection) is a
network protocol for transmitting data frames. The CSMA/CD protocol
works within the medium access control layer. It first senses the
shared channel before broadcasting a frame, and if the
channel is idle, it transmits the frame and then checks whether the
transmission was successful. If the frame is successfully received,
the station can send the next frame. If a collision is detected, the
station sends a jam/stop signal on the shared channel to terminate the
data transmission. After that, it waits for a random time before
attempting to send a frame on the channel again.
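In classic Ethernet, that "random time" is chosen by binary exponential backoff. The sketch below follows the standard Ethernet rule (the cap of 10 doublings comes from the Ethernet standard, not from the text above):

```python
import random

# Binary exponential backoff: after the k-th consecutive collision,
# wait a random number of slot times drawn from 0 .. 2^min(k,10) - 1.

def backoff_slots(collisions, rng):
    k = min(collisions, 10)
    return rng.randrange(2 ** k)   # uniform in 0 .. 2^k - 1

rng = random.Random(0)
print([backoff_slots(k, rng) for k in (1, 2, 3)])
```

Doubling the range after each collision quickly spreads contending stations apart without requiring any coordination between them.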

CSMA/ CA

CSMA/CA (carrier sense multiple access with collision avoidance) is a
network protocol for transmitting data frames. It works within the
medium access control layer. When a data frame is sent on the
channel, the sender waits for an acknowledgment to check whether the
transmission was clear. If the station receives only a single (its
own) signal, the data frame has been successfully transmitted to the
receiver. But if it detects two signals (its own and another
station's), a collision of frames has occurred on the shared channel.
In other words, the sender infers a collision when it fails to receive
the acknowledgment signal.

Following are the methods used in CSMA/CA to avoid collisions:

Interframe space: In this method, the station waits for the channel
to become idle, and if it finds the channel idle, it does not
immediately send the data. Instead, it waits for a period of time
called the interframe space, or IFS. The IFS time is also often used
to define the priority of a station.

Contention window: In the Contention window, the total time is


divided into slots. When the station/sender is ready to transmit the
data frame, it chooses a random number of slots as its wait time. If
the channel becomes busy during the wait, the station does not restart
the entire process; it pauses the timer and resumes it when the
channel becomes idle again, sending the data packets once the count
reaches zero.

Acknowledgment: In the acknowledgment method, the sender

retransmits the data frame if an acknowledgment is not received
before the time-out.

B. Controlled Access Protocol


Controlled access is a method of reducing data frame collisions on a
shared channel. In the controlled access method, the stations consult
one another, and a particular station is approved to send a data
frame by all the other stations. It means that a single station cannot send the

data frames unless it has been approved by all the other stations. There are three
types of controlled access: Reservation, Polling, and Token
Passing.

In the Controlled access technique, all stations need to consult with one
another in order to find out which station has the right to send the data.

 Controlled access protocols grant permission to send to only

one node at a time, thus avoiding collisions on the shared
medium.
 No station can send the data unless it has been authorized by the
other stations.

The protocols that lie under the category of controlled access are as
follows:

1. Reservation
2. Polling
3. Token Passing

Let us discuss each protocol one by one:

1. Reservation

In this method, a station needs to make a reservation before sending the


data.

 Time is mainly divided into intervals.


 Also, in each interval, a reservation frame precedes the data
frame that is sent in that interval.

 Suppose if there are 'N' stations in the system in that case there are
exactly 'N' reservation minislots in the reservation frame; where each
minislot belongs to a station.
 Whenever a station needs to send the data frame, then the station
makes a reservation in its own minislot.
 Then the stations that have made reservations can send their data
after the reservation frame.
 Example: Let us take 5 stations and a 5-minislot reservation
frame. In the first interval, stations 2, 3 and 5 have made
reservations, while in the second interval only station 2 has made a
reservation.
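The example above can be sketched as a minislot bitmap. The function and station numbering (1..N) are illustrative assumptions:

```python
# Reservation-access sketch: an N-minislot reservation frame precedes
# the data in each interval; station i sets minislot i to reserve.

N = 5

def reservation_frame(requesting_stations):
    """Return the minislot bitmap for stations numbered 1..N."""
    return [1 if s in requesting_stations else 0 for s in range(1, N + 1)]

interval1 = reservation_frame({2, 3, 5})   # stations 2, 3 and 5 reserve
interval2 = reservation_frame({2})         # only station 2 reserves
print(interval1, interval2)   # [0, 1, 1, 0, 1] and [0, 1, 0, 0, 0]
```

Each station owns exactly one minislot, so setting it can never collide with another station's reservation.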

2. Polling

The polling method works with topologies in which one device is
designated as the primary station and the other devices are
designated as secondary stations.

 All the exchange of data must be made through the primary device even
though the final destination is the secondary device.
 To impose order on a network of independent users, one station is
established as a controller that periodically polls all the other
stations; this is simply referred to as polling.

 The Primary device mainly controls the link while the secondary device
follows the instructions of the primary device.
 The responsibility is on the primary device in order to determine which
device is allowed to use the channel at a given time.
 Therefore the primary device is always an initiator of the session.

Poll Function

If the primary device wants to receive data, it asks the secondary
devices whether they have anything to send. This is commonly known
as the poll function.

 There is a poll function that is mainly used by the primary


devices in order to solicit transmissions from the secondary
devices.
 When the primary device is ready to receive the data then it must
ask(poll) each secondary device in turn if it has anything to send.
 If the secondary device has data to transmit then it sends the data
frame, otherwise, it sends a negative acknowledgment (NAK).
 In case of a negative response, the primary then polls the next
secondary in the same manner, until it finds one with data to
send. When the primary device receives a positive response (a data
frame), it reads the frame and returns an acknowledgment (ACK)
frame.

Select Function

If the primary device wants to send data, it tells the secondary
device to get ready to receive it. This is commonly known as the
select function.

 Thus the select function is used by the primary device when it has
something to send.
 As noted above, the primary
device always controls the link.
 Before sending the data frame, a select (SEL ) frame is created and
transmitted by the primary device, and one field of the SEL frame
includes the address of the intended secondary.

 The primary device alerts the secondary device to the
upcoming transmission and then waits for an
acknowledgment (ACK) from the secondary device.

Advantages of Polling

Given below are some benefits of the Polling technique:

1. The minimum and maximum access times and data rates on


the channel are predictable and fixed.
2. Priority can be assigned to ensure faster access for some
secondaries.

Drawbacks

There are some cons of the polling method and these are as follows:

 There is a high dependency on the reliability of the controller
 The increase in the turnaround time leads to the reduction of the data
rate of the channel under low loads.

3. Token Passing

In the token passing methods, all the stations are organized in the form of
a logical ring. We can also say that for each station there is a predecessor
and a successor.

 The predecessor is the station that is logically before the station in the
ring; while the successor is the station that is after the station in the
ring. The station that is accessing the channel now is the current
station.
 Basically, a special bit pattern or a small message that circulates from
one station to the next station in some predefined order is commonly
known as a token.
 Possessing the token mainly gives the station the right to access the
channel and to send its data.
 When any station has some data to send, then it waits until it receives a
token from its predecessor. After receiving the token, it holds it and
then sends its data. When a station has no more data to send, it
releases the token and passes it to the next logical station in the
ring.
 Also, the station cannot send the data until it receives the token again
in the next round.
 In Token passing, when a station receives the token and has no data to
send then it just passes the token to the next station.
 The problem that occurs due to the Token passing technique is the
duplication of tokens or loss of tokens. The insertion of the new station,
removal of a station, also needs to be tackled for correct and reliable
operation of the token passing technique.

The performance of a token ring is governed by 2 parameters, which are
delay and throughput.

Delay is a measure of time: it is the time difference between the
moment a packet is ready for transmission and the moment it is
transmitted. The average time required to send the token to the next
station is a/N.

Throughput is a measure of the successful traffic in the


communication channel.

Throughput, S = 1/(1 + a/N) for a < 1

S = 1/[a(1 + 1/N)] for a > 1

where N = number of stations, a = Tp/Tt, Tp = propagation delay, and Tt = transmission delay.
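The two throughput formulas can be evaluated directly. The station count and the values of a below are illustrative, not from the text:

```python
# Token-ring throughput from the formulas above:
# S = 1/(1 + a/N) for a < 1, and S = 1/[a(1 + 1/N)] for a > 1,
# with a = Tp/Tt and N stations.

def token_ring_throughput(a, N):
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

# Illustrative values: 10 stations, a = 0.5 (short ring) and a = 2.0.
print(token_ring_throughput(0.5, 10))   # about 0.952
print(token_ring_throughput(2.0, 10))   # about 0.455
```

As the formulas suggest, throughput stays near 1 when propagation delay is small relative to transmission time (a < 1) and falls off roughly as 1/a otherwise.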

In the diagram below, when station 1 possesses the token, it starts
transmitting all the data frames in its queue. After transmission,
station 1 passes the token to station 2, and so on. Station 1 can
transmit again only after all the stations in the network have
transmitted their data and passed the token.

Note: It is important to note that A token can only work in that channel, for
which it is generated, and not for any other.

Switched Local Area Networks

Local Area Network (LAN)

broadcast channel shared among many hosts
any frames sent to the broadcast address reach all hosts on the LAN
earlier: all frames broadcast; those who don't want the data ignore it (bus topology)
now: frames sent to a particular MAC address reach only the destination host (star topology)

MAC Addresses

used to get a frame from one interface to another physically-connected interface (on the same network)
most are 48 bits long (depends on the link-layer protocol)
address burned into the adapter ROM
broadcast address usually all ones (FF-FF-FF-FF-FF-FF)

MAC Addresses are Globally Unique

address assignment administered by IEEE


manufacturer buys a portion of the MAC address space (prefix)
uses that prefix for all its MAC addresses and ensures it does not reuse a suffix
uniqueness provides address portability
can move Ethernet card from one LAN to another
don’t need hierarchy or aggregation like with IP addresses
because MAC addresses only used on one LAN

ARP: Address Resolution Protocol

when forwarding a packet with IP, a router knows when the packet
has reached its destination network
how can the router determine the associated MAC address for a given
IP address?


keep an ARP table: maps an IP address to a MAC address

Building an ARP Table


ARP table entry: IP address, MAC address, TTL (e.g. 20 minutes)

host A has no entry for host B's IP address in its table

A broadcasts an ARP query for B; all hosts on the LAN receive the query

the host with address B responds by unicast to A with its MAC address

all hosts hear the query and response, and cache the translations for A and B in their ARP tables

all hosts process all ARP packets, even those not addressed to themselves
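The table-building steps above can be sketched as a small cache with per-entry TTLs (class and method names are my own, for illustration): entries map an IP address to a MAC address and expire after a timeout such as 20 minutes.

```python
import time

class ArpTable:
    def __init__(self, ttl_seconds: float = 20 * 60):
        self.ttl = ttl_seconds
        self._entries = {}  # ip -> (mac, expiry timestamp)

    def learn(self, ip: str, mac: str) -> None:
        """Cache a translation heard in an ARP query or response."""
        self._entries[ip] = (mac, time.monotonic() + self.ttl)

    def lookup(self, ip: str):
        """Return the cached MAC, or None if absent or expired
        (a miss is what would trigger a broadcast ARP query)."""
        entry = self._entries.get(ip)
        if entry is None:
            return None
        mac, expiry = entry
        if time.monotonic() > expiry:
            del self._entries[ip]  # entry timed out
            return None
        return mac

table = ArpTable()
table.learn("222.222.222.220", "1A-23-F9-CD-06-9B")
print(table.lookup("222.222.222.220"))  # → 1A-23-F9-CD-06-9B
print(table.lookup("222.222.222.221"))  # → None (would broadcast a query)
```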

ARP Example

to send a packet from A to B via router R:

A needs ARP to resolve R's MAC address, and R needs ARP to resolve B's



ARP table for node 222.222.222.220

Ethernet
dominant wired LAN technology
very inexpensive: $20 for 100 Mbps
simpler and cheaper than FDDI (token ring) and ATM
speeds have improved dramatically over the years: 10 Mbps to 10 Gbps

Ethernet Topologies

originally used a bus


now nearly all installations use a star: hub or switch



Ethernet Frame Format

IP packet encapsulated inside Ethernet Frame


preamble: 7 bytes of 10101010 followed by one byte of 10101011, used to synchronize the receiver's clock rate with the sender's
addresses: 6 bytes each (destination and source MAC)
type: identifies the higher-level protocol (IP, IPX (Netware), AppleTalk)
CRC: error detection (corrupted frames are dropped)
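The fields listed above can be unpacked from raw bytes with a short sketch (my own naming; the preamble is omitted, since the hardware strips it before the frame reaches software):

```python
import struct

ETHERTYPE_IPV4 = 0x0800  # "type" value for an encapsulated IP packet

def parse_ethernet_frame(frame: bytes):
    """Split a frame into (dest MAC, src MAC, type, payload, CRC)."""
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])  # big-endian 2-byte type
    payload, crc = frame[14:-4], frame[-4:]
    return dst, src, ethertype, payload, crc

# Build a toy frame: broadcast destination, a made-up source address,
# IP as the type, a payload, and a placeholder 4-byte CRC.
frame = (b"\xff" * 6
         + bytes.fromhex("001a2b3c4d5e")
         + struct.pack("!H", ETHERTYPE_IPV4)
         + b"hello"
         + b"\x00\x00\x00\x00")
dst, src, ethertype, payload, crc = parse_ethernet_frame(frame)
print(hex(ethertype), payload)  # → 0x800 b'hello'
```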
Historic Ethernet

10Base5 (thick ethernet)
original standard
coaxial cable, 500 meters max length, 10 Mbps
bus topology, BNC connectors, must be terminated

10Base-2 (thin ethernet)
coaxial cable, 185 meters max length, 10 Mbps
bus topology, BNC connectors, must be terminated

10BaseT and 100BaseT (fast ethernet)
star topology
twisted pair, 100 m max distance between node and hub
hub is a physical-layer bit repeater with no frame buffering; all nodes are effectively on the same link

Gigabit ethernet
point-to-point links and shared links; CSMA/CD only for shared links
full duplex at 1 Gbps, and now 10 Gbps, for point-to-point links

Hubs versus Switches


hub: a bit repeater and signal amplifier
connects two LAN segments together
extends the maximum distance between two nodes
connects collision domains

switch: a link-layer forwarding device
stores and forwards frames using the MAC address
uses the link-layer protocol when forwarding frames
separates collision domains
an Ethernet switch only handles Ethernet frames, and uses CSMA/CD on each LAN segment



Switches versus Routers

Example Configuration



Switch Forwarding Algorithm
entry = switch_table.get(dest_mac)
if entry:
    if entry.interface == arriving_interface:
        drop(frame)   # destination is on the segment the frame arrived from
    else:
        entry.interface.forward(frame)
else:
    # destination unknown: flood on every interface except the arriving one
    for i in interfaces:
        if i != arriving_interface:
            i.forward(frame)

can the switch topology have loops? (if so, flooded frames would circulate forever; switches run a spanning-tree protocol to disable looping paths)

Switch Example

switch delivers a frame from A to I: sends only on interface 3

switch delivers a frame from A to C: sends on all interfaces

switch delivers a frame from C to I: sends only on interface 3, and learns that C is on interface 1
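The forwarding algorithm and the learning behavior in the example above can be combined into a small runnable sketch (class and method names are my own): the switch records the arriving interface for each source MAC it sees, filters frames destined for the segment they arrived on, and floods unknown destinations.

```python
class LearningSwitch:
    def __init__(self, interfaces):
        self.interfaces = interfaces
        self.table = {}  # MAC address -> interface

    def handle(self, src, dst, arriving):
        """Return the list of interfaces the frame is sent out on."""
        self.table[src] = arriving  # learn where src is reachable
        out = self.table.get(dst)
        if out is not None:
            # filter if dst is on the arriving segment, else forward
            return [] if out == arriving else [out]
        # destination unknown: flood everywhere except where it came in
        return [i for i in self.interfaces if i != arriving]

sw = LearningSwitch([1, 2, 3])
print(sw.handle("A", "I", arriving=1))  # I unknown → flood: [2, 3]
print(sw.handle("I", "A", arriving=3))  # A learned on 1 → [1]
print(sw.handle("C", "I", arriving=1))  # I learned on 3 → [3]
```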



VLANs

motivation

want to limit the spread of broadcast traffic

want to isolate traffic for small subnets, without requiring a separate physical switch for each subnet

want the flexibility to move hosts between subnets without physically rewiring them

approach

use a single switch with intelligence that knows which ports belong to which VLAN

use a router within the switch to route between VLANs
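The port-based VLAN idea can be sketched as follows (a minimal illustration with hypothetical names, not a full 802.1Q implementation): the switch keeps a port-to-VLAN map, and a broadcast frame is delivered only to ports in the sender's VLAN.

```python
class VlanSwitch:
    def __init__(self, port_to_vlan):
        self.port_to_vlan = port_to_vlan  # port number -> VLAN id

    def broadcast_ports(self, arriving_port):
        """Ports a broadcast frame is forwarded to: same VLAN only."""
        vlan = self.port_to_vlan[arriving_port]
        return [p for p, v in self.port_to_vlan.items()
                if v == vlan and p != arriving_port]

# ports 1-2 in one VLAN (id 10), ports 3-4 in another (id 20)
sw = VlanSwitch({1: 10, 2: 10, 3: 20, 4: 20})
print(sw.broadcast_ports(1))  # → [2]  (broadcast stays inside VLAN 10)
print(sw.broadcast_ports(3))  # → [4]  (broadcast stays inside VLAN 20)
```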
