
UNIT - II

Data link layer


Data link layer: Design issues, framing, Error detection and correction.
Elementary data link protocols: simplex protocol, A simplex stop and wait
protocol for an error-free channel, A simplex stop and wait protocol for noisy
channel.
Sliding Window protocols: A one-bit sliding window protocol, A protocol using
Go-Back-N, A protocol using Selective Repeat, Example data link protocols.
Medium Access sub layer: The channel allocation problem, Multiple access
protocols: ALOHA, Carrier sense multiple access protocols, collision free protocols.
Wireless LANs, Data link layer switching.
Introduction to the Data Link Layer
The data link layer is the second layer, directly above the physical layer.
This layer is responsible for the error-free transfer of data frames over the
physical layer.
The data link layer is divided into two sub-layers :
1. Logical Link Control Sub-layer (LLC) –
Provides the logic for the data link; it controls the synchronization, flow
control, and error checking functions of the data link layer. Its functions are –
(i) Error recovery.
(ii) It performs the flow control operations.
(iii) User addressing.
2. Media Access Control Sub-layer (MAC)
It controls access to, and multiplexing of, the transmission medium.
Transmission of data packets is controlled by this layer. This layer is responsible
for sending the data over the network interface card.
Functions are –
(i) To perform the control of access to media.
(ii) It performs the unique addressing to stations directly connected to LAN.
(iii) Detection of errors.
Design issues(functions) with data link layer are :
1. Services provided to the network layer –
The data link layer acts as a service interface to the network layer. Its principal service is
transferring data from the network layer on the sending machine to the network layer on the
destination machine. This transfer takes place via the DLL (data link layer).
Types of Services
The services are of three types −
• Unacknowledged connectionless service − The sender sends messages and the
receiver receives them without sending any acknowledgement; both nodes use a
connectionless service.
• Acknowledged connectionless service − The sender sends a message to the
receiver; when the receiver gets the message, it sends an acknowledgement back
confirming receipt, still over a connectionless service.
• Acknowledged connection-oriented service − Both sender and receiver use a
connection-oriented service, and all communication between the two nodes is
acknowledged.
2. Frame synchronization –
The source machine sends data in the form of blocks called frames to the destination machine.
The starting and ending of each frame should be identified so that the frame can be recognized by
the destination machine.
The Frame contains the following −
• Frame Header
• Payload field for holding packet
• Frame Trailer
The following are the types of framing methods that are used in Data Link Layer −
• Byte-oriented framing
• Bit-oriented framing and character count
• The frame is diagrammatically shown below −
3. Flow control –
Flow control prevents the sender from overwhelming the receiver. The source
machine must not send data frames at a rate faster than the capacity of the destination
machine to accept them.
Flow control allows two nodes that work at different speeds to communicate with
each other. The data link layer regulates the flow so that when a fast sender transmits
data, a slow receiver can still accept it at its own pace; this is why a flow control
technique is used.
4. Error control –
Error control detects and corrects transmission errors and prevents duplication of
frames. The errors introduced during transmission from the source to the destination
machine must be detected and corrected at the destination machine.
Error detection: Errors can be introduced by signal attenuation and noise. Data Link
Layer protocol provides a mechanism to detect one or more errors. This is achieved by
adding error detection bits in the frame and then receiving node can perform an error
check.
Error correction: Error correction is similar to error detection, except that the receiving
node not only detects the errors but also determines where the errors have occurred in the
frame.
FRAMING
Framing is a function of the data link layer that separates one message from all
other messages and all other destinations by adding a sender address and a
receiver address.
The receiver address indicates where the message is to go, and the
sender address helps the recipient acknowledge receipt.
On a point-to-point connection between two computers or devices, data is
transferred over the wire as a stream of bits. Framing divides this bit stream
into discernible blocks of information.
Methods of Framing :
There are four basic methods of framing, used to mark the start and end of each
frame.
1. Character Count
2. Flag Byte with Character Stuffing(Byte stuffing)
3. Starting and Ending Flags, with Bit Stuffing
4. Encoding Violations

1. Character Count
The first framing method uses a field in the header to specify the number of
characters in the frame. When the data link layer at the destination sees the
character count, it knows how many characters follow and hence where the end of
the frame is.
The trouble with this algorithm is that the count can be garbled by a transmission
error; the destination then loses synchronization and cannot locate the start of the
next frame.
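To make the method concrete, here is a small sketch (illustrative Python, not from the original text) where each frame begins with a count byte giving the frame's total length:

```python
def frame_with_count(messages):
    """Build a stream where each frame begins with its total length
    (the count includes the count byte itself)."""
    stream = []
    for msg in messages:
        stream.append(len(msg) + 1)
        stream.extend(msg)
    return stream

def deframe_with_count(stream):
    """Recover the frames by following the counts. If one count is
    garbled in transit, every later frame boundary is lost."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count
    return frames
```

For two messages [65, 66] and [67], the stream is [3, 65, 66, 2, 67]; corrupting the leading 3 would desynchronize everything after it.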
2. Flag Byte with Character Stuffing(Byte stuffing)
Byte - Stuffing − A byte is stuffed into the message to differentiate it from the
delimiter. Character stuffing is also known as byte stuffing or character-oriented
framing; it is similar to bit stuffing, but it operates on
bytes.
If the pattern of the flag byte is present in the message bytes, there must be a
strategy so that the receiver does not treat that pattern as the end of the frame.
In character-oriented protocols, the mechanism adopted is byte stuffing.
In byte stuffing, a special byte called the escape character (ESC) is stuffed before
every byte in the message that has the same pattern as the flag byte. If the ESC
pattern itself is found in the message bytes, another ESC byte is stuffed before it.
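The stuffing and unstuffing rules can be sketched as follows (illustrative Python; the FLAG and ESC byte values are arbitrary choices for the example, not mandated by the text):

```python
FLAG, ESC = 0x7E, 0x7D  # assumed delimiter and escape values

def byte_stuff(payload):
    out = [FLAG]                       # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)            # stuff ESC before special bytes
        out.append(b)
    out.append(FLAG)                   # closing flag
    return out

def byte_unstuff(frame):
    payload, i = [], 1                 # skip opening flag
    while i < len(frame) - 1:          # stop before closing flag
        if frame[i] == ESC:
            i += 1                     # next byte is literal data
        payload.append(frame[i])
        i += 1
    return payload
```

A payload containing the flag or escape values round-trips unchanged, since the receiver strips exactly the escapes the sender inserted.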
3. Starting and Ending Flags, with Bit Stuffing:
Bit - Stuffing − Frames are delimited by a special flag bit pattern; whenever the
sender's data contains a run of bits that could imitate the flag, an extra bit is
stuffed into the message to differentiate the data from the delimiter. This is also
called bit - oriented framing.
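With the common flag 01111110, the rule is: the sender inserts a 0 after any run of five consecutive 1s, and the receiver removes it. A minimal sketch (illustrative Python, not taken from the original text):

```python
def bit_stuff(bits):
    """After five consecutive 1s, insert a 0 so the payload can never
    contain the flag pattern 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1        # skip the stuffed 0 that follows five 1s
            run = 0
        i += 1
    return out
```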

•Frame Header − It contains the source and the destination addresses of the
frame.
•Payload field − It contains the message to be delivered.
•Trailer − It contains the error detection and error correction bits.
•Flags − A bit pattern that marks the beginning and end of a frame, generally
8 bits long. Most protocols use the 8-bit pattern 01111110 as the flag.
4.Physical layer coding violations method
Physical layer coding violations method of framing is only applicable to networks
in which the encoding on the physical medium contains some redundancy.
Some LANs encode each bit of data as two physical bits, using Manchester
coding.
Here, Bit 1 is encoded into a high-low (10) pair and Bit 0 is encoded into a low-
high (01) pair.
The scheme means that every data bit has a transition in the middle, making it
easy for the receiver to locate the bit boundaries.
The combinations high-high and low-low are not used for data but are used for
delimiting frames in some protocols.
As a final note on framing, many data link protocols use a combination of a
character count with one of the other methods for extra safety. When a frame
arrives, the count field is used to locate the end of the frame. Only if the
appropriate delimiter is present at that position and the checksum is correct is the
frame accepted as valid. Otherwise, the input stream is scanned for the next
delimiter.
Error Detection
Error
An error is a condition in which the receiver's information does not match the
sender's information. During transmission, digital signals suffer from noise that
can introduce errors into the binary bits travelling from sender to receiver: a
0 bit may change to 1, or a 1 bit may change to 0.
Error Detecting Codes (Implemented either at Data link layer or Transport
Layer of OSI Model) Whenever a message is transmitted, it may get
scrambled by noise or data may get corrupted.
To avoid this, we use error-detecting codes which are additional data added
to a given digital message to help us detect if any error has occurred during
transmission of the message.
Basic approach used for error detection is the use of redundancy bits, where
additional bits are added to facilitate detection of errors.
Types of error:
There may be three types of errors:
1. Single-bit error: only one bit in the frame, anywhere in it, is
corrupted.
2. Multiple-bit error: the frame is received with more than one bit in a corrupted
state.
3. Burst error: the frame contains more than one consecutive corrupted bit.
Some popular techniques for error detection are:
1. Simple Parity check
2. Two-dimensional Parity check
3. Checksum
4. Cyclic redundancy check (CRC)
1. Simple Parity check
Simple parity checking is the simplest and least expensive mechanism for detecting
errors.
In this technique, a redundant bit (either 0 or 1), known as a parity bit, is
appended at the end of the data unit so that the number of 1s becomes even
(so, for an 8-bit data unit, the total number of transmitted bits would be 9).
If the number of 1s is odd, a parity bit of 1 is appended; if the number of
1s is even, a parity bit of 0 is appended at the end of the data unit.
At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
This technique makes the total number of 1s even, so it is known as even-
parity checking.
Then, following cases are possible-
• If total number of 1’s is even and even parity is used, then receiver assumes that
no error occurred.
• If total number of 1’s is even and odd parity is used, then receiver assumes
that error occurred.
• If total number of 1’s is odd and odd parity is used, then receiver assumes that no
error occurred.
• If total number of 1’s is odd and even parity is used, then receiver assumes
that error occurred.
Parity Check Example-
Consider the data unit to be transmitted is 1001001 and even parity is used
At Sender Side-
Total number of 1’s in the data unit is counted.
Total number of 1’s in the data unit = 3.
Clearly, even parity is used and total number of 1’s is odd.
So, parity bit = 1 is added to the data unit to make total number of 1’s even.
Then, the code word 10010011 is transmitted to the receiver.
At Receiver Side-
After receiving the code word, total number of 1’s in the code word is counted.
Consider receiver receives the correct code word = 10010011.
Even parity is used and total number of 1’s is even.
So, receiver assumes that no error occurred in the data during the transmission.
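The sender and receiver sides of this example can be sketched in a few lines (illustrative Python):

```python
def add_even_parity(data_bits):
    """Sender: append a parity bit so the total number of 1s is even."""
    return data_bits + [sum(data_bits) % 2]

def accept(codeword):
    """Receiver: assume no error when the count of 1s is even."""
    return sum(codeword) % 2 == 0
```

For the data unit 1001001 this appends parity bit 1, giving the code word 10010011, which the receiver accepts; note that flipping any two bits still yields an even count, so such an error passes undetected.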
Advantage-
This technique is guaranteed to detect an odd number of bit errors (one, three, five
and so on): if an odd number of bits flips during transmission, the receiver can
detect it by counting the number of 1s.
Drawbacks Of Single Parity Checking
It cannot detect an even number of bit errors: if two bits are flipped or
interchanged, the error goes undetected.
2. Two-dimensional Parity check
Performance can be improved by using Two-Dimensional Parity Check which
organizes the data in the form of a table.
Parity check bits are calculated for each row, which is equivalent to a simple
parity check bit.
Parity check bits are also calculated for all columns, then both are sent along with
the data. At the receiving end these are compared with the parity bits calculated on
the received data.

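A sketch of how the table of parity bits might be built (illustrative Python; row parities are appended first, then a parity row over all columns, including the row-parity column):

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a row
    holding the even parity of every column."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [column_parity]
```

Every row and every column of the resulting block has an even number of 1s, which is what the receiver re-checks.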
Drawbacks Of 2D Parity Check
If two bits in one data unit are corrupted and two bits in exactly the same
positions in another data unit are also corrupted, the 2D parity checker will not be
able to detect the error.
This technique also cannot detect 4-bit errors or more in some cases.

3. Checksum
In the checksum error detection scheme, the data is divided into k segments of m
bits each.
At the sender's end, the segments are added using 1's complement arithmetic to get
the sum. The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise discarded.
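The scheme can be sketched with 1's complement arithmetic on m-bit segments (illustrative Python; the end-around carry step is what makes the addition 1's complement):

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments, wrapping any carry back into the sum."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)   # end-around carry
    return total

def make_checksum(segments, m):
    """Sender: complement of the 1's complement sum."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify(segments, checksum, m):
    """Receiver: sum of data plus checksum must be all 1s
    (whose complement is zero), otherwise the data is discarded."""
    return ones_complement_sum(segments + [checksum], m) == (1 << m) - 1
```

For example, with m = 4 and segments 1001 and 1100, the sum is 0110 and the checksum is 1001; adding all three at the receiver yields 1111, so the data is accepted.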
4. Cyclic redundancy check (CRC)
Unlike checksum scheme, which is based on addition, CRC is based on binary
division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are
appended to the end of data unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
A remainder indicates that the data unit has been damaged in transit and therefore
must be rejected.
• The data unit to be transmitted is 1010000 and the generator polynomial
G(x) = x^3 + 1 is encoded as 1001.
• The generator polynomial consists of 4 bits.
• So, a string of 3 zeroes is appended to the bit stream to be transmitted.
• The resulting bit stream is 1010000000.
• At the sender side, dividing this bit stream by 1001 (modulo-2 division) gives
CRC = 011.
• The code word to be transmitted is obtained by replacing the last 3 zeroes of
1010000000 with the CRC.
• Thus, the code word transmitted to the receiver = 1010000011.
Now, At receiver side:
• Receiver receives the bit stream = 1010000011.
• Receiver performs the binary division with the same generator polynomial .
From here,
• The remainder obtained on division is zero.
• This indicates to the receiver that no error occurred in the data during the
transmission.
• Therefore, the receiver accepts the data.
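The modulo-2 division used above can be sketched directly on bit strings (illustrative Python, reproducing the example with data 1010000 and generator 1001):

```python
def crc_remainder(bits, generator):
    """Modulo-2 (XOR) long division of a bit string by the generator."""
    work = list(bits)
    g = generator
    for i in range(len(work) - len(g) + 1):
        if work[i] == '1':
            for j in range(len(g)):
                work[i + j] = '0' if work[i + j] == g[j] else '1'
    return ''.join(work[-(len(g) - 1):])

def crc_encode(data, generator):
    """Append zeroes, divide, and replace them with the remainder."""
    padded = data + '0' * (len(generator) - 1)
    return data + crc_remainder(padded, generator)
```

Encoding '1010000' with generator '1001' yields the code word '1010000011', and dividing that code word by the same generator leaves a zero remainder, exactly as in the receiver-side steps above.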
Error Correction
Error Correction codes are used to detect and correct the errors when data is
transmitted from the sender to the receiver.
Error Correction can be handled in two ways:
• Backward Error Correction When the receiver detects an error in the data
received, it requests back the sender to retransmit the data unit.
• Forward Error Correction When the receiver detects some error in the data
received, it executes error-correcting code, which helps it to auto-recover and to
correct some kinds of errors.
A single additional bit can detect the error, but cannot correct it.
For correcting errors, one has to know the exact position of the error. For
example, to correct a single-bit error in a 7-bit codeword, the error-correcting
code must determine which one of the seven bits is in error. To achieve this, we
have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the total number of data
bits. The number of redundant bits r can be calculated by using the formula:
2^r >= d + r + 1
The value of r is the smallest value satisfying the above relation. For example, if
the value of d is 4, then the smallest value of r that satisfies the relation is 3.
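The smallest r can be found by simply trying increasing values until the relation 2^r >= d + r + 1 holds (illustrative Python):

```python
def redundant_bits(d):
    """Smallest number of redundant bits r with 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r
```

For d = 4 this returns 3 (since 2^3 = 8 >= 4 + 3 + 1), matching the example above.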
Error Correction Techniques:
1. Hamming Code:
• Parity bits: The bit which is appended to the original data of binary bits so that
the total number of 1s is even or odd.
• Even parity: To check for even parity, if the total number of 1s is even, then the
value of the parity bit is 0. If the total number of 1s occurrences is odd, then the
value of the parity bit is 1.
• Odd Parity: To check for odd parity, if the total number of 1s is even, then the
value of parity bit is 1. If the total number of 1s is odd, then the value of parity bit
is 0.
• Relationship b/w Error position & binary number.

Let's understand the concept of Hamming code through an example:


Suppose the original data is 1010 which is to be sent.

Total number of data bits 'd' = 4


Number of redundant bits r : 2^r >= d + r + 1
2^r >= 4 + r + 1. The smallest value of r that satisfies this relation is 3.
Total number of bits = d + r = 4 + 3 = 7.
Determining the position of the redundant bits
• The number of redundant bits is 3. The three bits are represented by r1,
r2, r4. The positions of the redundant bits correspond to powers of 2;
their positions are therefore 2^0 = 1, 2^1 = 2, 2^2 = 4.
1.The position of r1 = 1
2.The position of r2 = 2
3.The position of r4 = 4
• Representation of Data on the addition of parity bits:

Determining the Parity bits


Determining the r1 bit
The r1 bit is calculated by performing a parity check on the bit
positions whose binary representation includes 1 in the first position.
We observe from the above figure that the bit positions that include 1 in the
first position are 1, 3, 5, 7. Now, we perform the even-parity check at these bit
positions. The total number of 1s at these bit positions corresponding to r1
is even, therefore, the value of the r1 bit is 0.
Determining r2 bit
The r2 bit is calculated by performing a parity check on the
bit positions whose binary representation includes 1 in the
second position.
We observe from the above figure that the bit positions that include 1 in the
second position are 2, 3, 6, 7. Now, we perform the even-parity check at these bit
positions. The total number of 1s at these bit positions corresponding to r2 is odd,
therefore, the value of the r2 bit is 1.
Determining r4 bit
The r4 bit is calculated by performing a parity check on the bit positions
whose binary representation includes 1 in the third position.
• We observe from the above figure that the bit positions that include 1 in
the third position are 4, 5, 6, 7. Now, we perform the even-parity check at
these bit positions. The total number of 1s at these bit positions
corresponding to r4 is even, therefore, the value of the r4 bit is 0.
Data transferred is given below:

Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity
bits are recalculated.
R1 bit
• The bit positions checked by r1 are 1, 3, 5, 7. The received bits at these
positions are 1100. Performing the even-parity check, the total number of 1s
at these positions is even. Therefore, the value of r1 is 0.
R2 bit
• The bit positions checked by r2 are 2, 3, 6, 7. The received bits at these
positions are 1001. Performing the even-parity check, the total number of 1s
at these positions is even. Therefore, the value of r2 is 0.
R4 bit
• The bit positions checked by r4 are 4, 5, 6, 7. The received bits at these
positions are 1011. Performing the even-parity check, the total number of 1s
at these positions is odd. Therefore, the value of r4 is 1.
• The binary representation of the redundant bits, i.e., r4r2r1, is 100, and its
corresponding decimal value is 4. Therefore, the error occurred in the 4th bit
position. That bit value must be changed from 1 to 0 to correct the error.
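The whole worked example can be sketched in Python (an illustration following the convention used above: the 4-bit data, written MSB first, occupies positions 7, 6, 5, 3, the even-parity bits r1, r2, r4 sit at positions 1, 2, 4, and the syndrome r4r2r1 names the error position):

```python
def hamming_encode(data4):
    """data4 written MSB first, e.g. 1010 -> [1, 0, 1, 0].
    Data bits go to positions 7, 6, 5, 3 (1-indexed)."""
    c = [0] * 8                              # index 0 unused
    c[7], c[6], c[5], c[3] = data4
    c[1] = (c[3] + c[5] + c[7]) % 2          # r1: even parity over 1,3,5,7
    c[2] = (c[3] + c[6] + c[7]) % 2          # r2: even parity over 2,3,6,7
    c[4] = (c[5] + c[6] + c[7]) % 2          # r4: even parity over 4,5,6,7
    return c[1:]                             # positions 1..7

def hamming_syndrome(code7):
    """Recompute the parity checks; the result r4r2r1 read as a binary
    number is the 1-indexed error position (0 means no error)."""
    c = [0] + list(code7)
    r1 = (c[1] + c[3] + c[5] + c[7]) % 2
    r2 = (c[2] + c[3] + c[6] + c[7]) % 2
    r4 = (c[4] + c[5] + c[6] + c[7]) % 2
    return 4 * r4 + 2 * r2 + r1
```

Encoding 1010 gives the codeword with r1 = 0, r2 = 1, r4 = 0 as above; flipping the 4th bit makes the syndrome 100 = 4, locating the error.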
PROTOCOLS IN DATA LINK LAYER
Data link protocols can be broadly divided into two categories, depending on
whether the transmission channel is noiseless or noisy.
Elementary data link protocols
Elementary Data Link protocols are classified into three categories, as given below −
1. simplex protocol (or) Unrestricted simplex protocol
2. A simplex stop and wait protocol for an error-free channel
3. A simplex stop and wait protocol for noisy channel.
1. simplex protocol (or) Unrestricted simplex protocol
It is very simple protocol.The sender sends a sequence of frames without even thinking
about the receiver.
 Data is transmitted in one direction only.
Both sender & receiver always ready.
Processing time can be ignored.
 Infinite buffer space is available at sender side and receiver side.
No errors are occurring that is no damage frames and no lost frames.
which we will nickname ‘‘Utopia,’’ .
The utopia protocol is unrealistic because it does not handle either flow control or
error correction .
The data link layer at the sender site gets data from its network layer, makes a frame
out of the data, and sends it. The data link layer(receiver site) receives a frame from its
physical layer, extracts data from the frame, and convey the data to its network layer.
The data link layers of the sender and receiver provide communication/transmission
services for their network layers. The data link layers utilization the services provided by
their physical layers for the physical transmission of bits.
Advantages of the simplex protocol:
1. It is a very simple protocol.
Disadvantages:
1. It is highly unrealistic (no flow control or error control).
2. Flooding: continuous data transmission may cause congestion in the
network, since the sender transmits frames faster than the receiver can accept
them; the network may become highly loaded during this transmission process.
To overcome this drawback we use the simplex stop-and-wait protocol.
2. A simplex stop and wait protocol for an error-free channel
Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send the next packet only after receiving acknowledgement for the previous.
Receiver:
Rule 1) Send an acknowledgement after receiving and consuming a data packet.
Rule 2) The acknowledgement must be sent only after the packet is consumed (flow control).
It is still very simple.
We still have unidirectional communication for data frames, but auxiliary ACK frames
travel from the other direction.
The receiver has finite buffer capacity.
In this, Communication is error free.
The receiver has a finite processing speed, i.e., the receiver is restricted to a
specific speed limit.
Problems :
1. Problems due to Lost Data
Sender waits for ACK for an infinite amount of time.
Receiver waits for data for an infinite amount of time.
2. Problem due to Lost Acknowledgement:
 Sender waits for ACK for infinite amount of time.
 Here ACK is lost due to some problems in the network, so the sender will be
waiting for infinite amount of time.
 Here there is no chance for sender to receive a ACK and there is no chance
for sender to send the next frame.

3. Delayed Acknowledgement/Data: After a timeout on the sender side, a long-
delayed acknowledgement might be wrongly considered as acknowledgement of
some other recent packet.
3. A simplex stop and wait protocol for noisy channel (Stop-and-Wait
ARQ, Automatic Repeat Request)
The above three problems are resolved by Stop-and-Wait ARQ (Automatic Repeat
Request).
Working of Stop-and-Wait ARQ:
1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with
sequence number 0 (the sequence number of the next expected data frame or
packet)
It is also called Positive Acknowledgement with Retransmission (PAR) or the
Automatic Repeat Request (ARQ) protocol.
Data transfer is only in one direction.
Consider a separate sender and receiver.
There is finite processing capacity and speed at the receiver.
Since it is a noisy channel, errors in data frames or acknowledgement frames are
expected.
Every frame has a unique sequence number.
After a frame has been transmitted, a timer is started for a finite time. If the
acknowledgement is not received before the timer expires, the frame is
retransmitted.
Without the timer, if an acknowledgement were corrupted or a data frame
damaged, the sender could wait forever before transmitting the next frame.
The Simplex Protocol for Noisy Channel is diagrammatically represented as
follows −
Advantages:
1. Handle lost frames by using timer.
Disadvantages:
1. Only one frame can be outstanding at a time.
2. If the timeout interval is too short, unneeded retransmissions occur.
3. If it is too long, bandwidth is wasted as the sender waits too long before
retransmitting.
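The retransmit-until-acknowledged loop can be sketched as a toy simulation (illustrative Python; the random loss model and the alternating 1-bit sequence number are assumptions for demonstration, not part of the original text):

```python
import random

def stop_and_wait_arq(frames, loss_rate=0.3, seed=42):
    """Toy Stop-and-Wait ARQ: each transmission may be 'lost' (frame
    or its ACK); the sender then times out and retransmits the same
    1-bit sequence number until the matching ACK arrives."""
    rng = random.Random(seed)
    delivered, expected = [], 0
    for seq, payload in enumerate(frames):
        seq %= 2                              # 1-bit sequence number
        while rng.random() < loss_rate:       # loss -> timeout -> resend
            pass
        if seq == expected:                   # new in-order frame accepted
            delivered.append(payload)
            expected ^= 1
        # a duplicate (seq != expected) would be discarded but re-ACKed
    return delivered
```

However many retransmissions the loss model forces, every frame is eventually delivered exactly once and in order, which is the guarantee ARQ provides.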
What is ARQ (Automatic Repeat Request)?
• ARQ stands for Automatic Repeat Request also known as Automatic Repeat
Query.
• ARQ is an error-control strategy used in a two-way communication system.
• It is a group of error-control protocols to achieve reliable data transmission over
an unreliable source or service.
• These protocols reside in Transport Layer and Data Link Layer of the OSI(Open
System Interconnection) model .
• These protocols are responsible for automatic retransmission of packets that are
found to be corrupted or lost during the transmission process.
Working Principle of ARQ
The main principle of these protocols is that the sender must receive an
acknowledgement from the receiver, implying that the frame or packet was
received correctly, before a timeout occurs. The timeout is a
specific time period within which the acknowledgement has to be sent
by the receiver to the sender.
 If a timeout occurs, i.e., the sender does not receive the acknowledgement
before the specified time, it is implied that the frame or packet was
corrupted or lost during the transmission. Accordingly, the sender
retransmits the packet, and these protocols ensure that this process is
repeated until the correct packet is transmitted.
Sliding Window protocols
Need of sliding window protocol:
Sliding window protocols are data link layer protocols for reliable and
sequential delivery of data frames. The sliding window is also used in
Transmission Control Protocol.
In this protocol, multiple frames can be sent by a sender at a time before
receiving an acknowledgment from the receiver. The term sliding window
refers to the imaginary boxes to hold frames. Sliding window method is also
known as windowing.
The number of frames that can be sent is based on the window size: both sender
and receiver maintain a window, and its size determines how many frames may be
outstanding at once.
In these protocols, each frame carries a sequence number. The sequence
numbers are used to find missing data at the receiver end. The purpose of the
sliding window protocol is also to avoid duplicate data, so it uses the sequence
number.
Example
Suppose that we have a sender window and a receiver window, each of size 4. The
sequence numbering of both windows will then be 0, 1, 2, 3, 0, 1, 2, 3 and so on.
The following diagram shows the positions of the windows after sending the
frames and receiving acknowledgments.
Types of Sliding Window Protocols:
1. A one-bit sliding window protocol
2. A protocol using Go-Back-N
3. A protocol using Selective Repeat
1. A one-bit sliding window protocol
• One bit sliding window protocol is based on the concept of sliding
window protocol. But here the window size is of 1 bit for both sides.
So, the sender transmits one frame at a time and waits for its ACK,
then transmit the next frame. It uses the concept of stop-and-wait
protocol.
One bit sliding window protocol is used for delivery of data frames.
Sender has sending window.
Receiver has receiving window.
Sending and receiving windows act as buffer storage.
Here size of windows size is 1.
One bit sliding window protocol uses Stop and Wait.
Sender transmit a frame with sequence number.
Than sender wait for acknowledgment from the receiver.
Receiver send back an acknowledgement with sequence number.
If sequence number of acknowledgement matches with sequence
number of frame then Sender transmit the next frame, Else sender re-
transmit the previous frame.
Its bidirectional protocol.
This protocol provides for full – duplex communications. Hence, the
acknowledgment is attached along with the next data frame to be sent
that is called piggybacking(Piggybacking Is a bi-directional data
transmission technique)
Piggybacking:
The data frames to be transmitted additionally carry an ACK
field, which is a few bits in length.
The ACK field contains the sequence number of the last frame
received without error.
If that sequence number matches the sequence number of the frame
awaiting acknowledgement, it is inferred that there was no error, and
the next frame is transmitted.
Otherwise, it is inferred that there was an error in the frame,
and the previous frame is retransmitted.
Attaching the acknowledgement to an outgoing data frame in
this way is called piggybacking.
Since this is a bidirectional protocol, the same algorithm
applies to both communicating parties.
Example
• The following diagram depicts a scenario with sequence numbers 0, 1, 2, 3, 0, 1, 2
and so on. It depicts the sliding windows in the sending and the receiving stations
during frame transmission.
Number of Sequence Numbers Required-
For any sliding window protocol to work without any problem, the following
condition must be satisfied-
• Available Sequence Numbers >= Sender Window Size + Receiver Window Size
• Stop and wait ARQ is a one bit sliding window protocol where-
• Sender window size = 1
• Receiver window size = 1
• Minimum number of sequence numbers required= Sender Window Size +
Receiver Window Size
= 1 + 1= 2
Sequence numbers on acknowledgements help to solve the problem of delayed
acknowledgements.
2. A protocol using Go-Back-N
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat
Request. It is a data link layer protocol that uses a sliding window method. In this,
if any frame is corrupted, all subsequent frames have to be sent again.
The size of the sender window is N in this protocol. For example, Go-Back-8,
the size of the sender window, will be 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not
accept a corrupted frame. When the timer expires, the sender sends the
correct frame again. The design of the Go-Back-N ARQ protocol is shown
below.
• Go-Back-N ARQ uses the concept of protocol pipelining i.e the sender can send
multiple frames before receiving the ACK for the first frame.
• There are finite number of frames and the frames are numbered in a sequential
manner.
• The number of frames that can be sent depends on the window size of the sender.
• If the ACK of a frame is not received within an agreed-upon time period, all
frames in the current window are retransmitted.
• The size of the sending window determines the range of sequence numbers for
outbound frames. Here N is the sender's window size.
Example:
 If the sending window size is 4, i.e. 2^2 (where 2 is the number of bits in the
sequence number), then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1 and
so on.
 With 2 bits in the sequence number, the binary sequence generated is
00, 01, 10, 11.
Example 2
• a. First, the sender sends the first four frames in the window (here the window
size is 4).

b. Then, the receiver sends the acknowledgment for the 0th frame.
c. The sender then slides the window over and sends the next frame in the queue.

d. Accordingly, the receiver sends the acknowledgement for the 1st frame, and
upon receiving that, the sender slides the window again and sends the next
frame. This process continues until all the frames are sent successfully.

When the timer expires, the sender resends all outstanding frames. For
example, suppose the sender has already sent frame 6, but the timer for frame
3 expires. This means that frame 3 has not been acknowledged; the sender goes
back and sends frames 3,4,5, and 6 again. That is why the protocol is called Go-
Back-N ARQ.
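A toy trace of this behaviour (illustrative Python; `lose_once` marks frames dropped on their first transmission, an assumed loss model for demonstration):

```python
def go_back_n(n_frames, window=4, lose_once=frozenset({1})):
    """Toy Go-Back-N: out-of-order frames after a loss are discarded
    by the receiver, and the sender re-sends from the lost frame."""
    delivered, sends, base = [], [], 0
    dropped = set(lose_once)
    while base < n_frames:
        end = min(base + window, n_frames)
        lost_at = None
        for i in range(base, end):
            sends.append(i)                    # log every transmission
            if i in dropped and lost_at is None:
                dropped.discard(i)             # lost only on first send
                lost_at = i
            elif lost_at is None:
                delivered.append(i)            # in-order frame accepted
            # frames sent after the gap are discarded by the receiver
        base = lost_at if lost_at is not None else end
    return delivered, sends
```

With 7 frames and frame 1 lost once, frames 2 and 3 are transmitted but discarded, and the sender goes back and re-sends 1, 2, 3, so the send log shows the go-back behaviour while delivery stays in order.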
ADVANTAGES
 The sender can send many frames at a time.
 A single timer can be set for a group of frames.
 Efficiency is higher.
 Waiting time is low.
 The size of the sender window can be altered.
DISADVANTAGES
 Buffer requirement: the transmitter needs to store the last N packets.
 The scheme is inefficient when the delay is large and the data transmission rate
is high.
 Many error-free packets may be retransmitted unnecessarily.
3. A protocol using Selective Repeat
Selective Repeat Automatic Repeat Request (ARQ) is one of the techniques that a data link layer
may deploy to control errors.
Techniques to control ARQ
Generally, there are three types of techniques that control errors by Automatic Repeat
Request (ARQ); they are −
• Stop-and-wait ARQ
• Go-Back-N ARQ
• Selective Repeat ARQ
Requirements for Error Control
There are some requirements for error control mechanisms and they are as follows −
• Error detection − The sender or the receiver must ascertain that there is some error in
transit.
• Positive ACK − Whenever a receiver receives a correct frame, it should acknowledge it.
• Negative ACK − Whenever the receiver receives a damaged frame or a duplicate frame, it
sends a NACK back to the sender and sender must retransmit the correct frame.
• Retransmission − The sender always maintains a clock and sets a timeout period. If the
ACK of a previously transmitted data frame does not arrive before the timeout, the sender
retransmits the frame, assuming that the frame or its ACK was lost in transit.
It is used for error detection and control in the data link layer.
In the selective repeat, the sender sends several frames specified by a window size
even without the need to wait for individual acknowledgement from the receiver
as in Go-Back-N ARQ. In selective repeat protocol, the retransmitted frame is
received out of sequence.
In Selective Repeat ARQ only the lost or error frames are retransmitted, whereas
correct frames are received and buffered.
The receiver while keeping track of sequence numbers buffers the frames in
memory and sends NACK for only frames which are missing or damaged. The
sender will send/retransmit a packet for which NACK is received.
Explanation
Step 1 − Frame 0 is sent from sender to receiver and a timer is set for it.
Step 2 − Without waiting for an acknowledgement from the receiver, another frame,
Frame 1, is sent by the sender, setting a timer for it.
Step 3 − In the same way, Frame 2 is also sent to the receiver, setting its timer,
without waiting for the previous acknowledgement.
Step 4 − Whenever the sender receives ACK0 from the receiver within Frame 0's
timer, that timer is stopped and the next frame, Frame 3, is sent.
Step 5 − Whenever the sender receives ACK1 from the receiver within Frame 1's
timer, that timer is stopped and the next frame, Frame 4, is sent.
Step 6 − If the sender doesn't receive ACK2 from the receiver within the time
slot, it declares a timeout for Frame 2 and resends it, because it assumes that
Frame 2 may have been lost or damaged.
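The receiver-side behavior described above (buffering out-of-order frames, NACKing only the missing ones, delivering in sequence) can be sketched in Python. The class and method names are illustrative, and sequence-number wraparound is omitted for clarity:

```python
class SelectiveRepeatReceiver:
    """Illustrative Selective-Repeat receiver: buffers frames that
    arrive out of sequence, reports gaps so only those frames are
    retransmitted, and delivers frames to the network layer in order."""

    def __init__(self):
        self.expected = 0    # next in-order frame to deliver
        self.buffer = {}     # out-of-order frames held in memory
        self.delivered = []  # frames passed up to the network layer

    def receive(self, seq, data):
        if seq < self.expected:
            return           # duplicate of an already delivered frame
        self.buffer[seq] = data
        # deliver the longest in-order run starting at `expected`
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

    def missing(self, highest_seen):
        """Sequence numbers to NACK: gaps below the highest frame seen."""
        return [s for s in range(self.expected, highest_seen + 1)
                if s not in self.buffer]

rx = SelectiveRepeatReceiver()
for seq, data in [(0, "f0"), (1, "f1"), (3, "f3")]:  # frame 2 is lost
    rx.receive(seq, data)
rx.missing(3)        # -> [2]: only the lost frame needs retransmission
rx.receive(2, "f2")  # retransmitted frame arrives out of sequence
rx.delivered         # -> ["f0", "f1", "f2", "f3"], delivered in order
```

Note how, unlike Go-Back-N, the correctly received frames f3 stays buffered and is never retransmitted.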
Example data link protocols.
1. High-level Data Link Control (HDLC) protocol
 It is derived from SDLC (Synchronous Data Link Control), which was earlier used
by IBM; it was standardized, with some modifications, by the ISO as the HDLC
protocol.
HDLC (High-Level Data Link Control) is a bit-oriented protocol that is used
for communication over the point-to-point and multipoint links.
This protocol implements the mechanism of ARQ(Automatic Repeat Request).
With the help of the HDLC protocol, full-duplex communication is possible.
HDLC is the most widely used protocol and offers reliability, efficiency, and a
high level of flexibility, because it provides both flow control and error
control using either selective repeat or go-back-N, depending on the network.
Types of HDLC Frames:
There are three types of HDLC frames. The type of frame is determined by the
control field of the frame −
I. Information frame:
I-frames or Information frames carry user data from the network layer. They
also include flow and error control information that is piggybacked on user data.
The first bit of control field of I-frame is 0.
II. Supervisory Frame:
 S-frames or Supervisory frames do not contain information field. They are
used for flow and error control when piggybacking is not required. The first
two bits of control field of S-frame is 10.
The control field executes control functions such as acknowledgement of
frames, request for re-transmission, and requests for limited suspension of
frames being sent.
III. Unnumbered Frame
This control field format can also be used for control purposes. It can implement link
initialization, link disconnection and other link control services.
It may contain an information field, if required. The first two bits of control field of U-frame
is 11.
HDLC Frame Fields:
Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit pattern
of the flag is 01111110.
Address − It contains the address of the receiver. If the frame is sent by the primary station, it
contains the address(es) of the secondary station(s). If it is sent by the secondary station, it
contains the address of the primary station. The address field may be from 1 byte to several
bytes.
Control − It is 1 or 2 bytes containing flow and error control information, and it
decides how the transmission process is controlled. The field includes the commands,
responses and sequence numbers used to support the link's data flow
accountability.
Payload − This carries the data from the network layer. Its length may vary from
one network to another.
FCS − It is a 2 byte or 4 bytes frame check sequence for error detection. The
standard code used is CRC (cyclic redundancy code)
Advantages of HDLC:
1. This protocol uses bit stuffing to handle flag patterns occurring in the data.
2.This protocol is used for point-to-point as well as multipoint link access.
3.HDLC is one of the most common protocols of the data link layer.
4.HDLC is a bit-oriented protocol.
5.This protocol implements error control as well as flow control.
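Advantage 1, bit stuffing, can be sketched as follows: the transmitter inserts a 0 after every run of five consecutive 1s so the payload can never imitate the 01111110 flag, and the receiver removes those stuffed bits. The helper names are illustrative:

```python
def bit_stuff(bits):
    """Transmitter side: insert a 0 after every run of five 1s so the
    payload can never contain the 01111110 flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Receiver side: drop the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        i += 1
        if run == 5:
            i += 1   # skip the stuffed 0
            run = 0
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]               # six 1s in a row
stuffed = bit_stuff(data)                      # -> [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert bit_unstuff(stuffed) == data            # round-trips losslessly
```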
2. Point to point protocol(PPP)
The PPP stands for Point-to-Point protocol. It is the most commonly used
protocol for point-to-point access. Suppose the user wants to access the internet
from the home, the PPP protocol will be used.
It is a data link layer protocol that resides in layer 2 of the OSI model.
It is used to encapsulate the layer 3 protocols and all the information available in
the payload in order to be transmitted across the serial links. The PPP protocol can
be used on synchronous link like ISDN as well as asynchronous link like dial-up.
It is mainly used for the communication between the two devices.
• It is a byte-oriented protocol as it provides the frames as a collection of bytes or
characters. It is a WAN (Wide Area Network) protocol as it runs over the internet
link which means between two routers, internet is widely used.
Services provided by PPP
• It defines the format of frames through which the transmission occurs.
• It defines the link establishment process. If user establishes a link with a server,
then "how this link establishes" is done by the PPP protocol.
• It defines data exchange process, i.e., how data will be exchanged, the rate of the
exchange.
• The main feature of the PPP protocol is the encapsulation. It defines how network
layer data and information in the payload are encapsulated in the data link frame.
• It defines the authentication process between the two devices. The authentication
between the two devices, handshaking and how the password will be exchanged
between two devices are decided by the PPP protocol.
• It does not support flow control mechanism.
Frame format of PPP protocol
• The frame format of PPP protocol contains the following fields:
•Flag: The flag field is used to indicate the start and end of the frame. The flag
field is a 1-byte field that appears at the beginning and the ending of the frame.
The pattern of the flag is similar to the bit pattern in HDLC, i.e., 01111110.
•Address: It is a 1-byte field that contains the constant value which is
11111111. These 8 ones represent a broadcast message.
Control: It is a 1-byte field which is set to the constant value 11000000. It is
not a required field, as PPP does not support flow control and has only a very
limited error control mechanism. The control field is mandatory in protocols
that support flow and error control mechanisms.
Protocol: It is a 1- or 2-byte field that defines what is to be carried in the
data field. The data can be user data or other information.
Payload: The payload field carries either user data or other information. The
maximum length of the payload field is 1500 bytes.
FCS − It is a 2 byte or 4 bytes frame check sequence for error detection. The
standard code used is CRC (cyclic redundancy code)
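The frame layout above can be sketched in Python. This is an illustrative builder only: `zlib.crc32` stands in for the real PPP FCS polynomial, byte stuffing of flag bytes inside the payload is omitted, and the control constant follows the value given in these notes:

```python
import zlib

FLAG = 0x7E        # 01111110, same flag pattern as HDLC
ADDRESS = 0xFF     # 11111111, the broadcast address
CONTROL = 0xC0     # 11000000, the constant value given above

def build_ppp_frame(protocol, payload):
    """Assemble the fields in order:
    flag | address | control | protocol (2 bytes) | payload | FCS | flag.
    Sketch only: zlib.crc32 is a stand-in for the actual PPP FCS
    computation, and flag bytes in the payload are not escaped."""
    body = bytes([ADDRESS, CONTROL]) + protocol.to_bytes(2, "big") + payload
    fcs = zlib.crc32(body).to_bytes(4, "big")  # 4-byte FCS variant
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_ppp_frame(0x0021, b"hello")  # 0x0021 carries IP in real PPP
# frame layout: 1 flag + 1 addr + 1 ctrl + 2 proto + 5 data + 4 FCS + 1 flag
```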
Medium Access Control sub layer(MACL)
About MACL:
MAC is a sublayer of the data link layer(DLL) in the seven layer OSI network
reference model.
MAC is responsible for the transmission of data packets to and from the
network interface card(NIC), and to and from another remotely shared
channel.
The basic function of MAC is to provide an addressing mechanism and
channel access. So that each node available on a network can communicate with
each other nodes available on the same or other networks.
The channel allocation problems
In a broadcast network, the single broadcast channel is to be allocated to one
transmitting user at a time. When multiple users use a shared network and want to
access the same network. Then channel allocation problem in computer networks
occurs.
So, to allocate the same channel among multiple users, some techniques are
used, which are called channel allocation techniques in computer networks.
Channel allocation is a process in which a single channel is divided and allotted
to multiple users in order to carry out user-specific tasks. The number of users
may vary every time the process takes place.
 If there are N users and the channel is divided into N equal-sized sub-channels,
each user is assigned one portion. If the number of users is small and does not
vary over time, then Frequency Division Multiplexing (FDM) can be used, as it
is a simple and efficient channel bandwidth allocation technique.
The channel allocation problem can be solved by two schemes:
1. Static Channel Allocation in LANs and MANs, and
2. Dynamic Channel Allocation.
1. Static Channel Allocation in LANs and MANs
In the static channel allocation scheme, a fixed portion of the frequency channel is
allotted to each user. For N competing users, the bandwidth is divided into N
channels using frequency division multiplexing (FDM) or time-division
multiplexing (TDM), and each portion is assigned to one user.
In these methods, either a fixed frequency or fixed time slot is allotted to each
user.
Static channel allocation is also called fixed channel allocation. Such as a
telephone channel among many users is a real-life example of static channel
allocation.
In this allocation scheme, there is no interference between the users since each
user is assigned a fixed channel. However, it is not suitable in case of a large
number of users with variable bandwidth requirements.
Advantages
• It is particularly suitable for situations where there are a small number of fixed users having a
steady flow of uniform network traffic.
• The allocation technique is simple and so the additional overhead of a complex algorithm need
not be incurred.
• There is no interference between the users since each user is assigned a fixed channel which is
not shared with others
Disadvantages
• If the value of N is very large, the bandwidth available for each user will be very less. This will
reduce the throughput if the user needs to send a large volume of data once in a while.
• It is very unlikely that all the users will be communicating all the time. However, since all of
them are allocated fixed bandwidths, the bandwidth allocated to non-communicating users lies
wasted.
• If the number of users is more than N, then some of them will be denied service, even if there
are unused frequencies.
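The first disadvantage above is easy to quantify: with static FDM each user's share shrinks linearly with N, whether or not the user is active. A minimal sketch (the function name is illustrative):

```python
def fdm_per_user_bandwidth(total_hz, n_users):
    """Static FDM: the band is cut into N equal, permanently assigned
    sub-channels, regardless of which users are currently active."""
    return total_hz / n_users

# 10 MHz among 10 steady users: 1 MHz each -- reasonable.
# The same band among 1000 mostly idle users: only 10 kHz each,
# and the idle users' shares lie wasted.
print(fdm_per_user_bandwidth(10_000_000, 10))    # prints: 1000000.0
print(fdm_per_user_bandwidth(10_000_000, 1000))  # prints: 10000.0
```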
2. Dynamic Channel Allocation in LANs and MANs
The technique in which channels are not permanently allocated to the users is
called dynamic channel allocation. In this technique, no fixed frequency or fixed
time slot is allotted to the user.
The allocation depends upon the traffic. If the traffic increases, more channels are
allocated, otherwise fewer channels are allocated to the users.
This technique optimizes bandwidth usage and provides fast data transmission.
The following are the assumptions in dynamic channel allocation:
1. Station Model:
The model consists of N independent stations (e.g. computers, PCs, mobiles, etc.), each with a
program or user that generates frames for transmission. Stations are sometimes called terminals.
Once a frame has been generated, the station is blocked and does nothing until the frame has
been successfully transmitted.
2. Single channel assumption:
A single channel is available for all communication. All stations can transmit on it and all can
receive from it.
3. Collision assumption:
If frames are transmitted at the same time by two or more stations, then a collision occurs
and both frames must be retransmitted.
4. Time assumption
Time can be divided into slotted or continuous time.
A) Continuous time: Frame transmission can begin at any instant. There is no
master clock dividing time into discrete intervals.
B) Slotted time: Time is divided into discrete slots. If a slot does not contain any
frame, it is called an idle slot; if it contains a single frame, the transmission
is successful; if it contains more than one frame, a collision is said to occur.
5. A) carrier sense: The stations may or may not be capable of detecting whether the
channel is in use before sending the frames. In algorithms which are based upon carrier
sense, a station sends frame only when it senses that the channel is not busy.
B) no carrier sense: In algorithms based upon no carrier sense, the stations transmit a
frame when it is available and later are informed whether successful transmission had
occurred or not.
Advantages
• Dynamic channel allocation schemes allot channels as needed. This results in optimum
utilization of network resources. There are fewer chances of denial of service and call
blocking in the case of voice transmission. These schemes adjust bandwidth allotment
according to traffic volume, and so are particularly suitable for bursty traffic.
Disadvantages
• Dynamic channel allocation schemes increase the computational as well as the storage load
on the system.
Multiple access protocols
The data link layer is used in a computer network to transmit the data
between two devices or nodes. It divides the layer into parts such
as data link control(logical link control layer) and the multiple
access resolution/protocol(media access layer).
The upper sub-layer (LLC) is responsible for flow control and error
control in the data link layer, and hence it is termed logical
link control. The lower sub-layer (MAC) is used to
reduce collisions and handle multiple access on a channel; hence
it is termed media access control or multiple access resolution.
What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit
data packets, the data link control is enough to handle the
channel.
Suppose there is no dedicated path to communicate or transfer
the data between two devices. In that case, multiple stations
access the channel and simultaneously transmits the data over the
channel. It may create collision and cross talk. Hence, the
multiple access protocol is required to reduce the collision
and avoid crosstalk between the channels.
Random Access Protocol(it is a sub part of the multiple access
protocol)
In this protocol, all stations have equal priority to send data over a
channel. In a random access protocol, no station depends on
another station, and no station controls another.
Depending on the channel's state (idle or busy), each station transmits its
data frame. However, if more than one station sends data over the channel
at the same time, there may be a collision or data conflict. Due to the collision,
the data frame packets may be lost or corrupted, and hence may not be
received correctly at the receiver end.
• Following are the different methods of random-access protocols for
broadcasting frames on the channel.
1. Aloha
2. CSMA(carrier sense multiple access)
3. CSMA/CD(carrier sense multiple access/ collision detection)
4. CSMA/CA(carrier sense multiple access/collision avoidance)
1. ALOHA
 Aloha is designed for wireless LAN (Local Area Network) but can also be used
in a shared medium to transmit data. In aloha, any station can transmit data to a
channel at any time. It does not require any carrier sensing.
 Using this method, any station can transmit data across a network
simultaneously when a data frameset is available for transmission.
 ALOHA is a random access protocol with two categories: pure ALOHA
and slotted ALOHA.
a. Pure ALOHA
In pure ALOHA, the stations transmit frames whenever they have data to send.
 When two or more stations transmit simultaneously, there is collision and the
frames are destroyed.
In pure ALOHA, whenever any station transmits a frame, it expects the
acknowledgement from the receiver.
If acknowledgement is not received within specified time, the station assumes that
the frame (or acknowledgement) has been destroyed.
If the frame is destroyed because of collision the station waits for a random
amount of time called back-off time(Tb) and sends it again. This waiting time
must be random otherwise same frames will collide again and again.
Therefore pure ALOHA dictates that when time-out period passes, each station
must wait for a random amount of time before re-sending its frame. This
randomness will help avoid more collisions.
Since different stations wait for different amount of time, the probability of
further collision decreases.
The throughput of pure aloha is maximized when frames are of uniform
length(means fixed size).
In the figure, there are four stations that contended with one another for access
to the shared channel. All these stations are transmitting frames. Some of
these frames collide because multiple frames are in contention for the
shared channel. Only two frames, frame 1.1 and frame 2.2, survive. All
other frames are destroyed.
Whenever two frames try to occupy the channel at the same time, there
will be a collision and both will be damaged. If first bit of a new frame
overlaps with just the last bit of a frame almost finished, both frames
will be totally destroyed and both will have to be retransmitted.
b. Slotted ALOHA
 Slotted ALOHA is designed to improve on pure ALOHA's efficiency,
because pure ALOHA has a very high probability of frame
collision.
 In slotted Aloha, the shared channel is divided into a fixed time
interval called slots. So that, if a station wants to send a frame to a
shared channel, the frame can only be sent at the beginning of the slot,
and only one frame is allowed to be sent to each slot.
 And if a station is unable to send its data at the beginning of a slot,
it will have to wait until the beginning of the next slot.
However, there is still a possibility of collision if two stations try
to send at the beginning of the same time slot.
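The efficiency gap between the two variants is captured by the standard throughput formulas (a well-known result, not derived in these notes): pure ALOHA achieves S = G·e^(−2G) and slotted ALOHA S = G·e^(−G), where G is the average number of transmission attempts per frame time:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): slotting halves the vulnerable period."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5 with about 18.4% channel utilization;
# slotted ALOHA peaks at G = 1 with about 36.8%.
print(round(pure_aloha_throughput(0.5), 3),
      round(slotted_aloha_throughput(1.0), 3))  # prints: 0.184 0.368
```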
2. CSMA(carrier sense multiple access)
It is a carrier-sense multiple-access based media access protocol that
senses the traffic on a channel (idle or busy) before transmitting the data.
If the channel is idle, the station can send data on the
channel; otherwise, it must wait until the channel becomes idle. Hence,
it reduces the chances of a collision on the transmission medium.
CSMA Access Modes
I. 1-Persistent: In the 1-persistent mode of CSMA, each node first
senses the shared channel; if the channel is idle, it
immediately sends the data. Otherwise, it keeps sensing the
channel continuously and broadcasts the frame
unconditionally as soon as the channel becomes idle.
II. Non-Persistent: In this access mode of CSMA, before
transmitting the data, each node must sense the channel; if the
channel is idle, it immediately sends the data. Otherwise, the station
waits for a random time (rather than sensing continuously), and when
the channel is then found to be idle, it transmits the frames.
III. P-Persistent: It is a combination of the 1-persistent and non-
persistent modes. In the p-persistent mode, each node senses
the channel, and if the channel is idle, it sends a frame with
probability p. Otherwise (with probability q = 1 − p), it defers
and tries again in the next time slot.
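The three access modes can be summarized in a single decision function. This is an illustrative sketch; the returned strings merely describe the station's next action:

```python
import random

def persistent_decision(mode, channel_idle, p=0.3):
    """One sensing step of the three CSMA persistence strategies."""
    if not channel_idle:
        if mode == "non-persistent":
            return "wait a random time, then sense again"
        return "keep sensing continuously"      # 1- and p-persistent
    # channel is idle
    if mode in ("1-persistent", "non-persistent"):
        return "transmit now"
    # p-persistent on an idle channel: transmit with probability p,
    # otherwise defer to the next time slot (probability q = 1 - p)
    return "transmit now" if random.random() < p else "wait for next slot"

persistent_decision("1-persistent", channel_idle=True)    # transmit now
persistent_decision("non-persistent", channel_idle=False) # random back-off
persistent_decision("p-persistent", channel_idle=True)    # probabilistic
```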
2.1. CSMA/ CD
It is a carrier sense multiple access/ collision detection network
protocol to transmit data frames. The CSMA/CD protocol works with a
medium access control layer. Therefore, it first senses the shared
channel before broadcasting the frames, and if the channel is idle, it
transmits a frame to check whether the transmission was successful.
If the frame is successfully received, the station sends another frame. If
any collision is detected in the CSMA/CD, the station sends a jam/ stop
signal to the shared channel to terminate data transmission. After that, it
waits for a random time before sending a frame to a channel.
2.2. CSMA/ CA
It is a carrier sense multiple access/collision avoidance network
protocol for carrier transmission of data frames.
 It is a protocol that works with the medium access control layer. When a
data frame is sent on a channel, the station listens for an acknowledgement
to check whether the channel is clear. If the station receives only a single
(its own) acknowledgement, the data frame has been successfully
transmitted to the receiver.
 But if it gets two signals (its own and one more, meaning the frames have
collided), a collision of frames has occurred on the shared channel. The
sender thus detects the collision of the frame when it receives the
acknowledgement signal.
Following are the methods used in the CSMA/ CA to avoid the collision:
• Interframe space: In this method, the station waits for the channel to become
idle, and if it gets the channel is idle, it does not immediately send the data.
Instead of this, it waits for some time, and this time period is called
the Interframe space or IFS. However, the IFS time is often used to define the
priority of the station.
• Contention window: In the contention window method, the total time is divided into
slots. When the station/sender is ready to transmit the data frame, it
chooses a random number of slots as its wait time. If the channel is still busy, it
does not restart the entire process; instead, it resumes the timer only when the
channel becomes idle again, and then sends the data packets.
• Acknowledgment: In the acknowledgment method, the sender station retransmits the
data frame on the shared channel if the acknowledgment is not received within the
expected time.
collision free protocols
when more than one station tries to transmit simultaneously via a shared
channel, the transmitted data is garbled. This event is called collision. The
Medium Access Control (MAC) layer of the OSI model is responsible for
handling collision of frames.
 Collision-free protocols are devised so that collisions do not occur.
Protocols like CSMA/CD and CSMA/CA minimize the possibility of collisions once
the transmission channel is acquired by a station. However, a collision can
still occur during the contention period if more than one station starts to
transmit at the same time. Collision-free protocols resolve contention in the
contention period, and so the possibility of collisions is eliminated.
Types of Collision – free Protocols
1. Bit-Map Protocol
 Bit-map protocol is a collision free protocol that operates in the Medium Access
Control (MAC) layer of the OSI model. It resolves any possibility of collisions
while multiple stations are contending for acquiring a shared channel for
transmission.
In this protocol, a station that wishes to transmit broadcasts its intention before the
actual transmission. Protocols of this kind are called reservation protocols, because
they reserve channel ownership in advance and prevent collisions.
Working Principle
In this protocol, the contention period is divided into N slots, where N is the total
number of stations sharing the channel. If a station has a frame to send, it sets the
corresponding bit in the slot.
Suppose that there are 10 stations, so the number of contention slots will be
10. If stations 2, 3, 8 and 9 wish to transmit, they will set the
corresponding slots to 1.
 Once each station announces itself, one of them gets the channel
based upon any agreed criteria.
 Generally, transmission is done in the order of the slot numbers.
Each station has complete knowledge whether every other station
wants to transmit or not, before transmission starts. So, all
possibilities of collisions are eliminated.
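The contention period described above can be sketched in Python, reproducing the 10-station example (the function name is illustrative):

```python
def bitmap_contention(n_stations, wants_to_send):
    """One contention period of the basic bit-map protocol.

    Slot i belongs exclusively to station i, which sets it to 1 if it
    has a frame queued. After the contention period every station knows
    who wants to transmit, so transmission proceeds in slot-number
    order with no possibility of collision."""
    slots = [1 if i in wants_to_send else 0 for i in range(n_stations)]
    order = [i for i, bit in enumerate(slots) if bit == 1]
    return slots, order

slots, order = bitmap_contention(10, {2, 3, 8, 9})
# slots == [0, 0, 1, 1, 0, 0, 0, 0, 1, 1]; transmission order is 2, 3, 8, 9
```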
2. Binary Countdown
Binary countdown protocol is a collision free protocol that operates in the MAC
layer of the OSI model.
When more than one station tries to transmit simultaneously via a shared channel,
the transmitted data is garbled, an event called collision.
Collision free protocols resolves channel access while the stations are contending
for the shared channel, thus eliminating any possibilities of collisions.
A problem with the basic bit-map protocol is that the overhead is 1 contention
bit slot per station. We can do better than that by using binary station
addresses.
Working Principle of Binary Countdown
In a binary countdown protocol, each station is assigned a binary
address. The binary addresses are bit strings of equal lengths. When a
station wants to transmit, it broadcasts its address to all the stations in
the channel, one bit at a time starting with the highest order bit.
In order to decide which station gets the channel access, the addresses
of the stations which are broadcasted are ORed. The higher numbered
station gets the channel access.
Example
Suppose that six stations contend for channel access which have the addresses:
1011, 0010, 0101, 1100, 1001 and 1101.
The iterative steps are −
All stations broadcast their most significant bit, i.e. 1, 0, 0, 1, 1, 1. Stations
0010 and 0101 see a 1 bit from other stations, and so they give up competing for the
channel.
The stations 1011, 1100, 1001 and 1101 continue. They broadcast their next bit,
i.e. 0, 1, 0, 1. Stations 1011 and 1001 see a 1 bit from other stations, and so they give
up competing for the channel.
The stations 1100 and 1101 continue. They broadcast their next bit, i.e. 0, 0. Since
both of them have same bit value, both of them broadcast their next bit.
The stations 1100 and 1101 broadcast their least significant bit, i.e. 0 and 1. Since
station 1101 has 1 while the other 0, station 1101 gets the access to the channel.
After station 1101 has completed frame transmission, or there is a time-out, the
next contention cycle starts.
The procedure is illustrated as follows −
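The bit-by-bit arbitration above can also be sketched in Python (the function name and representation are illustrative; the wired-OR behavior follows the description above):

```python
def binary_countdown(addresses, width=4):
    """Arbitrate channel access among stations with distinct binary
    addresses: the highest address wins. Each station broadcasts its
    address bit by bit, most significant bit first; the channel ORs
    the bits, and a station that sent 0 while the OR is 1 drops out."""
    active = list(addresses)
    for bit in range(width - 1, -1, -1):
        sent = {a: (a >> bit) & 1 for a in active}
        channel_or = max(sent.values())   # wired-OR of the broadcast bits
        if channel_or == 1:
            active = [a for a in active if sent[a] == 1]
    return active[0]   # distinct addresses guarantee a single survivor

# The worked example above: stations 1011, 0010, 0101, 1100, 1001, 1101
winner = binary_countdown([0b1011, 0b0010, 0b0101, 0b1100, 0b1001, 0b1101])
print(format(winner, "04b"))  # prints: 1101
```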
Wireless LANs
The 802.11 Protocol Stack
• The protocols used by all the 802 variants, including Ethernet,
have a certain commonality of structure.
• The physical layer corresponds to the OSI physical layer fairly
well, but the data link layer in all the 802 protocols is split into
two or more sublayers.
• In 802.11, the MAC (Medium Access Control) sublayer
determines how the channel is allocated, that is, who gets to
transmit next.
• Above it is the LLC (Logical Link Control) sublayer, whose job it
is to hide the differences between the different 802 variants and
make them indistinguishable as far as the network layer is
concerned.
The 802.11 Physical Layer
As we know, the physical layer is responsible for converting a data stream into signals; the bits of 802.11 networks can be
converted to radio waves or infrared waves.
• There are six different specifications of IEEE 802.11. These implementations, except the first one, operate in the industrial,
scientific and medical (ISM) bands. These three bands are unlicensed and their ranges are:
1. 902-928 MHz
2. 2.400-2.4835 GHz
3. 5.725-5.850 GHz
The different implementations of IEE802.11 are given below:
1. IEEE 802.11 infrared
• It uses diffused (not line-of-sight) infrared light in the range of 800 to 950 nm.
• It allows two different speeds: 1 Mbps and 2 Mbps.
• For a 1-Mbps data rate, 4 bits of data are encoded into a 16-bit code. This 16-bit code contains fifteen 0s and a
single 1.
• For a 2-Mbps data rate, a 2-bit code is encoded into a 4-bit code. This 4-bit code contains three 0s and a
single 1.
• The modulation technique used is pulse position modulation (PPM), i.e. for converting the digital signal to analog.
2. IEEE 802.11 FHSS
• IEEE 802.11 uses Frequency Hoping Spread Spectrum (FHSS) method for signal generation.
• This method uses 2.4 GHz ISM band. This band is divided into 79 subbands of 1MHz with some guard
bands.
• In this method, at one moment data is sent by using one carrier frequency and then by some other carrier
frequency at next moment. After this, an idle time is there in communication. This cycle is repeated after
regular intervals.
• A pseudo random number generator selects the hopping sequence.
• The allowed data rates are 1 or 2 Mbps.
• This method uses frequency shift keying (two-level or four-level) for modulation, i.e. for converting the digital
signal to analog.
3. IEEE 802.11 DSSS
• This method uses Direct Sequence Spread Spectrum (DSSS) method for signal generation. Each bit is
transmitted as 11 chips using a Barker sequence.
• DSSS uses the 2.4-GHz ISM band.
• It also allows the data rates of 1 or 2 Mbps.
• It uses the phase shift keying (PSK) technique at 1 Mbaud for converting the digital signal to an analog signal.
4. IEEE 802.11a OFDM
• This method uses Orthogonal Frequency Division Multiplexing (OFDM) for signal generation.
• This method is capable of delivering data up to 18 or 54 Mbps.
• In OFDM all the subbands are used by one source at a given time.
• It uses 5 GHz ISM band.
• This band is divided into 52 subbands, with 48 subbands for data and 4 subbands for control information.
• If phase shift keying (PSK) is used for modulation then data rate is 18 Mbps. If quadrature amplitude
modulation (QAM) is used, the data rate can be 54 Mbps.
5. IEEE 802.11b HR-DSSS
• It uses High Rate Direct Sequence Spread Spectrum method for signal generation.
• HR-DSSS is similar to DSSS except for encoding method.
• Here, 4 or 8 bits are encoded into a special symbol called complementary code key (CCK).
• It uses 2.4 GHz ISM band.
• It supports four data rates: 1, 2, 5.5 and 11 Mbps.
• The 1 Mbps and 2 Mbps data rates use phase shift modulation.
• The 5.5 Mbps version uses BPSK and transmits at 1.375 Mbaud with 4-bit CCK encoding.
• The 11 Mbps version uses QPSK and transmits at 1.375 Mbaud with 8-bit CCK encoding.
6. IEEE 802.11g OFDM
• It uses OFDM modulation technique.
• It uses 2.4 GHz ISM band.
• It supports the data rates of 22 or 54 Mbps.
• It is backward compatible with 802.11b.
The 802.11 MAC Sublayer Protocol
802.11 supports two modes of operation.
The first, called DCF (Distributed Coordination Function), does not use any kind of central
control (in that respect, similar to Ethernet).
The other, called PCF (Point Coordination Function), uses the base station to control all
activity in its cell.
All implementations must support DCF but PCF is optional.
Distributed Coordination Function:
When DCF is employed, 802.11 uses a protocol called CSMA/CA (CSMA with Collision
Avoidance). In this protocol, both physical channel sensing and virtual channel sensing
are used. Two methods of operation are supported by CSMA/CA. In the first method,
when a station wants to transmit, it senses the channel. If it is idle, it just starts
transmitting. It does not sense the channel while transmitting but emits its entire frame,
which may well be destroyed at the receiver due to interference there. If the channel is
busy, the sender defers until it goes idle and then starts transmitting.
If a collision occurs, the colliding stations wait a random time, using the Ethernet binary
exponential backoff algorithm, and then try again later.
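The Ethernet-style binary exponential backoff mentioned above can be sketched as follows (parameter names are illustrative; classic Ethernet caps the exponent at 10):

```python
import random

def backoff_slots(collision_count, max_exponent=10):
    """After the k-th successive collision, wait a random number of
    slot times drawn from [0, 2^k - 1], with the exponent capped so
    the maximum wait does not grow without bound."""
    k = min(collision_count, max_exponent)
    return random.randint(0, 2 ** k - 1)

# After the 3rd collision a station waits between 0 and 7 slot times;
# randomizing the wait makes a repeat collision progressively less likely.
wait = backoff_slots(3)
```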
The other mode of CSMA/CA operation is based on MACAW and uses virtual channel
sensing.
Suppose A wants to send to B. C is a station within range of A (and possibly within range of B, but that does not matter). D is a station within range of B but not within range of A. The exchange proceeds as follows:
1. When a station wants to transmit, it senses the channel to see whether it is free or not.
2. If the channel is not free the station waits for back off time.
3. If the station finds the channel to be idle, it waits for a period of time called the distributed interframe space (DIFS).
4. The station then sends a control frame called request to send (RTS).
5. The destination station receives the frame and waits for a short period of
time called short interframe space (SIFS).
6. The destination station then sends a control frame called clear to send
(CTS) to the source station. This frame indicates that the destination station is
ready to receive data.
7. The sender then waits for SIFS time and sends data.
8. The destination waits for SIFS time and sends acknowledgement for the
received frame.
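Steps 1–8 above can be sketched as a simple event trace. The DIFS and SIFS durations used here are placeholder values for illustration, not the real 802.11 timings.

```python
# Event-ordering sketch of the DCF RTS/CTS exchange described above.
DIFS, SIFS = 50e-6, 10e-6  # assumed example durations, not standard values

def rts_cts_exchange(channel_idle):
    events = []
    if not channel_idle:
        events.append("wait: backoff")         # step 2: channel busy
        return events
    events.append(f"wait DIFS ({DIFS}s)")      # step 3
    events.append("sender -> RTS")             # step 4
    events.append(f"wait SIFS ({SIFS}s)")      # step 5
    events.append("receiver -> CTS")           # step 6
    events.append(f"wait SIFS ({SIFS}s)")      # step 7
    events.append("sender -> DATA")
    events.append(f"wait SIFS ({SIFS}s)")      # step 8
    events.append("receiver -> ACK")
    return events

trace = rts_cts_exchange(channel_idle=True)
```

Note how every control or data frame is separated only by a SIFS, which is what gives the ongoing exchange priority over stations that must wait a full DIFS.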
Collision avoidance
• 802.11 standard uses Network Allocation Vector (NAV) for collision avoidance.
• The procedure used in NAV is explained below:
1. Whenever a station sends an RTS frame, it includes the duration of time for which the station will occupy
the channel.
2. All other stations affected by the transmission create a timer called the network allocation vector
(NAV).
3. This NAV (created by the other stations) specifies for how much time these stations must not check the
channel.
4. Each station, before sensing the channel, checks its NAV to see whether it has expired.
5. If its NAV has expired, the station can send data; otherwise it has to wait.
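The NAV rule can be sketched as follows: a bystander that hears an RTS sets a timer from the duration field and defers, without even sensing the medium, until the timer expires. Times are plain floats in seconds; the names are illustrative.

```python
# Sketch of virtual carrier sensing with a NAV timer.
class Station:
    def __init__(self):
        self.nav_expiry = 0.0  # time until which the medium is reserved

    def on_rts_heard(self, now, duration):
        # The RTS carries the reservation duration; a bystander sets
        # (or extends) its NAV for that long.
        self.nav_expiry = max(self.nav_expiry, now + duration)

    def may_sense_channel(self, now):
        # Only after the NAV expires may the station sense the channel again.
        return now >= self.nav_expiry

s = Station()
s.on_rts_heard(now=0.0, duration=0.003)
assert not s.may_sense_channel(0.001)  # still reserved
assert s.may_sense_channel(0.004)      # NAV expired
```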
• There can also be a collision during handshaking, i.e., when the RTS or CTS control frames are exchanged
between the sender and receiver. In this case, the following procedure is used for collision avoidance:
1. When two or more stations send an RTS to a station at the same time, their control frames collide.
2. If the CTS frame is not received by the sender, it assumes that there has been a collision.
3. In such a case, the sender waits for the backoff time and retransmits the RTS.
2. Point Coordination Function
• The PCF method is used in infrastructure networks. Here the access point (AP) is used to control the network activity.
• It is implemented on top of the DCF and is used for time-sensitive transmissions.
• PCF uses a centralized, contention-free polling access method.
• The AP polls the stations that want to transmit data; the stations are polled one after the
other.
• To give priority to PCF over DCF, another interframe space called PIFS is defined. PIFS (PCF IFS) is shorter than
DIFS.
• If at the same time, a station is using DCF and AP is using PCF, then AP is given priority over the station.
• Due to this priority of PCF over DCF, stations that only use DCF may not gain access to the channel.
• To overcome this problem, a repetition interval is defined that is repeated continuously. This repetition interval
starts with a special control frame called beacon frame.
• When a station hears the beacon frame, it starts its NAV for the duration of the repetition interval.
The 802.11 Frame Structure:

The 802.11 standard defines three different classes of frames on the wire: data, control,
and management. Each of these has a header with a variety of fields used within the MAC
sublayer.

The MAC layer frame consists of nine fields.


1. Frame Control (FC). This is a 2-byte field that defines the type of frame and some control information. It
contains several subfields.
2. D (Duration). This 2-byte field defines the duration for which the frame and its
acknowledgement will occupy the channel. It is also used to set the NAV value for other stations.
3. Addresses. There are four address fields, each 6 bytes long. These addresses represent the source,
destination, source base station, and destination base station.
4. Sequence Control (SC). This 2-byte field defines the sequence number of the frame, used in flow control.
5. Frame body. This field can be between 0 and 2312 bytes and contains the payload.
6. FCS. This field is 4 bytes long and contains a CRC-32 error detection sequence.
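As a rough illustration of the fixed fields listed above, the sketch below packs FC, Duration, three 6-byte addresses, and Sequence Control into the 24-byte header of a typical three-address data frame (802.11 header fields are little-endian on the wire). The field values are arbitrary examples, not taken from the text.

```python
import struct

# Pack FC (2 bytes), Duration (2), addresses 1-3 (6 each), and
# Sequence Control (2): 24 bytes in total.
def pack_header(fc, duration, addr1, addr2, addr3, seq_ctrl):
    assert all(len(a) == 6 for a in (addr1, addr2, addr3))
    return struct.pack("<HH6s6s6sH", fc, duration, addr1, addr2, addr3, seq_ctrl)

hdr = pack_header(0x0801, 0x002C, b"\xaa" * 6, b"\xbb" * 6, b"\xcc" * 6, 0x0010)
assert len(hdr) == 24
```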
IEEE 802.11 Frame types
There are three different types of frames:
1. Management frame
2. Control frame
3. Data frame
1. Management frame. These are used for initial communication between stations and access points.
2. Control frame. These are used for accessing the channel and acknowledging frames. The control frames
are RTS and CTS.
3. Data frame. These are used for carrying data and control information.
802.11 Addressing
• There are four different addressing cases, depending on the values of the To DS and From DS subfields of the
FC field.
• Each flag can be 0 or 1, resulting in four different situations.
1. If To DS = 0 and From DS = 0, it indicates that frame is not going to distribution system and is not
coming from a distribution system. The frame is going from one station in a BSS to another.
2. If To DS = 0 and From DS = 1, it indicates that the frame is coming from a distribution system. The
frame is coming from an AP and is going to a station. The address 3 field contains the original sender of the frame
(in another BSS).
3. If To DS = 1 and From DS = 0, it indicates that the frame is going to a distribution system. The frame is
going from a station to an AP. The address 3 field contains the final destination of the frame.
4. If To DS = 1 and From DS = 1, it indicates that the frame is going from one AP to another AP in a wireless
distribution system.
The table below specifies the addresses for all four cases:
To DS | From DS | Address 1    | Address 2  | Address 3   | Address 4
  0   |    0    | Destination  | Source     | BSS ID      | N/A
  0   |    1    | Destination  | Sending AP | Source      | N/A
  1   |    0    | Receiving AP | Source     | Destination | N/A
  1   |    1    | Receiving AP | Sending AP | Destination | Source
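The four addressing cases can be captured in a small lookup; the role labels below are descriptive names used for illustration.

```python
# Map (To DS, From DS) to the meaning of the four address fields.
ADDRESS_ROLES = {
    (0, 0): ("Destination", "Source", "BSS ID", None),
    (0, 1): ("Destination", "Sending AP", "Source", None),
    (1, 0): ("Receiving AP", "Source", "Destination", None),
    (1, 1): ("Receiving AP", "Sending AP", "Destination", "Source"),
}

def address_roles(to_ds, from_ds):
    """Return the roles of address fields 1-4 for the given flag values."""
    return ADDRESS_ROLES[(to_ds, from_ds)]

# A frame headed into the distribution system carries the final
# destination in address 3.
assert address_roles(1, 0)[2] == "Destination"
```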
Data link layer Switching
• Network switching is the process of forwarding data frames or packets
from one port to another leading to data transmission from source to
destination. Data link layer is the second layer of the Open System
Interconnections (OSI) model whose function is to divide the stream of
bits from physical layer into data frames and transmit the frames
according to switching requirements. Switching in data link layer is
done by network devices called bridges.
Bridges
• A data link layer bridge connects multiple LANs (local area networks)
together to form a larger LAN. This process of aggregating networks is
called network bridging. A bridge connects the different components
so that they appear as parts of a single network.
Uses of Bridges
• Before getting into the technology of bridges, let us take a look at some
common situations in which bridges are used.
Example: First, many universities and corporate departments have their
own LANs to connect their own personal computers, servers, and devices
such as printers. Since the goals of the various departments differ,
different departments may set up different LANs, without regard to what
other departments are doing. Later, though, there is a need for
interaction, so bridges are needed. In this example, multiple LANs come
into existence due to the autonomy of their owners.
Switching by Bridges
• When a data frame arrives at a particular port of a bridge, the bridge examines the
frame’s data link address, or more specifically, the destination MAC address. If the
destination is known and lies on a different port, the bridge forwards the frame to the
destined port. Otherwise, the frame is discarded (filtered).
• The bridge is not responsible for end-to-end data transfer; it is concerned with
transmitting the data frame from one hop to the next. Hence, bridges do not examine
the payload field of the frame, and so they can switch packets of any network-layer
protocol.
• Bridges also connect virtual LANs (VLANs) to make a larger VLAN.
• If any segment of the bridged network is wireless, a wireless bridge is used to
perform the switching.
There are three main ways for bridging −
• simple bridging
• multi-port bridging
• learning or transparent bridging
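Learning (transparent) bridging, the last of the three, can be sketched as follows: the bridge records which port each source MAC address was seen on, forwards a frame to the learned port of its destination, floods unknown destinations, and filters frames whose destination sits on the ingress port. Entry aging, which real bridges also perform, is omitted here for brevity.

```python
# Minimal sketch of a learning (transparent) bridge.
class LearningBridge:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src, dst, in_port):
        self.table[src] = in_port          # learn where `src` lives
        out = self.table.get(dst)
        if out is None:
            # Unknown destination: flood to every port except the ingress.
            return [p for p in range(self.num_ports) if p != in_port]
        if out == in_port:
            return []                      # same segment: filter the frame
        return [out]                       # forward to the learned port

br = LearningBridge(4)
assert br.handle_frame("A", "B", 0) == [1, 2, 3]  # B unknown: flood
br.handle_frame("B", "A", 2)                      # bridge learns B is on port 2
assert br.handle_frame("A", "B", 0) == [2]        # now forwarded, not flooded
```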
Switching techniques
• In large networks, there can be multiple paths from sender to receiver. The switching
technique will decide the best route for data transmission.
• Switching techniques are used to connect systems for one-to-one
communication.
1. Circuit Switching
• Circuit switching is a switching technique that establishes a dedicated path between
sender and receiver.
• In the circuit switching technique, once the connection is established, the dedicated
path remains in place until the connection is terminated.
• Circuit switching in a network operates in a similar way as the telephone works.
• A complete end-to-end path must exist before the communication takes place.
• In circuit switching, when a user wants to send data, voice, or video, a request signal is
sent to the receiver, and the receiver sends back an acknowledgment to confirm the
availability of the dedicated path. After the acknowledgment is received, data is
transferred over the dedicated path.
• Circuit switching is used in the public telephone network, primarily for voice transmission.
• Data is transferred at a fixed rate in circuit switching.
Circuits can be permanent or temporary. Applications which use circuit
switching may have to go through three phases:
• Establish a circuit
• Transfer the data
• Disconnect the circuit
• Circuit switching was designed for voice applications; the telephone is the best
example. Before a user can make a call, a dedicated circuit between caller and callee
is established over the network.
2. Message Switching
• This technique sits between circuit switching and packet switching. In message
switching, the whole message is treated as one data unit and is switched/transferred
in its entirety.
• A switch working on message switching first receives the whole message and
buffers it until resources are available to transfer it to the next hop. If the
next hop does not have enough resources to accommodate a large message, the
message is stored and the switch waits.
• This technique was considered a substitute for circuit switching, since in
circuit switching the whole path is blocked for two entities only.
Message switching has since been replaced by packet switching. Message switching
has the following drawbacks:
• Every switch in transit path needs enough storage to accommodate
entire message.
• Because of the store-and-forward technique and the waits incurred until
resources are available, message switching is very slow.
• Message switching was not a solution for streaming media and real-
time applications.
3. Packet Switching
• Shortcomings of message switching gave birth to an idea of packet switching.
• Packet switching is a switching technique in which the message is divided into
smaller pieces, and the pieces are sent individually.
• The pieces are known as packets, and each packet is given a
unique sequence number so its order can be identified at the receiving end.
• Every packet contains some information in its headers such as source address,
destination address and sequence number.
• Packets travel across the network, each taking the shortest available path.
• All the packets are reassembled at the receiving end in correct order.
• If any packet is missing or corrupted, a message is sent back asking the sender to
retransmit it.
• If the packets arrive in the correct order, an acknowledgment message
is sent.
• The internet uses packet switching technique. Packet switching enables the user to
differentiate data streams based on priorities. Packets are stored and forwarded
according to their priority to provide quality of service.
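The split-number-reassemble idea described above can be sketched as follows: the sender divides the message into fixed-size packets with sequence numbers, the packets may arrive out of order, and the receiver reassembles them by sequence number.

```python
# Sketch of packetization and in-order reassembly.
def packetize(message, size):
    """Split `message` (bytes) into (sequence_number, chunk) pairs."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the message by sorting packets on their sequence numbers."""
    return b"".join(data for _, data in sorted(packets))

msg = b"packet switching splits messages into numbered pieces"
pkts = packetize(msg, 8)
pkts.reverse()  # simulate out-of-order arrival
assert reassemble(pkts) == msg
```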
