1. Character Count
The first framing method uses a field in the header to specify the number of
characters in the frame. When the data link layer at the destination sees the
character count, it knows how many characters follow and hence where the end of
the frame is.
The trouble with this algorithm is that the count can be garbled by a transmission
error.
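The character count method can be sketched as a short parser (a minimal illustration, not from the notes; the 1-byte count field, and the convention that the count includes the count byte itself, are assumptions):

```python
def parse_frames(stream):
    """Split a byte stream into frames using a leading character count.

    Assumes a 1-byte count field whose value includes the count byte
    itself (a common textbook convention).
    """
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                       # header: total frame length
        frames.append(stream[i + 1:i + count])  # the data bytes that follow
        i += count                              # jump to the next header
    return frames

# Three frames of lengths 5, 5 and 8 (counts included).
stream = bytes([5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 8, 0, 1, 2, 3, 4, 5, 6])
```

If a single count byte is corrupted, the receiver loses frame alignment for the rest of the stream, which is exactly the weakness noted above.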
2. Flag Byte with Character Stuffing (Byte Stuffing)
Byte stuffing − a byte is stuffed into the message to differentiate it from the
delimiter. Character stuffing, also known as byte stuffing or character-oriented
framing, follows the same idea as bit stuffing but operates on whole bytes.
If the pattern of the flag byte is present in the message byte, there should be a
strategy so that the receiver does not consider the pattern as the end of the frame.
In character-oriented protocols, the mechanism adopted is byte stuffing.
In byte stuffing, a special byte called the escape character (ESC) is stuffed before
every byte in the message with the same pattern as the flag byte. If the ESC
sequence is found in the message byte, then another ESC byte is stuffed before it.
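A sketch of both sides of byte stuffing (the FLAG and ESC values below are assumptions, chosen to match HDLC/PPP-style framing):

```python
FLAG = 0x7E  # frame delimiter (assumed value, as in HDLC/PPP-style framing)
ESC = 0x7D   # escape byte (assumed value)

def byte_stuff(payload):
    """Sender: insert an ESC before every FLAG or ESC byte in the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # stuff the escape byte first
        out.append(b)
    return bytes(out)

def byte_unstuff(data):
    """Receiver: drop each stuffed ESC and keep the byte that follows it."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == ESC:
            i += 1           # skip the escape; the next byte is literal
        out.append(data[i])
        i += 1
    return bytes(out)
```

After unstuffing, the receiver recovers the payload exactly, even if it contained FLAG or ESC bytes.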
3. Starting and Ending Flags, with Bit Stuffing:
Bit stuffing − a pattern of bits of arbitrary length is stuffed into the message to
differentiate it from the delimiter. This is also called bit-oriented framing.
•Frame Header − It contains the source and the destination addresses of the
frame.
•Payload field − It contains the message to be delivered.
•Trailer − It contains the error detection and error correction bits.
•Flags − A bit pattern that defines the beginning and end of a frame. It is
generally 8 bits long; most protocols use the 8-bit pattern 01111110 as the flag.
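The usual rule (assumed here, as in HDLC) is to stuff a 0 after every five consecutive 1s in the payload, so the 01111110 flag can never appear inside a frame:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s.

    `bits` is a string of '0'/'1' characters representing the payload.
    """
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # the stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits):
    """Delete the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:             # this is a stuffed 0: drop it
            skip = False
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
            run = 0
    return ''.join(out)
```

For example, the flag-like payload 01111110 is transmitted as 011111010, so it can no longer be mistaken for a delimiter.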
4. Physical layer coding violations method
Physical layer coding violations method of framing is only applicable to networks
in which the encoding on the physical medium contains some redundancy.
Some LANs encode each bit of data by using two physical bits, as Manchester
coding does.
Here, Bit 1 is encoded into a high-low (10) pair and Bit 0 is encoded into a low-
high (01) pair.
The scheme means that every data bit has a transition in the middle, making it
easy for the receiver to locate the bit boundaries.
The combinations high-high and low-low are not used for data but are used for
delimiting frames in some protocols.
As a final note on framing, many data link protocols use a combination of a
character count with one of the other methods for extra safety. When a frame
arrives, the count field is used to locate the end of the frame. Only if the
appropriate delimiter is present at that position and the checksum is correct is the
frame accepted as valid. Otherwise, the input stream is scanned for the next
delimiter.
Error Detection
Error
A condition in which the receiver's information does not match the sender's
information. During transmission, digital signals suffer from noise that can
introduce errors in the binary bits travelling from sender to receiver. That means a
0 bit may change to 1 or a 1 bit may change to 0.
Error Detecting Codes (Implemented either at Data link layer or Transport
Layer of OSI Model) Whenever a message is transmitted, it may get
scrambled by noise or data may get corrupted.
To avoid this, we use error-detecting codes which are additional data added
to a given digital message to help us detect if any error has occurred during
transmission of the message.
Basic approach used for error detection is the use of redundancy bits, where
additional bits are added to facilitate detection of errors.
Types of errors:
There may be three types of errors:
1. Single-bit error: Only one bit, anywhere in the frame, is corrupted.
2. Multiple-bit error: The frame is received with more than one bit in a corrupted
state.
3. Burst error: The frame contains more than 1 consecutive corrupted bits.
Some popular techniques for error detection are:
1. Simple Parity check
2. Two-dimensional Parity check
3. Checksum
4. Cyclic redundancy check (CRC)
1. Simple Parity check
Single parity checking is a simple and inexpensive mechanism for detecting
errors.
In this technique, a redundant bit (either 0 or 1), known as a parity bit, is
appended at the end of the data unit so that the number of 1s becomes even.
For an 8-bit data unit, the total number of transmitted bits would therefore be 9.
If the number of 1s is odd, then parity bit 1 is appended, and if the number of
1s is even, then parity bit 0 is appended at the end of the data unit.
At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
This technique makes the total number of 1s even, so it is known as even-
parity checking.
Then, following cases are possible-
• If total number of 1’s is even and even parity is used, then receiver assumes that
no error occurred.
• If total number of 1’s is even and odd parity is used, then receiver assumes
that error occurred.
• If total number of 1’s is odd and odd parity is used, then receiver assumes that no
error occurred.
• If total number of 1’s is odd and even parity is used, then receiver assumes
that error occurred.
Parity Check Example-
Consider the data unit to be transmitted is 1001001 and even parity is used
At Sender Side-
Total number of 1’s in the data unit is counted.
Total number of 1’s in the data unit = 3.
Clearly, even parity is used and total number of 1’s is odd.
So, parity bit = 1 is added to the data unit to make total number of 1’s even.
Then, the code word 10010011 is transmitted to the receiver.
At Receiver Side-
After receiving the code word, total number of 1’s in the code word is counted.
Consider receiver receives the correct code word = 10010011.
Even parity is used and total number of 1’s is even.
So, receiver assumes that no error occurred in the data during the transmission.
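The even-parity example above can be reproduced in a few lines (a sketch using bit strings):

```python
def add_even_parity(data_bits):
    """Sender: append a parity bit so the total number of 1s is even."""
    parity = '1' if data_bits.count('1') % 2 == 1 else '0'
    return data_bits + parity

def check_even_parity(code_word):
    """Receiver: accept the code word only if its count of 1s is even."""
    return code_word.count('1') % 2 == 0
```

For the data unit 1001001 this yields the code word 10010011; note that flipping an even number of bits still passes the check, which is the drawback discussed below.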
Advantage-
This technique is guaranteed to detect an odd number of bit errors (one, three, five
and so on).
If odd number of bits flip during transmission, then receiver can detect by
counting the number of 1’s.
Drawbacks Of Single Parity Checking
It cannot detect an error when an even number of bits are flipped, since the parity
remains unchanged.
If two bits are interchanged, then it cannot detect the error.
2. Two-dimensional Parity check
Performance can be improved by using Two-Dimensional Parity Check which
organizes the data in the form of a table.
Parity check bits are calculated for each row, which is equivalent to a simple
parity check bit.
Parity check bits are also calculated for all columns, then both are sent along with
the data. At the receiving end these are compared with the parity bits calculated on
the received data.
Drawbacks Of 2D Parity Check
If two bits in one data unit are corrupted and two bits in exactly the same
positions in another data unit are also corrupted, then the 2D parity checker will
not be able to detect the error.
In some cases, this technique cannot detect errors of 4 bits or more.
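A sketch of the table construction (an even-parity bit appended to each row, then a final row of column parities; the bit-string representation is an assumption):

```python
def two_d_parity(rows):
    """Build the 2D parity table for equal-length bit-string data units.

    Appends an even-parity bit to each row, then a final row of
    column parities computed over the extended rows.
    """
    with_row = [r + ('1' if r.count('1') % 2 else '0') for r in rows]
    col_row = ''.join(
        '1' if sum(int(r[i]) for r in with_row) % 2 else '0'
        for i in range(len(with_row[0]))
    )
    return with_row + [col_row]
```

The receiver recomputes the same table over the received data and compares the parity bits.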
3. Checksum
In checksum error detection scheme, the data is divided into k segments each of m
bits.
In the sender’s end the segments are added using 1’s complement arithmetic to get
the sum. The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise discarded.
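A sketch of the sender and receiver sides using 1's complement (end-around carry) addition on m-bit segments:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)  # wrap the carry around
    return total

def make_checksum(segments, m):
    """Sender: the checksum is the complement of the 1's complement sum."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def accept(segments, checksum, m):
    """Receiver: sum all segments plus the checksum and complement the
    result; zero means the data is accepted."""
    total = ones_complement_sum(segments + [checksum], m)
    return (total ^ ((1 << m) - 1)) == 0
```

For instance, with 4-bit segments 1001, 1010 and 0110, the sender's checksum is 0101, and the receiver's complemented sum comes out to zero.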
4. Cyclic redundancy check (CRC)
Unlike checksum scheme, which is based on addition, CRC is based on binary
division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are
appended to the end of data unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
A remainder indicates that the data unit has been damaged in transit and therefore
must be rejected.
• The generator polynomial G(x) = x^3 + 1 is encoded as 1001.
• Clearly, the generator polynomial consists of 4 bits.
• So, a string of 3 zeroes is appended to the bit stream 1010000 to be transmitted.
• The resulting bit stream is 1010000000.
At sender side CRC=011
• The code word to be transmitted is obtained by replacing the last 3 zeroes of
1010000000 with the CRC.
• Thus, the code word transmitted to the receiver = 1010000011.
Now, At receiver side:
• Receiver receives the bit stream = 1010000011.
• Receiver performs the binary division with the same generator polynomial .
From here,
• The remainder obtained on division is zero.
• This indicates to the receiver that no error occurred in the data during the
transmission.
• Therefore, the receiver accepts the data.
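The worked example (data 1010000, generator 1001) can be reproduced with modulo-2 long division:

```python
def crc_remainder(data, generator):
    """Sender: modulo-2 long division after appending n-1 zeros.

    `data` and `generator` are bit strings; returns the CRC bit string.
    """
    n = len(generator)
    work = list(data + '0' * (n - 1))
    for i in range(len(data)):
        if work[i] == '1':   # XOR the generator in at this position
            for j in range(n):
                work[i + j] = str(int(work[i + j]) ^ int(generator[j]))
    return ''.join(work[-(n - 1):])

def crc_check(codeword, generator):
    """Receiver: divide the codeword; a zero remainder means no error."""
    n = len(generator)
    work = list(codeword)
    for i in range(len(codeword) - n + 1):
        if work[i] == '1':
            for j in range(n):
                work[i + j] = str(int(work[i + j]) ^ int(generator[j]))
    return set(work[-(n - 1):]) == {'0'}
```

Dividing 1010000000 by 1001 leaves the remainder 011, so the transmitted code word is 1010000011, and the receiver's division of it leaves no remainder.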
Error Correction
Error Correction codes are used to detect and correct the errors when data is
transmitted from the sender to the receiver.
Error Correction can be handled in two ways:
• Backward Error Correction When the receiver detects an error in the data
received, it requests back the sender to retransmit the data unit.
• Forward Error Correction When the receiver detects some error in the data
received, it executes error-correcting code, which helps it to auto-recover and to
correct some kinds of errors.
A single additional bit can detect the error, but cannot correct it.
For correcting the errors, one has to know the exact position of the error. For
example, If we want to calculate a single-bit error, the error correction code will
determine which one of seven bits is in error. To achieve this, we have to add
some additional redundant bits.
Suppose r is the number of redundant bits and d is the total number of data
bits. The number of redundant bits r can be calculated by using the formula:
2^r >= d + r + 1
The value of r is the smallest integer satisfying the above relation. For example,
if the value of d is 4, then the smallest value of r that satisfies the relation is 3
(since 2^3 = 8 >= 4 + 3 + 1).
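The smallest r satisfying the relation can be found by direct search:

```python
def redundant_bits(d):
    """Smallest r with 2**r >= d + r + 1 (the relation stated above)."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r
```

For d = 4 this gives r = 3, matching the example above; for d = 7 it gives r = 4.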
Error Correction Techniques:
1. Hamming Code:
• Parity bits: The bit which is appended to the original data of binary bits so that
the total number of 1s is even or odd.
• Even parity: To check for even parity, if the total number of 1s is even, then the
value of the parity bit is 0. If the total number of 1s occurrences is odd, then the
value of the parity bit is 1.
• Odd Parity: To check for odd parity, if the total number of 1s is even, then the
value of parity bit is 1. If the total number of 1s is odd, then the value of parity bit
is 0.
• Relationship between error position and binary number:
Suppose the 4th bit is changed from 0 to 1 at the receiving end; the parity
bits are then recalculated.
R1 bit
• The bit positions checked by the r1 bit are 1, 3, 5, 7.
• We observe from the figure that the bits in the r1 positions are 1100.
Performing the even-parity check, the total number of 1s appearing in the
r1 positions is even. Therefore, the value of r1 is 0.
R2 bit
• The bit positions checked by the r2 bit are 2, 3, 6, 7.
We observe from the figure that the bits in the r2 positions are 1001.
Performing the even-parity check, the total number of 1s appearing in
the r2 positions is even. Therefore, the value of r2 is 0.
R4 bit
• The bit positions checked by the r4 bit are 4, 5, 6, 7.
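The parity recalculation above can be sketched for a 7-bit Hamming codeword; the codeword value used below is an assumed example, not the one in the notes' figure. Reading the recomputed bits as the binary number r4 r2 r1 gives the error position:

```python
def hamming_syndrome(code):
    """Recompute the even-parity checks of a 7-bit Hamming codeword.

    `code` is a string of 7 bits with position 1 on the left.  As in the
    notes, r1 checks positions 1,3,5,7; r2 checks 2,3,6,7; r4 checks
    4,5,6,7.  The binary number r4 r2 r1 is the error position
    (0 means no error detected).
    """
    bit = lambda p: int(code[p - 1])
    r1 = (bit(1) + bit(3) + bit(5) + bit(7)) % 2
    r2 = (bit(2) + bit(3) + bit(6) + bit(7)) % 2
    r4 = (bit(4) + bit(5) + bit(6) + bit(7)) % 2
    return 4 * r4 + 2 * r2 + r1
```

For the (assumed) valid codeword 0110011, the syndrome is 0; flipping its 4th bit gives 0111011, whose syndrome is 4, pointing at the corrupted position.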
2. A protocol using Go-Back-N
b. Then, the receiver sends the acknowledgment for the 0th frame.
c. The sender then slides the window over and sends the next frame in the queue.
d. Accordingly, the receiver sends the acknowledgement for the 1st frame, and
upon receiving it, the sender slides the window again and sends the next
frame. This process continues until all the frames are sent successfully.
When the timer expires, the sender resends all outstanding frames. For
example, suppose the sender has already sent frame 6, but the timer for frame
3 expires. This means that frame 3 has not been acknowledged; the sender goes
back and sends frames 3,4,5, and 6 again. That is why the protocol is called Go-
Back-N ARQ.
ADVANTAGES
The sender can send many frames at a time.
A timer can be set for a group of frames.
Efficiency is higher.
Waiting time is low.
We can alter the size of the sender window.
DISADVANTAGES
Buffer requirement
Transmitter needs to store the last N packets
Scheme is inefficient when delay is large and data transmission rate is high
Unnecessary Retransmission of many error-free packets
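The timeout behaviour described above can be traced with a simplified simulation (the loss model and window bookkeeping are assumptions of this sketch):

```python
def go_back_n(n_frames, window, lose_once):
    """Trace Go-Back-N retransmissions (a simplified sketch).

    `lose_once` is a set of frame numbers whose first transmitted copy is
    lost; retransmissions are assumed to succeed.  Returns the sequence of
    frame numbers placed on the wire.
    """
    wire = []                 # every frame copy actually transmitted
    lost = set(lose_once)
    base = 0                  # oldest unacknowledged frame
    next_f = 0                # next new frame to send
    while base < n_frames:
        # sender fills its window
        while next_f < min(base + window, n_frames):
            wire.append(next_f)
            next_f += 1
        # receiver ACKs cumulatively up to the first lost frame
        while base < next_f and base not in lost:
            base += 1
        if base < next_f:     # timeout for frame `base`: go back
            lost.discard(base)
            next_f = base     # resend base and everything after it
    return wire
```

With frame 3 lost, the trace shows frames 3, 4, 5 and 6 going out a second time, matching the example above.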
3. A protocol using Selective Repeat
Selective-Repeat Automatic Repeat Request (ARQ) is one of the techniques that a data link layer
may deploy to control errors.
Techniques to control errors by ARQ
Generally, there are three types of techniques which control errors by Automatic Repeat
Request (ARQ). They are −
• Stop-and-wait ARQ
• Go-Back-N ARQ
• Selective Repeat ARQ
Requirements for Error Control
There are some requirements for error control mechanisms and they are as follows −
• Error detection − The sender and receiver, or both, must be able to ascertain that some error
occurred in transit.
• Positive ACK − Whenever a receiver receives a correct frame, it should acknowledge it.
• Negative ACK − Whenever the receiver receives a damaged frame or a duplicate frame, it
sends a NACK back to the sender and sender must retransmit the correct frame.
• Retransmission − The sender always maintains a clock and sets a timeout period. If the
ACK for a previously transmitted data frame does not arrive before the timeout, the sender
retransmits the frame, assuming that the frame or its ACK was lost in transit.
It is used for error detection and control in the data link layer.
In the selective repeat, the sender sends several frames specified by a window size
even without the need to wait for individual acknowledgement from the receiver
as in Go-Back-N ARQ. In selective repeat protocol, the retransmitted frame is
received out of sequence.
In Selective Repeat ARQ only the lost or error frames are retransmitted, whereas
correct frames are received and buffered.
The receiver while keeping track of sequence numbers buffers the frames in
memory and sends NACK for only frames which are missing or damaged. The
sender will send/retransmit a packet for which NACK is received.
Explanation
Step 1 − Frame 0 is sent from sender to receiver and a timer is set for it.
Step 2 − Without waiting for an acknowledgement from the receiver, another
frame, frame 1, is sent by the sender, with a timer set for it.
Step 3 − In the same way, frame 2 is also sent to the receiver, with its own timer,
without waiting for the previous acknowledgement.
Step 4 − When the sender receives ACK0 from the receiver within frame 0's
timer, that timer is stopped and the next frame, frame 3, is sent.
Step 5 − When the sender receives ACK1 from the receiver within frame 1's
timer, that timer is stopped and the next frame, frame 4, is sent.
Step 6 − If the sender does not receive ACK2 from the receiver within the time
slot, it declares a timeout for frame 2 and resends frame 2, since it may have been
lost or damaged.
Example data link protocols.
1. High-level Data Link Control (HDLC) protocol
HDLC is derived from SDLC (Synchronous Data Link Control), which was
earlier used by IBM; it was standardized by the ISO, with some modifications,
as the HDLC protocol.
HDLC (High-Level Data Link Control) is a bit-oriented protocol that is used
for communication over the point-to-point and multipoint links.
This protocol implements the mechanism of ARQ(Automatic Repeat Request).
With the help of the HDLC protocol, full-duplex communication is possible.
HDLC is a widely used protocol and offers reliability, efficiency, and a
high level of flexibility, because it provides both flow control and error
control, using either selective repeat or go-back-N depending on the
network.
Types of HDLC Frames:
There are three types of HDLC frames. The type of frame is determined by the
control field of the frame −
I. Information frame:
I-frames or Information frames carry user data from the network layer. They
also include flow and error control information that is piggybacked on user data.
The first bit of control field of I-frame is 0.
II. Supervisory Frame:
S-frames or Supervisory frames do not contain information field. They are
used for flow and error control when piggybacking is not required. The first
two bits of control field of S-frame is 10.
The control field executes control functions such as acknowledgement of
frames, request for re-transmission, and requests for limited suspension of
frames being sent.
III. Unnumbered Frame
This control field format can also be used for control purposes. It can implement link
initialization, link disconnection and other link control services.
It may contain an information field, if required. The first two bits of control field of U-frame
is 11.
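Classifying a frame from the leading bits of its control field can be written directly (the bit-string input is an assumption of this sketch):

```python
def hdlc_frame_type(control):
    """Classify an HDLC frame from its 8-bit control field.

    `control` is a bit string, first bit on the left: a leading 0 marks
    an I-frame, 10 an S-frame, and 11 a U-frame, as described above.
    """
    if control[0] == '0':
        return 'I-frame'
    return 'S-frame' if control[1] == '0' else 'U-frame'
```

Only the first one or two bits decide the frame type; the remaining bits carry sequence numbers or command codes.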
2. Point-to-Point Protocol (PPP)
PPP is a data link layer protocol that resides in layer 2 of the OSI model.
It is used to encapsulate the layer 3 protocols and all the information available in
the payload in order to be transmitted across the serial links. The PPP protocol can
be used on synchronous link like ISDN as well as asynchronous link like dial-up.
It is mainly used for the communication between the two devices.
• It is a byte-oriented protocol as it provides the frames as a collection of bytes or
characters. It is a WAN (Wide Area Network) protocol as it runs over the internet
link which means between two routers, internet is widely used.
Services provided by PPP
• It defines the format of frames through which the transmission occurs.
• It defines the link establishment process. If user establishes a link with a server,
then "how this link establishes" is done by the PPP protocol.
• It defines data exchange process, i.e., how data will be exchanged, the rate of the
exchange.
• The main feature of the PPP protocol is the encapsulation. It defines how network
layer data and information in the payload are encapsulated in the data link frame.
• It defines the authentication process between the two devices. The authentication
between the two devices, handshaking and how the password will be exchanged
between two devices are decided by the PPP protocol.
• It does not support flow control mechanism.
Frame format of PPP protocol
• The frame format of PPP protocol contains the following fields:
•Flag: The flag field is used to indicate the start and end of the frame. The flag
field is a 1-byte field that appears at the beginning and the ending of the frame.
The pattern of the flag is similar to the bit pattern in HDLC, i.e., 01111110.
•Address: It is a 1-byte field that contains the constant value which is
11111111. These 8 ones represent a broadcast message.
Control: It is a 1-byte field which is set to the constant value 11000000. It is not a
required field, as PPP does not support flow control and has only a very limited
error control mechanism. The control field is mandatory in protocols that support
flow and error control mechanisms.
Payload: The payload field carries either user data or other information. The
maximum length of the payload field is 1500 bytes.
FCS − It is a 2 byte or 4 bytes frame check sequence for error detection. The
standard code used is CRC (cyclic redundancy code)
Medium Access Control sublayer (MAC)
About the MAC sublayer:
MAC is a sublayer of the data link layer(DLL) in the seven layer OSI network
reference model.
MAC is responsible for the transmission of data packets to and from the
network interface card(NIC), and to and from another remotely shared
channel.
The basic function of MAC is to provide an addressing mechanism and
channel access, so that each node on a network can communicate with the
other nodes on the same or other networks.
The channel allocation problem
In a broadcast network, the single broadcast channel is to be allocated to one
transmitting user at a time. When multiple users share a network and want to
access the same channel, the channel allocation problem occurs.
So, to allocate the same channel between multiple users, some techniques are
used, which are called channel allocation techniques in computer networks.
The allocation depends upon the traffic. If the traffic increases, more channels are
allocated, otherwise fewer channels are allocated to the users.
This technique optimizes bandwidth usage and provides fast data transmission.
The following are the assumptions in dynamic channel allocation:
1. Station Model:
The model consists of N independent stations (e.g., computers, PCs, mobiles, etc.), each with a
program or user that generates frames for transmission. Stations are sometimes called terminals.
Once a frame has been generated, the station is blocked and does nothing until the frame has
been successfully transmitted.
2. Single channel assumption:
A single channel is available for all communication. All stations can transmit on it and all can
receive from it.
3. Collision assumption:
If frames are transmitted at the same time by two or more stations, a collision occurs, and
both frames must be retransmitted.
4. Time assumption
A) Continuous time: Frame transmission can begin at any instant; there is no
master clock dividing time into discrete intervals.
B) Slotted time: Time is divided into discrete slots. If a slot does not contain any
frame, it is called an idle slot; if it contains a single frame, then the transmission
is successful; if it contains more than one frame, then a collision is said to
occur.
5. A) Carrier sense: The stations may or may not be capable of detecting whether the
channel is in use before sending frames. In algorithms based upon carrier sense, a
station sends a frame only when it senses that the channel is not busy.
B) No carrier sense: In algorithms based upon no carrier sense, a station transmits a
frame as soon as it is available and is later informed whether the transmission was
successful or not.
Advantages
• Dynamic channel allocation schemes allot channels as needed. This results in optimum
utilization of network resources. There are fewer chances of denial of service and of call
blocking in the case of voice transmission. These schemes adjust bandwidth allotment
according to traffic volume, and so are particularly suitable for bursty traffic.
Disadvantages
• Dynamic channel allocation schemes increase the computational as well as storage load
on the system.
Multiple access protocols
The data link layer is used in a computer network to transmit data
between two devices or nodes. It is divided into two sublayers:
data link control (the logical link control layer) and multiple
access resolution (the media access layer).
The upper sublayer (LLC) is responsible for flow control and
error control in the data link layer, and hence is termed logical
link control. The lower sublayer (MAC) is used to reduce
collisions and handle multiple access on a channel; hence it is
termed media access control or multiple access resolution.
What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit
data packets, the data link control is enough to handle the
channel.
Suppose there is no dedicated path to communicate or transfer
data between two devices. In that case, multiple stations
access the channel and transmit data over it simultaneously.
This may create collisions and crosstalk. Hence, a
multiple access protocol is required to reduce collisions
and avoid crosstalk between the channels.
Random Access Protocol(it is a sub part of the multiple access
protocol)
In this protocol, all stations have equal priority to send data over a
channel. In a random access protocol, no station depends on
another station, and no station controls another.
Depending on the channel's state (idle or busy), each station transmits its
data frame. However, if more than one station sends data over the channel
at the same time, there may be a collision or data conflict. Due to the
collision, the data frame packets may be lost or changed, and hence are
not received correctly at the receiver end.
• Following are the different methods of random-access protocols for
broadcasting frames on the channel.
1. Aloha
2. CSMA(carrier sense multiple access)
3. CSMA/CD(carrier sense multiple access/ collision detection)
4. CSMA/CA(carrier sense multiple access/collision avoidance)
1. ALOHA
Aloha is designed for wireless LAN (Local Area Network) but can also be used
in a shared medium to transmit data. In aloha, any station can transmit data to a
channel at any time. It does not require any carrier sensing.
Using this method, any station can transmit data across the network
whenever a data frame is available for transmission.
Aloha is the random access protocol having two categories that are pure aloha
and slotted aloha.
a. Pure ALOHA
In pure ALOHA, the stations transmit frames whenever they have data to send.
When two or more stations transmit simultaneously, there is collision and the
frames are destroyed.
In pure ALOHA, whenever any station transmits a frame, it expects the
acknowledgement from the receiver.
If acknowledgement is not received within specified time, the station assumes that
the frame (or acknowledgement) has been destroyed.
If the frame is destroyed because of collision the station waits for a random
amount of time called back-off time(Tb) and sends it again. This waiting time
must be random otherwise same frames will collide again and again.
Therefore pure ALOHA dictates that when time-out period passes, each station
must wait for a random amount of time before re-sending its frame. This
randomness will help avoid more collisions.
Since different stations wait for different amount of time, the probability of
further collision decreases.
The throughput of pure aloha is maximized when frames are of uniform
length(means fixed size).
In the figure, there are four stations that contend with one another for access
to the shared channel. All these stations are transmitting frames. Some of
these frames collide because multiple frames are in contention for the
shared channel. Only two frames, frame 1.1 and frame 2.2, survive. All
other frames are destroyed.
Whenever two frames try to occupy the channel at the same time, there
will be a collision and both will be damaged. If the first bit of a new frame
overlaps with just the last bit of a frame that is almost finished, both frames
will be totally destroyed and both will have to be retransmitted.
b. Slotted ALOHA
Slotted ALOHA is designed to improve on pure ALOHA's efficiency,
because pure ALOHA has a very high possibility of frame
collision.
In slotted ALOHA, the shared channel is divided into fixed time
intervals called slots. If a station wants to send a frame on a
shared channel, the frame can only be sent at the beginning of a slot,
and only one frame is allowed to be sent in each slot.
If a station is unable to send its data at the beginning of a slot,
it has to wait until the beginning of the next slot. However, there is
still a possibility of collision if two stations try to send at the
beginning of the same time slot.
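For reference, the standard throughput formulas for the two schemes (these are not derived in these notes): with offered load G, pure ALOHA achieves S = G * e^(-2G), while slotted ALOHA achieves S = G * e^(-G).

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): throughput of pure ALOHA at offered load G."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): throughput of slotted ALOHA at offered load G."""
    return G * math.exp(-G)
```

Pure ALOHA peaks at about 0.184 (at G = 0.5) while slotted ALOHA peaks at about 0.368 (at G = 1): halving the vulnerable period doubles the maximum throughput.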
2. CSMA(carrier sense multiple access)
Carrier sense multiple access is a media access protocol in which a
station senses the traffic on a channel (idle or busy) before transmitting
the data. If the channel is idle, the station can send data on the
channel. Otherwise, it must wait until the channel becomes idle. Hence,
it reduces the chance of a collision on the transmission medium.
CSMA Access Modes
I. 1-Persistent: In the 1-persistent mode of CSMA, each node first
senses the shared channel and, if the channel is idle, it
immediately sends the data. Otherwise, it keeps sensing the
channel continuously and broadcasts the frame
unconditionally as soon as the channel becomes idle.
II. Non-Persistent: In this access mode of CSMA, before
transmitting the data, each node senses the channel and, if the
channel is idle, it immediately sends the data. Otherwise, the station
waits for a random time (it does not sense continuously), and when
the channel is then found to be idle, it transmits the frame.
III. P-Persistent: It is a combination of the 1-persistent and non-
persistent modes. In the p-persistent mode, each node senses
the channel and, if the channel is idle, it sends a frame with
probability p. With probability q = 1 − p, it defers and tries
again in the next time slot.
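The three access modes can be summarised as a single decision function (a sketch; the caller supplies the random draw u):

```python
def csma_decision(mode, channel_idle, p=0.5, u=0.0):
    """What a station does in one sensing step under each CSMA mode.

    `u` is a uniform(0,1) random draw supplied by the caller (e.g. from
    random.random()); it is only used in the p-persistent mode.
    """
    if mode == '1-persistent':
        return 'transmit' if channel_idle else 'keep sensing'
    if mode == 'non-persistent':
        return 'transmit' if channel_idle else 'wait random time'
    if mode == 'p-persistent':
        if not channel_idle:
            return 'keep sensing'
        return 'transmit' if u < p else 'defer to next slot'
    raise ValueError('unknown mode: ' + mode)
```

Passing the random draw in as an argument keeps the sketch deterministic and easy to test.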
2.1. CSMA/ CD
It is a carrier sense multiple access / collision detection network
protocol for transmitting data frames. The CSMA/CD protocol works
within the medium access control layer. It first senses the shared
channel before broadcasting frames, and if the channel is idle, it
transmits a frame while monitoring whether the transmission succeeds.
If the frame is successfully received, the station sends the next frame. If
a collision is detected, the station sends a jam signal on the shared
channel to terminate the data transmission. After that, it waits for a
random time before resending the frame.
2.2. CSMA/ CA
It is a carrier sense multiple access / collision avoidance network
protocol for the transmission of data frames.
It works within the medium access control layer. When a data frame is
sent on a channel, the station listens to the channel to check whether it
is clear. If the station receives only a single signal (its own), the data
frame has been successfully transmitted to the receiver.
But if it gets two signals (its own and one from a colliding frame), a
collision of frames has occurred on the shared channel. A sender thus
detects that a collision occurred when it does not receive an
acknowledgment signal.
Following are the methods used in the CSMA/ CA to avoid the collision:
• Interframe space: In this method, the station waits for the channel to become
idle, and when it finds the channel idle, it does not immediately send the data.
Instead, it waits for some time; this time period is called the interframe space,
or IFS. The IFS time is often used to define the priority of a station.
• Contention window: In the contention window method, the total time is divided
into slots. When the station/sender is ready to transmit the data frame, it
chooses a random number of slots as its wait time. If the channel becomes busy
during the wait, it does not restart the entire process; it merely pauses the timer
and resumes it when the channel becomes idle again, sending the data packets
once the wait has elapsed.
• Acknowledgment: In the acknowledgment method, the sender retransmits the
data frame on the shared channel if the acknowledgment is not received in
time.
Collision-free protocols
When more than one station tries to transmit simultaneously via a shared
channel, the transmitted data is garbled. This event is called a collision. The
Medium Access Control (MAC) layer of the OSI model is responsible for
handling collisions of frames.
Collision-free protocols are devised so that collisions do not occur.
Protocols like CSMA/CD and CSMA/CA remove the possibility of collisions
once the transmission channel is acquired by a station. However, collisions can
still occur during the contention period if more than one station starts to
transmit at the same time. Collision-free protocols resolve contention in the
contention period, and so the possibility of collisions is eliminated.
Types of Collision – free Protocols
1. Bit-Map Protocol
Bit-map protocol is a collision free protocol that operates in the Medium Access
Control (MAC) layer of the OSI model. It resolves any possibility of collisions
while multiple stations are contending for acquiring a shared channel for
transmission.
In this protocol, a station that wishes to transmit broadcasts this desire before
the actual transmission. Protocols of this kind are called reservation protocols,
because they reserve channel ownership in advance and so prevent collisions.
Working Principle
In this protocol, the contention period is divided into N slots, where N is the total
number of stations sharing the channel. If a station has a frame to send, it sets the
corresponding bit in the slot.
Suppose that there are 10 stations, so the number of contention slots will be
10. If stations 2, 3, 8 and 9 wish to transmit, they will set the
corresponding slots to 1.
Once each station announces itself, one of them gets the channel
based upon any agreed criteria.
Generally, transmission is done in the order of the slot numbers.
Each station has complete knowledge, before transmission starts, of
whether every other station wants to transmit. So, all
possibilities of collisions are eliminated.
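One contention cycle of the bit-map protocol can be sketched as follows. This is an illustrative model only; the function name and the representation of the reservation map as a Python list are assumptions.

```python
def bitmap_cycle(wants_to_send, n_stations=10):
    """One contention period: station i sets contention slot i if it has a frame."""
    slots = [1 if i in wants_to_send else 0 for i in range(n_stations)]
    # Every station sees the complete reservation map, so frames are simply
    # sent in slot-number order and no collision is possible.
    order = [i for i, bit in enumerate(slots) if bit]
    return slots, order

slots, order = bitmap_cycle({2, 3, 8, 9})
# slots -> [0, 0, 1, 1, 0, 0, 0, 0, 1, 1]; order -> [2, 3, 8, 9]
```

This reproduces the 10-station example above: stations 2, 3, 8 and 9 set their slots, and transmission then proceeds in slot-number order.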
2. Binary Countdown
Binary countdown protocol is a collision free protocol that operates in the MAC
layer of the OSI model.
When more than one station tries to transmit simultaneously via a shared channel,
the transmitted data is garbled, an event called collision.
Collision-free protocols resolve channel access while the stations are contending
for the shared channel, thus eliminating any possibility of collisions.
A problem with the basic bit-map protocol is the overhead of one contention
bit slot per station. We can do better than that by using binary station
addresses.
Working Principle of Binary Countdown
In a binary countdown protocol, each station is assigned a binary
address. The binary addresses are bit strings of equal lengths. When a
station wants to transmit, it broadcasts its address to all the stations in
the channel, one bit at a time starting with the highest order bit.
In order to decide which station gets channel access, the broadcast address
bits are ORed together bit by bit. The higher numbered
station gets the channel access.
Example
Suppose that six stations contend for channel access which have the addresses:
1011, 0010, 0101, 1100, 1001 and 1101.
The iterative steps are −
All stations broadcast their most significant bit, i.e. 1, 0, 0, 1, 1, 1. Stations
0010 and 0101 see a 1 bit from other stations, and so they give up competing for the
channel.
The stations 1011, 1100, 1001 and 1101 continue. They broadcast their next bit,
i.e. 0, 1, 0, 1. Stations 1011 and 1001 see a 1 bit from other stations, and so they give
up competing for the channel.
The stations 1100 and 1101 continue. They broadcast their next bit, i.e. 0, 0. Since
both of them have same bit value, both of them broadcast their next bit.
The stations 1100 and 1101 broadcast their least significant bit, i.e. 0 and 1. Since
station 1101 has a 1 while the other has a 0, station 1101 gets access to the channel.
After station 1101 has completed frame transmission, or there is a time-out, the
next contention cycle starts.
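The arbitration steps above can be sketched as a short Python function. The wired-OR of each round is modeled with `max`; the function name and the fixed 4-bit address width are assumptions for this example.

```python
def binary_countdown(addresses, width=4):
    """Broadcast address bits MSB-first; a station drops out as soon as it
    sent a 0 in a round whose wired-OR was 1."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):
        sent = [(a >> bit) & 1 for a in contenders]
        wired_or = max(sent)          # what every station observes on the channel
        contenders = [a for a, b in zip(contenders, sent) if b == wired_or]
    return contenders[0]              # the highest address always wins

winner = binary_countdown([0b1011, 0b0010, 0b0101, 0b1100, 0b1001, 0b1101])
# winner == 0b1101
```

Running it on the six addresses of the example eliminates 0010 and 0101 in the first round, 1011 and 1001 in the second, and finally awards the channel to 1101, matching the walkthrough above.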
The procedure is illustrated as follows −
Wireless LANs
The 802.11 Protocol Stack
• The protocols used by all the 802 variants, including Ethernet,
have a certain commonality of structure.
• The physical layer corresponds to the OSI physical layer fairly
well, but the data link layer in all the 802 protocols is split into
two or more sublayers.
• In 802.11, the MAC (Medium Access Control) sublayer
determines how the channel is allocated, that is, who gets to
transmit next.
• Above it is the LLC (Logical Link Control) sublayer, whose job it
is to hide the differences between the different 802 variants and
make them indistinguishable as far as the network layer is
concerned.
The 802.11 Physical Layer
As we know, the physical layer is responsible for converting the data stream into signals; the bits of 802.11 networks can be
converted to radio waves or infrared waves.
• There are six different specifications of IEEE 802.11. These implementations, except the first one, operate in the industrial,
scientific and medical (ISM) bands. These three bands are unlicensed and their ranges are:
1. 902-928 MHz
2. 2.400-2.4835 GHz
3. 5.725-5.850 GHz
The different implementations of IEEE 802.11 are given below:
1. IEEE 802.11 infrared
• It uses diffused (not line of sight) infrared light in the range of 800 to 950 nm.
• It allows two different speeds: 1 Mbps and 2 Mbps.
• For the 1-Mbps data rate, 4 bits of data are encoded into a 16-bit code. This 16-bit code contains fifteen 0s and a
single 1.
• For the 2-Mbps data rate, a 2-bit code is encoded into a 4-bit code. This 4-bit code contains three 0s and a
single 1.
• The modulation technique used is pulse position modulation (PPM), i.e. for converting the digital signal to analog.
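The 16-PPM encoding for the 1-Mbps rate can be sketched as below. The exact symbol-to-position mapping used by the standard is not given here, so mapping the 4-bit value directly to the pulse position is an assumption for illustration.

```python
def ppm16_encode(nibble):
    """16-PPM at 1 Mbps: a 4-bit symbol selects which one of 16 positions
    carries the single 1; the other fifteen positions stay 0.
    (Symbol-to-position mapping here is illustrative, not the standard's table.)"""
    code = [0] * 16
    code[nibble] = 1
    return code

# Each codeword carries 4 data bits in 16 transmitted bits: fifteen 0s, one 1.
```

The same idea with a 4-position code (three 0s, one 1) gives the 4-PPM scheme used at 2 Mbps.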
2. IEEE 802.11 FHSS
• IEEE 802.11 uses the Frequency Hopping Spread Spectrum (FHSS) method for signal generation.
• This method uses 2.4 GHz ISM band. This band is divided into 79 subbands of 1MHz with some guard
bands.
• In this method, data is sent using one carrier frequency at one moment and a different carrier
frequency at the next. An idle time then follows in the communication, and this cycle is repeated at
regular intervals.
• A pseudo random number generator selects the hopping sequence.
• The allowed data rates are 1 or 2 Mbps.
• This method uses frequency shift keying (two-level or four-level) for modulation, i.e. for converting the digital
signal to analog.
3. IEEE 802.11 DSSS
• This method uses Direct Sequence Spread Spectrum (DSSS) method for signal generation. Each bit is
transmitted as 11 chips using a Barker sequence.
• DSSS uses the 2.4-GHz ISM band.
• It also allows the data rates of 1 or 2 Mbps.
• It uses phase shift keying (PSK) technique at 1 M baud for converting digital signal to analog signal.
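The Barker spreading step can be sketched as follows. The 11-chip Barker sequence is standard, but the sign convention (sequence for a 1 bit, inverted sequence for a 0 bit) is an assumption for this illustration.

```python
# The 11-chip Barker sequence, written as +/-1 chips.
BARKER_11 = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]

def dsss_spread(bits):
    """Spread each data bit into 11 chips: the Barker sequence for a 1 bit,
    its inversion for a 0 bit (sign convention is an assumption)."""
    chips = []
    for b in bits:
        chips.extend(c if b else -c for c in BARKER_11)
    return chips
```

Spreading turns each 1-Mbps data bit into an 11-Mchip/s sequence, which is what gives DSSS its resistance to narrowband interference.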
The other mode of CSMA/CA operation is based on MACAW and uses virtual channel
sensing.
Suppose A wants to send to B. C is a station within range of A (and possibly within range of B, but
that does not matter). D is a station within range of B but not within range of A.
1. When a station wants to transmit, it senses the channel to see whether it is
free or not.
2. If the channel is not free the station waits for back off time.
3. If the station finds a channel to be idle, the station waits for a period of time
called distributed interframe space (DIFS).
4. The station then sends a control frame called request to send (RTS), as
shown in the figure.
5. The destination station receives the frame and waits for a short period of
time called short interframe space (SIFS).
6. The destination station then sends a control frame called clear to send
(CTS) to the source station. This frame indicates that the destination station is
ready to receive data.
7. The sender then waits for SIFS time and sends data.
8. The destination waits for SIFS time and sends acknowledgement for the
received frame.
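The handshake steps above can be summarized as a small timeline. The timing constants below are hypothetical placeholders (real SIFS/DIFS values depend on the PHY in use), and the function is only a sketch of the ordering, not a protocol implementation.

```python
# Illustrative timing constants; real values depend on the PHY in use.
DIFS, SIFS = 50, 10   # microseconds (hypothetical)

def rts_cts_exchange(channel_idle):
    """Return the ordered (wait, event) pairs of one successful handshake,
    or None if the channel is busy and the station must back off."""
    if not channel_idle:
        return None
    return [
        (DIFS, "sender: RTS"),     # steps 3-4: wait DIFS, then request to send
        (SIFS, "receiver: CTS"),   # steps 5-6: receiver clears the sender
        (SIFS, "sender: DATA"),    # step 7: data follows after SIFS
        (SIFS, "receiver: ACK"),   # step 8: acknowledgment after SIFS
    ]
```

Because SIFS is shorter than DIFS, the CTS, data, and ACK frames always win the channel over any station that is still waiting out its DIFS.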
Collision avoidance
• 802.11 standard uses Network Allocation Vector (NAV) for collision avoidance.
• The procedure used in NAV is explained below:
1. Whenever a station sends an RTS frame, it includes the duration of time for which the station will occupy
the channel.
2. All other stations that are affected by the transmission create a timer called the network allocation vector
(NAV).
3. This NAV (created by other stations) specifies for how much time these stations must not check the
channel.
4. Before sensing the channel, each station checks its NAV to see whether it has expired.
5. If its NAV has expired, the station can send data; otherwise it has to wait.
• There can also be a collision during handshaking i.e. when RTS or CTS control frames are exchanged
between the sender and receiver. In this case following procedure is used for collision avoidance:
1. When two or more stations send an RTS to a station at the same time, their control frames collide.
2. If the CTS frame is not received, the sender assumes that there has been a collision.
3. In such a case, the sender waits for the backoff time and retransmits the RTS.
2. Point Coordination Function
• The PCF method is used in infrastructure networks. In this method, an access point (AP) is used to control the network activity.
• It is implemented on top of the DCF and is used for time-sensitive transmissions.
• PCF uses centralized, contention free polling access method.
• The AP performs polling for stations that want to transmit data. The various stations are polled one after the
other.
• To give priority to PCF over DCF, another interframe space called PIFS is defined. PIFS (PCF IFS) is shorter than
DIFS.
• If at the same time, a station is using DCF and AP is using PCF, then AP is given priority over the station.
• Due to this priority of PCF over DCF, stations that only use DCF may not gain access to the channel.
• To overcome this problem, a repetition interval is defined that is repeated continuously. This repetition interval
starts with a special control frame called beacon frame.
• When a station hears the beacon frame, it starts its NAV for the duration of the repetition interval.
The 802.11 Frame Structure:
The 802.11 standard defines three different classes of frames on the wire: data, control,
and management. Each of these has a header with a variety of fields used within the MAC
sublayer.