
UNIT-2

5 MARK QUESTIONS
1. What are headers and trailers and how do they get added and removed?
Headers and trailers are concepts from the OSI model. A header is an information structure that identifies the information that follows, such as a block of bytes in a communication. A trailer is the information that occupies several bytes at the end of the block of data being transmitted. Both contain error-checking data that is useful for confirming the accuracy and status of the transmission.

During data communication the sender appends the header (and, at the data link layer, the trailer) and passes the unit to the layer below, while the receiver removes them and passes the data to the layer above. Headers are added at layers 6, 5, 4, 3 and 2, while the trailer is added only at layer 2.
2. State the requirements of CRC.
CRC, or Cyclic Redundancy Check, is a method of detecting accidental changes/errors in the communication channel.
CRC uses a generator polynomial which is available on both the sender and receiver side. An example generator polynomial is x^3 + x + 1, which represents the key 1011. Another example is x^2 + 1, which represents the key 101.
n : number of bits in the data to be sent from the sender side.
k : number of bits in the key obtained from the generator polynomial.
Requirements of CRC:
Specification of a CRC code requires the definition of a so-called generator polynomial. This polynomial becomes the divisor in a polynomial long division, which takes the message as the dividend; the quotient is discarded and the remainder becomes the result.
3. If a 7-bit Hamming code is received as 1110101, show that the code word has an error. Also rectify the error in this code.
To check for errors in the received code, we recompute the check bits for the received code word and compare them with the received parity bits. Assuming even parity, the parity bits sit at the positions that are powers of 2 (i.e., 1, 2, 4), and each check is the XOR of all bit positions whose binary representation includes that power of 2.
Let's first write the received code with the positions labelled:
Position: 1 2 3 4 5 6 7
Bit:      1 1 1 0 1 0 1
The checks are calculated as follows:
Check c1 (positions 1, 3, 5, 7): 1 ⊕ 1 ⊕ 1 ⊕ 1 = 0
Check c2 (positions 2, 3, 6, 7): 1 ⊕ 1 ⊕ 0 ⊕ 1 = 1
Check c4 (positions 4, 5, 6, 7): 0 ⊕ 1 ⊕ 0 ⊕ 1 = 0
The syndrome c4 c2 c1 = 010 = 2 is non-zero, so the code word has an error, and the error is at position 2.
To correct the error, we flip the bit at position 2 of the received code, which gives the corrected code word:
Position: 1 2 3 4 5 6 7
Bit:      1 0 1 0 1 0 1
Therefore, there was an error in the received code at position 2, and the corrected code word is 1010101. (A minimal sketch of this check is given below.)
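A minimal Python sketch of the syndrome check above, assuming the standard even-parity Hamming(7,4) layout with parity bits at positions 1, 2 and 4; the function name and bit-string representation are illustrative.

```python
# Hamming(7,4) single-error check, assuming even parity and
# parity bits at positions 1, 2 and 4 (1-indexed, left to right).

def hamming7_syndrome(codeword: str) -> int:
    """Return the 1-indexed error position (0 means no error)."""
    bits = [int(b) for b in codeword]          # bits[0] is position 1
    def parity(positions):                     # XOR of the bits at the given positions
        return sum(bits[p - 1] for p in positions) % 2
    c1 = parity([1, 3, 5, 7])
    c2 = parity([2, 3, 6, 7])
    c4 = parity([4, 5, 6, 7])
    return c4 * 4 + c2 * 2 + c1                # syndrome read as c4 c2 c1

received = "1110101"
pos = hamming7_syndrome(received)              # -> 2, so position 2 is in error
corrected = list(received)
if pos:
    corrected[pos - 1] = "1" if corrected[pos - 1] == "0" else "0"
print(pos, "".join(corrected))                 # 2 1010101
```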
4. If the minimum Hamming distance is 9, then how many errors can be detected and corrected by using the Hamming code?
With minimum Hamming distance d = 9, the code can detect up to d - 1 = 8 errors and correct up to ⌊(d - 1)/2⌋ = 4 errors.

5.
6. For an (11,5) Hamming code, how many errors can be detected and corrected?

Here n = 11 and k = 5, so the number of redundant bits is r = n - k = 6. Taking the minimum distance of a Hamming code to be 3, the code can detect up to 2 errors and correct 1 error.

7. In a communication system, data is transmitted in frames of 1200 bits each. The system uses a Hamming code for error correction with a Hamming distance of 4. Determine the maximum number of errors that can be corrected by this code per frame.
With d = 4 the code can correct at most ⌊(d - 1)/2⌋ = 1 error per frame (and detect up to 3 errors); the frame length of 1200 bits does not change this per-frame limit. The sketch below applies the same formulas to questions 4, 6 and 7.
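A small sketch of the standard distance formulas (detect d - 1 errors, correct ⌊(d - 1)/2⌋), applied to questions 4, 6 and 7; purely illustrative.

```python
# Errors detectable / correctable from a code's minimum Hamming distance d:
#   detectable  s = d - 1
#   correctable t = floor((d - 1) / 2)

def detect_correct(d):
    return d - 1, (d - 1) // 2

print(detect_correct(9))   # (8, 4)  -> Q4: detect 8, correct 4 errors
print(detect_correct(3))   # (2, 1)  -> Q6: a Hamming code (d = 3) detects 2, corrects 1
print(detect_correct(4))   # (3, 1)  -> Q7: d = 4 corrects at most 1 error per frame
```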

8. Consider a communication system transmitting data in frames of 1000 bits each. The
system uses a CRC (Cyclic Redundancy Check) for error detection and correction. The
CRC polynomial used is x^5 + x^3 + 1. If the original message is 10,000 bits long and is
divided into 10 frames for transmission, how many extra bits are added for CRC?
Answer:
To calculate the number of extra bits added for CRC, we first need to determine the size of
the CRC code generated by the polynomial x^5 + x^3 + 1.

The degree of the polynomial is 5, so the CRC code will be 5 bits long.

Now, since the system is transmitting data in frames of 1000 bits each, and each frame
requires CRC, we need to calculate the total number of CRC bits for all 10 frames.

Total CRC bits = Number of frames × CRC bits per frame

= 10 frames × 5 bits per frame

= 50 bits

Therefore, 50 extra bits are added for CRC in the transmission of 10 frames.

9. In the CRC method, assume that the given frame for transmission is 1101011011 and the generator polynomial is x^4 + x + 1. Find the encoded word sent from the sender side. [GATE]
The generator x^4 + x + 1 corresponds to the divisor 10011 (5 bits), so 4 zero bits are appended to the frame: 11010110110000. Dividing this by 10011 using modulo-2 division leaves the remainder 1110. The remainder replaces the appended zeros, so the encoded word transmitted is 1101011011 1110 = 11010110111110. (The sketch below reproduces this division.)
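A short sketch of the modulo-2 long division used by CRC; the function name is illustrative. With the frame 1101011011 and divisor 10011 it reproduces the remainder 1110 and the codeword derived above, and the receiver-side check verifies an all-zero remainder. The same helper, applied to the worked example in the 10-mark section below (data 100100, key 1101), reproduces the remainder 001.

```python
# Modulo-2 (XOR) long division used by CRC. Names are illustrative.

def crc_remainder(data: str, generator: str) -> str:
    k = len(generator)
    padded = list(data + "0" * (k - 1))        # append degree-many zero bits
    for i in range(len(data)):
        if padded[i] == "1":                   # only divide when the leading bit is 1
            for j in range(k):
                padded[i + j] = str(int(padded[i + j]) ^ int(generator[j]))
    return "".join(padded[-(k - 1):])          # last (k - 1) bits are the remainder

data, gen = "1101011011", "10011"              # generator 10011 is x^4 + x + 1
rem = crc_remainder(data, gen)                 # -> '1110'
codeword = data + rem                          # -> '11010110111110'
assert crc_remainder(codeword, gen) == "0000"  # receiver check: all-zero remainder
print(rem, codeword)
```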

10.Compare ALOHA with slotted ALOHA.


Difference between Pure Aloha and Slotted Aloha

The following points highlight the important differences between Pure Aloha and Slotted Aloha.

Time slot: In Pure Aloha, any station can transmit data at any time. In Slotted Aloha, any station can transmit data only at the beginning of a time slot.
Time: In Pure Aloha, time is continuous and is not globally synchronized. In Slotted Aloha, time is discrete and is globally synchronized.
Vulnerable time: In Pure Aloha, the vulnerable (susceptible) time is equal to 2 × Tt. In Slotted Aloha, the vulnerable time is equal to Tt.
Maximum efficiency: Pure Aloha has a maximum efficiency of 18.4%; Slotted Aloha has a maximum efficiency of 36.8%.
Number of collisions: Pure Aloha does not reduce the number of collisions. Slotted Aloha reduces the number of collisions to half and thus doubles the efficiency.

11. What is piggybacking?

Piggybacking is the process of attaching an acknowledgment to a data packet that is about to be sent. The concept is explained below:

Suppose there is two-way communication between two devices A and B. When a data frame is sent by A to B, device B does not send the acknowledgment to A immediately; it waits until it has its own frame to transmit. The delayed acknowledgment is then sent by B together with that data frame. This method of attaching the delayed acknowledgment to an outgoing data frame is known as piggybacking.
12. Consider the use of 10 kbit frames on a 10 Mbps satellite channel with 270 ms delay. What is the link utilization for the stop-and-wait ARQ technique, assuming P = 10^-3?

For stop-and-wait ARQ, link utilisation U = (1 - P) / (1 + 2a), where a = propagation time / transmission time and P is the frame error probability.

In calculating the transmission time, note that in stop-and-wait ARQ only one frame can be sent at a time, so the data size is taken as one frame.

Transmission time = frame size / data rate

= (10 x 10^3 bits) / (10 x 10^6 bps)

= 1 ms

Propagation delay (given) = 270 ms

So a = 270 / 1 = 270

Link utilisation = (1 - 10^-3) / (1 + 2 x 270)

= 0.999 / 541

≈ 0.0018, i.e. about 0.18 %
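A quick sketch of the utilisation calculation above; variable names are illustrative, and the (1 - P) factor follows the stop-and-wait ARQ formula used in the answer.

```python
# Stop-and-wait ARQ utilisation for the figures above:
# a = propagation delay / transmission delay, U = (1 - P) / (1 + 2a).

frame_bits = 10e3                      # 10 kbit frame
rate_bps   = 10e6                      # 10 Mbps channel
t_trans = frame_bits / rate_bps        # 1e-3 s = 1 ms
t_prop  = 270e-3                       # 270 ms one-way satellite delay
a = t_prop / t_trans                   # 270
P = 1e-3                               # frame error probability
U = (1 - P) / (1 + 2 * a)              # ~0.0018, i.e. about 0.18 %
print(a, U)
```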

13. Measurements of a slotted ALOHA channel with an infinite number of users show that 10 percent of the slots are idle.
(i) What is the channel load?
(ii) What is the throughput?

(a) What is the channel load, G?
Ans: A slot is idle when 0 frames are generated in that slot time. Therefore P[idle] = e^-G = 0.1.

-G = ln(0.1)

G = 2.303

(b) What is the throughput?

Ans: S = G·e^-G = 2.303 × 0.1 = 0.2303.

(c) Is the channel underloaded or overloaded?

Ans: Slotted Aloha obtains its optimal throughput at G = 1. For G > 1, too many frames are generated per slot, which is an overloaded situation. Here G = 2.303 and S = 0.2303 < Smax = 0.368, with G > S. Therefore the channel is overloaded.
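A small sketch of the slotted ALOHA calculation above (P(idle) = e^-G, S = G·e^-G); names are illustrative.

```python
# Slotted ALOHA with Poisson arrivals: P(idle slot) = e^{-G}. Given 10 % idle slots,
# solve for the offered load G and the throughput S = G * e^{-G}.

import math

p_idle = 0.10
G = -math.log(p_idle)      # ln(10) ≈ 2.303 frames per slot
S = G * math.exp(-G)       # ≈ 0.2303 (below the optimum 1/e ≈ 0.368)
print(G, S)
```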

14. Explain transmission delay in flow control.

Transmission delay (Td)

The time taken by the sender to put all the bits of a frame onto the wire is called the transmission delay. It is calculated by dividing the data size (D) by the bandwidth (B) of the channel on which the data has to be sent.

Td = D / B

15.Write a note on round trip time (RTT) in networking.

Round-trip time (RTT) is the duration, typically measured in milliseconds (ms), that it takes for a network request to go from a starting point to a destination and back again to the starting point. RTT is thus the total time taken for the request to travel over the network plus the time for the response to travel back. A lower RTT improves the experience of using an application and makes the application more responsive.

16. Define the relationship between transmission delay and propagation delay, if the
efficiency is at least 50% in STOP N WAIT protocol.
Check class notes
17. Calculate the total number of transmissions that are required to send 10 data
packets through GBN-3 and every 5th packet is lost.
Check class notes
18. Find out the window size and minimum sequence number in the sliding window protocol, if transmission delay (Tt) = 1 ms and propagation delay (Tp) = 24.5 ms. (ms = milliseconds).
Check class notes

10 MARKS QUESTIONS
1.Discuss the issues in the data link layer and about its protocol on the basis of
layering principle.

The data link layer in the OSI (Open System Interconnections) Model, is in between the
physical layer and the network layer. This layer converts the raw transmission facility
provided by the physical layer to a reliable and error-free link.

The main functions and the design issues of this layer are

Providing services to the network layer


Framing
Error Control
Flow Control
Services to the Network Layer

In the OSI Model, each layer uses the services of the layer below it and provides services
to the layer above it. The data link layer uses the services offered by the physical layer.
The primary function of this layer is to provide a well defined service interface to
network layer above it.

The types of services provided can be of three types −

Unacknowledged connectionless service


Acknowledged connectionless service
Acknowledged connection-oriented service
Framing

The data link layer encapsulates each data packet from the network layer into frames that
are then transmitted.

A frame has three parts, namely −

Frame Header
Payload field that contains the data packet from network layer
Trailer

Error Control

The data link layer ensures error free link for data transmission. The issues it caters to
with respect to error control are −

Dealing with transmission errors


Sending acknowledgement frames in reliable connections
Retransmitting lost frames
Identifying duplicate frames and deleting them
Controlling access to shared channels in case of broadcasting
Flow Control

The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not be
able to handle it. There will be frame losses even if the transmission is error-free. The two
common approaches for flow control are −

Feedback based flow control


Rate based flow control
2. Sender’s data D = 100100, CRC generator polynomial = x^3 + x^2 + 1. Apply the CRC algorithm and perform the calculations at both the sender and receiver end.
Data word to be sent - 100100
Key - 1101 [the generator polynomial x^3 + x^2 + 1]
Sender Side:

Since the generator is of degree 3, append three zero bits to the data word, giving 100100000, and divide it by 1101 using modulo-2 division. The remainder is 001, and hence the encoded data sent is 100100001.

Receiver Side:
The code word received at the receiver side is 100100001. Dividing 100100001 by 1101 (modulo-2) leaves a remainder of all zeros. Hence, the data received has no error.
3. Discuss framing in data communication systems. Explain how framing helps in the
reliable transmission of data and discuss at least two common framing techniques used
in practice.

Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames are comparable to the packets of energy called photons in the case of light, and frames are used continuously in the Time Division Multiplexing process. In a point-to-point connection between two computers or devices, data is transmitted over the wire as a stream of bits; however, these bits must be framed into discernible blocks of information. Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own frame structures. Frames have headers that contain information such as error-checking codes.

The data link layer takes the message from the sender and delivers it to the receiver by adding the sender’s and receiver’s addresses. The advantage of using frames is that data is broken up into recoverable chunks that can easily be checked for corruption.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the
frame, the length of the frame itself acts as a delimiter.
 Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
 Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the
beginning of the next frame to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the
length of the frame. Used in Ethernet(802.3). The problem with this is that
sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of
the frame. Used in Token Ring. The problem with this is that ED can occur in
the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If data
contains ED then, a byte is stuffed into data to differentiate it from ED.
Let ED = “$” –> if data contains ‘$’ anywhere, it can be escaped using ‘\O’
character.
–> if data contains ‘\O$’ then, use ‘\O\O\O$'($ is escaped using \O and \O is
escaped using \O).

Disadvantage – It is a costly and obsolete method.


2. Bit Stuffing: Let ED = 01111 and suppose the data also contains 01111.
–> The sender stuffs a bit to break the pattern, i.e. it inserts a 0, so the data becomes 011101.
–> The receiver receives the frame.
–> When the receiver sees 011101 in the data, it removes the stuffed 0 and reads the data. (A sketch of the standard HDLC bit-stuffing rule is given after the examples below.)
Examples:
 If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing.
--> 011010001101100
 If Data –> 110001001 and ED –> 1000 then, find data after bit stuffing?
--> 11001010011
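The examples above use an ad-hoc end delimiter; as an illustration of the same idea, here is a sketch of bit stuffing using the common HDLC convention (flag 01111110, stuff a 0 after five consecutive 1s). Function names are illustrative.

```python
# Bit stuffing as done in HDLC: the flag is 01111110, so the sender inserts a 0
# after every run of five consecutive 1s in the payload; the receiver removes it.

def stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:                 # five 1s in a row: insert a 0 and reset
            out.append("0")
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:                 # the next bit is a stuffed 0: skip it
            i += 1
            run = 0
        i += 1
    return "".join(out)

data = "0111110111111"
assert unstuff(stuff(data)) == data
print(stuff(data))                   # 011111001111101
```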
4.Discuss the importance of error detection and correction mechanisms in data
communication systems.

The data-link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.

Types of Errors

There may be three types of errors:

 Single bit error
Only one bit of the frame, anywhere in it, is corrupted.
 Multiple bits error
The frame is received with more than one bit in a corrupted state, not necessarily adjacent.
 Burst error
The frame contains more than one consecutive corrupted bits.

Error control mechanism may involve two possible ways:

 Error detection
 Error correction
Error Detection

Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm that the bits received at the other end are the same as they were sent. If the counter-check at the receiver's end fails, the bits are considered corrupted.

Parity Check

One extra bit is sent along with the original bits to make number of 1s either even in case of
even parity, or odd in case of odd parity.

The sender, while creating a frame, counts the number of 1s in it. For example, if even parity is used and the number of 1s is even, then a bit with value 0 is added; this way the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.

The receiver simply counts the number of 1s in the frame. If the count of 1s is even and even parity is used, the frame is considered not corrupted and is accepted. If the count of 1s is odd and odd parity is used, the frame is likewise considered not corrupted.

If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more than one bit is erroneous, it is very hard for the receiver to detect the error.
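A minimal sketch of even-parity generation and checking as described above; function names are illustrative.

```python
# Even-parity check: the parity bit makes the total number of 1s even.

def add_even_parity(data: str) -> str:
    parity = str(data.count("1") % 2)      # 1 if the count of 1s is odd, else 0
    return data + parity

def check_even_parity(frame: str) -> bool:
    return frame.count("1") % 2 == 0       # a single flipped bit makes this fail

frame = add_even_parity("1011001")         # -> '10110010'
print(frame, check_even_parity(frame))     # 10110010 True
```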

Cyclic Redundancy Check (CRC)

CRC is a different approach to detect if the received frame contains valid data. This technique
involves binary division of the data bits being sent. The divisor is generated using
polynomials. The sender performs a division operation on the bits being sent and calculates
the remainder. Before sending the actual bits, the sender adds the remainder at the end of the
actual bits. Actual data bits plus the remainder is called a codeword. The sender transmits
data bits as codewords.

At the other end, the receiver performs the division operation on the codeword using the same CRC divisor. If the remainder contains all zeros, the data bits are accepted; otherwise it is concluded that some data corruption occurred in transit.

Error Correction

In the digital world, error correction can be done in two ways:

 Backward Error Correction When the receiver detects an error in the data received,
it requests back the sender to retransmit the data unit.
 Forward Error Correction When the receiver detects some error in the data
received, it executes error-correcting code, which helps it to auto-recover and to
correct some kinds of errors.

The first one, Backward Error Correction, is simple and can only be efficiently used where
retransmitting is not expensive. For example, fiber optics. But in case of wireless
transmission retransmitting may cost too much. In the latter case, Forward Error Correction is
used.

To correct an error in a data frame, the receiver must know exactly which bit in the frame is corrupted. To locate the bit in error, redundant bits are used as parity bits. For example, if we take an ASCII word (7 data bits), there are 8 kinds of information we may need to convey: which of the seven bit positions is in error, plus the case of no error.

For m data bits, r redundant bits are used. r bits can indicate 2^r different combinations of information. In an (m + r)-bit codeword, the r bits themselves may also get corrupted, so the r bits must be able to identify all m + r bit locations plus the no-error case, i.e. 2^r ≥ m + r + 1 (see the sketch below).
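A tiny sketch that finds the smallest r satisfying 2^r ≥ m + r + 1; names are illustrative.

```python
# Smallest r with 2**r >= m + r + 1 (m data bits, r redundant bits).

def redundant_bits(m: int) -> int:
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(7))    # 4  -> a 7-bit ASCII word needs 4 check bits (11-bit codeword)
print(redundant_bits(4))    # 3  -> the classic Hamming(7,4) code
```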

5.Differentiate between cyclic redundancy check (CRC) and checksum methods for
error detection.

Checksum:
Checksum is a widely used method for the detection of errors in data. This method is more
reliable than other methods of detection of errors. This approach uses Checksum
Generator on Sender side and Checksum Checker on Receiver side.
CRC:
CRC, or Cyclic Redundancy Check, is an error detection method used by upper layer protocols. It uses a polynomial generator, available on both the sender and receiver side, of the form x^3 + x^2 + x + 1.
Difference between Checksum and CRC:
1. Checksum is not a thorough concept for the detection and reporting of errors; CRC is a thorough concept for the detection and reporting of errors.
2. Checksum is capable of detecting a single-bit change in the data; CRC is capable of detecting double-bit errors as well.
3. The checksum method was developed after the CRC method; CRC is the older method.
4. With checksum, errors can be easily detected; CRC follows a more complex computation method for error detection.
5. Checksum can detect fewer errors than CRC; due to its complex computation, CRC can detect more errors.
6. Checksum is based on an addition approach; CRC is based on modulo-2 (polynomial) division.
7. Checksum is widely used for data validation in software implementations; CRC is widely used for data validation during transmission.
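For illustration of the addition-based approach in the comparison above, here is a sketch of a 16-bit one's-complement (Internet-style) checksum; the function name and sample message are illustrative, and the CRC side of the comparison is shown in the earlier modulo-2 division sketch.

```python
# 16-bit one's-complement (Internet-style) checksum over a byte string.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                      # pad to a whole number of 16-bit words
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                 # one's complement of the sum

msg = b"HELLO"
print(hex(internet_checksum(msg)))
# Receiver side: summing the data words plus the transmitted checksum gives 0xFFFF.
```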
6. Explain briefly how line coding is implemented in FDDI and describe its frame format.
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for transmission
of data in local area network (LAN) over fiber optic cables. It is applicable in large LANs
that can extend up to 200 kilometers in diameter.
Features
 FDDI uses optical fiber as its physical medium; bits are encoded using 4B/5B block coding followed by NRZ-I line coding.
 It operates in the physical and medium access control (MAC) layers of the Open Systems Interconnection (OSI) network model.
 It provides high data rate of 100 Mbps and can support thousands of users.
 It is used in LANs up to 200 kilometers for long distance voice and multimedia
communication.
 It uses ring based token passing mechanism and is derived from IEEE 802.4 token bus
standard.
 It contains two token rings, a primary ring for data and token transmission and a
secondary ring that provides backup if the primary ring fails.
 FDDI technology can also be used as a backbone for a wide area network (WAN).


Frame Format

The frame format of FDDI is similar to that of the token bus frame.
The fields of an FDDI frame are −

 Preamble: 1 byte for synchronization.


 Start Delimiter: 1 byte that marks the beginning of the frame.
 Frame Control: 1 byte that specifies whether this is a data frame or control frame.
 Destination Address: 2-6 bytes that specifies address of destination station.
 Source Address: 2-6 bytes that specifies address of source station.
 Payload: A variable length field that carries the data from the network layer.
 Checksum: 4 bytes frame check sequence for error detection.
 End Delimiter: 1 byte that marks the end of the frame.
7. Discuss different carrier sense protocols. How are they different from collision protocols?
The random access MAC protocols are ALOHA (pure and slotted) and carrier-sense multiple access (CSMA) with its variants: CSMA/collision detection (CSMA/CD), CSMA/collision avoidance (CSMA/CA), and the non-persistent and p-persistent sensing strategies. The maximum throughput of the slotted ALOHA protocol is about 36% of the data rate of the channel. Unlike the pure collision (ALOHA) protocols, the carrier sense protocols listen to the channel before transmitting, which reduces the number of collisions.

CSMA is a mechanism that senses the state of the shared channel to prevent collisions or to recover data packets from a collision. It is also used to control the flow of data packets over the network so that packets are not lost and data integrity is maintained. In CSMA, when two or more data packets are sent at the same time on a shared channel, there is a chance of collision. Due to the collision, the receiver does not get any usable information from the senders' data packets, and the lost information needs to be resent so that the receiver can get it. Therefore, we need to sense the channel before transmitting data packets on the network. CSMA is divided into two variants, CSMA CA (Collision Avoidance) and CSMA CD (Collision Detection).
CSMA CD

The Carrier Sense Multiple Access / Collision Detection protocol is used to detect a collision in the media access control (MAC) layer. Once a collision is detected, CSMA CD immediately stops the transmission by sending a signal, so that the sender does not waste time sending the rest of the data packet. If a collision is detected while stations are transmitting, CSMA CD immediately sends a jam signal to stop the transmission and waits for a random time interval before transmitting another data packet. If the channel is then found free, it immediately sends the data.

Advantage and Disadvantage of CSMA CD

Advantages of CSMA CD:

1. It is used for collision detection on a shared channel within a very short time.
2. CSMA CD is better than CSMA for collision detection.
3. CSMA CD is used to avoid any form of waste transmission.
4. When necessary, it is used to use or share the same amount of bandwidth at each
station.
5. It has lower CSMA CD overhead as compared to the CSMA CA.

Disadvantage of CSMA CD

1. It is not suitable for long-distance networks because as the distance increases, CSMA CD's efficiency decreases.
2. It can detect collisions only up to about 2500 meters; beyond this range, it cannot detect collisions.
collisions.
3. When multiple devices are added to a CSMA CD, collision detection performance is
reduced.
CSMA/CA

CSMA/CA stands for Carrier Sense Multiple Access with Collision Avoidance. It is a network protocol that tries to avoid a collision rather than allowing it to occur, and it does not deal with the recovery of packets after a collision. It is similar to the CSMA CD protocol and operates in the media access control layer. In CSMA CA, whenever a station wants to send a data frame on a channel, it first checks whether the channel is in use. If the shared channel is busy, the station waits until the channel becomes idle. Hence, we can say that it reduces the chance of collisions and makes better use of the medium to send data packets more efficiently.

Advantage and Disadvantage of CSMA CA

Advantage of CSMA CA

1. When the size of data packets is large, the chances of collision in CSMA CA is less.
2. It controls the data packets and sends the data when the receiver wants to send them.
3. It is used to prevent collision rather than collision detection on the shared channel.
4. CSMA CA avoids wasted transmission of data over the channel.
5. It is best suited for wireless transmission in a network.
6. It avoids unnecessary data traffic on the network with the help of the RTS/ CTS
extension.

The disadvantage of CSMA CA

1. Sometimes CSMA/CA involves long waiting times before the data packet can be transmitted.
2. It consumes more bandwidth at each station.
3. Its efficiency is lower than that of CSMA CD.

Difference between CSMA CD and CSMA CA

1. CSMA CD is the type of CSMA used to detect a collision on a shared channel; CSMA CA is the type of CSMA used to avoid a collision on a shared channel.
2. CSMA CD is the collision detection protocol; CSMA CA is the collision avoidance protocol.
3. CSMA CD is used in 802.3 (wired Ethernet) networks; CSMA CA is used in 802.11 (wireless) networks.
4. CSMA CD works in wired networks; CSMA CA works in wireless networks.
5. CSMA CD is effective after a collision has been detected on the network; CSMA CA is effective before a collision occurs.
6. Whenever a data packet conflicts on the shared channel, CSMA CD resends the data frame; CSMA CA instead waits until the channel is idle and tries to avoid the collision in the first place rather than recovering after it.
7. CSMA CD minimizes the recovery time; CSMA CA minimizes the risk of collision.
8. The efficiency of CSMA CD is high compared to plain CSMA; the efficiency of CSMA CA is similar to that of CSMA.
9. CSMA CD is more popular than the CSMA CA protocol; CSMA CA is less popular than CSMA CD.

8. Write short notes on the following:

i. Stop and Wait ARQ
ii. Sliding Window Protocol
iii. Go Back N ARQ

i) Stop and Wait ARQ
 It is used in connection-oriented communication.
 It offers error control and flow control.
 It is used in the Data Link and Transport layers.
 Stop-and-Wait ARQ essentially implements the Sliding Window Protocol concept with a window size of 1.

Useful Terms:

 Propagation Delay: Amount of time taken by a packet to make a physical


journey from one router to another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
 RoundTripTime (RTT) = Amount of time taken by a packet to reach the receiver
+ Time taken by the Acknowledgement to reach the sender
 TimeOut (TO) = 2* RTT
 Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 255 seconds)
Simple Stop and Wait
Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send the next packet only after receiving the acknowledgement for the previous one.
Receiver:

Rule 1) Send an acknowledgement after receiving and consuming a data packet.
Rule 2) Only after a packet has been consumed is its acknowledgement sent (flow control).

ii)
Sliding Window Protocol

The sliding window is a technique for sending multiple frames at a time. It controls the data
packets between the two devices where reliable and gradual delivery of data frames is
needed. It is also used in TCP (Transmission Control Protocol).

In this technique, each frame is sent with a sequence number. The sequence numbers are used to find the missing frames at the receiver end. The purpose of the sliding window technique is also to avoid duplicate data, which is why it uses sequence numbers.

Types of Sliding Window Protocol

Sliding window protocol has two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ
iii)Go-Back-N ARQ

Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a


data link layer protocol that uses a sliding window method. In this, if any frame is corrupted
or lost, all subsequent frames have to be sent again.


The size of the sender window is N in this protocol. For example, Go-Back-8, the size of the
sender window, will be 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does not accept a corrupted frame. When the sender's timer expires, the sender sends the frames again, starting from the lost or corrupted one.


Selective Repeat ARQ

Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request. It is
a data link layer protocol that uses a sliding window method. The Go-back-N ARQ protocol
works well if it has fewer errors. But if there is a lot of error in the frame, lots of bandwidth
loss in sending the frames again. So, we use the Selective Repeat ARQ protocol. In this
protocol, the size of the sender window is always equal to the size of the receiver window.
The size of the sliding window is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it; it sends a negative acknowledgment to the sender. The sender retransmits that frame as soon as it receives the negative acknowledgment, without waiting for any time-out.
9. Assume we want to send data from S to R and there are 2 routers in between. What will be the total time taken if the total number of packets is 5? Given: Tp = 0 ms, data size = 1000 bytes, BW = 1 Mbps, header of the packet = 100 bytes.
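The answer is not worked out here, so the following is only a sketch under stated assumptions: each of the 5 packets carries 1000 bytes of data plus a 100-byte header, all three links (S -> R1 -> R2 -> R) run at 1 Mbps with Tp = 0, the routers are store-and-forward, and the packets are sent back to back.

```python
# Sketch for question 9, under the assumptions stated above:
# packet = 1000 B data + 100 B header, Tp = 0 on every link,
# store-and-forward through the two routers (3 hops), packets pipelined.

packet_bits = (1000 + 100) * 8        # 8800 bits per packet (assumption)
bw_bps      = 1e6                     # 1 Mbps
hops, packets = 3, 5

t_frame = packet_bits / bw_bps        # 8.8 ms per packet per hop
total = (hops + packets - 1) * t_frame
print(total * 1e3, "ms")              # 7 * 8.8 ms = 61.6 ms
```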
10.Explain CSMA/CD in detail.
check class notes
11.

Refer class Notes.

12.
Refer Class Notes.
13.What is looping problem in Switches?Explain Spanning Tree Algorithm to solve
it using suitable Example?

Networks are built using multiple, interconnecting switches that connect devices and transfer
data. However, if two switches aren't connected properly, something called a switching loop
is created. To prevent this from happening, it's important to know why and how they occur.

In a typical local area network (LAN), it's common for multiple switches to be interconnected
for redundancy, meaning more than one path is possible between two switches. Redundancy
is a safety measure that ensures the network won't fail completely if a link breaks. However, with interconnected switches comes a potential problem: a Layer 2 switching loop.

A switching loop, or bridge loop, occurs when more than one path exists between the source
and destination devices. As broadcast packets are sent by switches through every port, the
switch repeatedly sends broadcast messages, flooding the network and creating a broadcast
storm.

When switching loops start, they don't stop; there's no time-to-live (TTL) value on the broadcast packets, meaning they'll keep bouncing around forever between the switches. And herein lies the real problem: as the loop continues, the traffic builds up and blocks communication between switches.
Switches determine where a packet goes based on the destination MAC address; every device
has a unique MAC address, so every packet is directed to a single place. When multiple MAC
addresses broadcast to all devices in the network, it can become problematic. This is
especially true for switch loops, where all broadcasts and multicasts repeat around the looped
network path in rapid succession, very quickly bringing down the network.

If a broadcast packet is sent out over a network with a loop, it will continue to rebroadcast the
message as it loops around the network. As more traffic packets pass through the network,
they're added to the loop; soon, the network is unable to communicate at all because it's
spending all of its time sending data packets through the loops.

Fortunately, there's a way to prevent this from happening using the Spanning Tree Protocol.

Preventing switching loops with the Spanning Tree Protocol


The Spanning Tree Protocol (STP) is a networking standard designed specifically to prevent
loops in Layer 2 switching, and to select the fastest network path if there are redundant links
in the network.

Here's how STP works:

 First, all of the switches in the STP domain elect a root bridge, or root switch. The
root bridge acts as a point of reference for every other switch in the network. The root
bridge's ports remain in forwarding mode, and there can only be one root bridge in
any network using STP.
 On all of the other switches, the interface closest to the root switch is the one
designated as the root port. The root port allows traffic to traverse that particular
interface, while other ports on this switch that allow traffic are called designated ports.
 If multiple ports are connected to the same switch or LAN segment, the switch selects
the port with the shortest path and marks it as the designated port.
 Once the root port and designated ports are selected, the switch blocks all remaining
ports to remove any possible loop from the network.

14.Differentiate between Token Ring And Ethernet.

1. In the token ring, the token passing mechanism is used; Ethernet uses the CSMA/CD (Carrier-Sense Multiple Access / Collision Detection) mechanism.
2. Token ring is defined by the IEEE 802.5 standard; Ethernet is defined by the IEEE 802.3 standard.
3. Token ring is deterministic; Ethernet is non-deterministic.
4. A token ring is a star-shaped topology; Ethernet is a bus-shaped topology.
5. The token ring handles priority, in which some nodes may give priority to the token; Ethernet does not employ priority.
6. Token ring costs more than Ethernet; Ethernet costs seventy percent less than token ring.
7. In the token ring, telephone wire is used; in Ethernet, coaxial cable (wire) is used.
8. The token ring contains routing information; Ethernet does not contain routing information.

15. What is the token ring and its frame format?

Token Ring protocol is a communication protocol used in Local Area Network (LAN). In a
token ring protocol, the topology of the network is used to define the order in which
stations send. The stations are connected to one another in a single ring. It uses a special
three-byte frame called a “token” that travels around a ring. It makes use of Token
Passing controlled access mechanism. Frames are also transmitted in the direction of the
token. This way they will circulate around the ring and reach the station which is the
destination.

Token Ring Frame format:


 Start frame delimiter (SFD) – Alerts each station for the arrival of token(or
data frame) or start of the frame. It is used to synchronize clocks.

 Access control (AC) –

Priority bits and reservation bits help in implementing priority. Priority bits =
reservation bits = 3. Eg:- server is given priority = 7 and client is given priority
= 0.
The token bit is used to indicate the presence of a token frame: token bit = 0 –> token frame, and token bit = 1 –> data frame.
The monitor bit helps in solving the orphan packet problem. It is covered by the CRC, as the monitor is a powerful machine which can recalculate the CRC when modifying the monitor bit. If monitor bit = 1 –> stamped by the monitor; if monitor bit = 0 –> not yet stamped by the monitor.

 Frame control (FC) – First 2 bits indicates whether the frame contains data or
control information. In control frames, this byte specifies the type of control
information.

 Destination address (DA) and Source address (SA) – consist of two 6-byte
fields which is used to indicate MAC address of source and destination.

 Data – The data length can vary from 0 up to the limit set by the maximum token holding time (THT), according to the token reservation strategy adopted. Token ring imposes no lower bound on the size of the data, which is an advantage over Ethernet.

 Cyclic redundancy check (CRC) – A 32-bit CRC which is used to check for errors in the frame, i.e., whether the frame is corrupted or not. If the frame is corrupted, it is discarded.

 End delimiter (ED) – It is used to mark the end of frame. In Ethernet, length
field is used for this purpose. It also contains bits to indicate a damaged frame
and identify the frame that is the last in a logical sequence.

 Frame status (FS) – It is a 1-byte field terminating a data frame. It carries 2 copies of the A and C bits as an error detection mechanism (100% redundancy), because the CRC does not cover the FS byte; this way the destination does not have to recalculate the CRC when it modifies these bits.
16.In a CSMA / CD network running at 1 Gbps over 1 km cable with no repeaters, the
signal speed in the cable is 200000 km/sec. What is minimum frame size?

Solution-
Given-
 Bandwidth = 1 Gbps
 Distance = 1 km
 Speed = 200000 km/sec

Calculating Propagation Delay-

Propagation delay (Tp)

= Distance / Propagation speed
= 1 km / (200000 km/sec)
= 0.5 x 10^-5 sec
= 5 x 10^-6 sec
Calculating Minimum Frame Size-

Minimum frame size

= 2 x Propagation delay x Bandwidth
= 2 x 5 x 10^-6 sec x 10^9 bits per sec
= 10000 bits.
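A quick sketch of the calculation above; names are illustrative.

```python
# Minimum frame size for CSMA/CD: the frame must outlast one round trip, so
# L_min = 2 * Tp * bandwidth.

bandwidth = 1e9                 # 1 Gbps
distance  = 1.0                 # km
speed     = 200_000.0           # km/s
tp = distance / speed           # 5e-6 s
l_min = 2 * tp * bandwidth      # 10000 bits
print(tp, l_min)
```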
17. On a wireless link, the probability of packet error is 0.2. A stop and wait protocol is
used to transfer data across the link. The channel condition is assumed to be
independent from transmission to transmission. What is the average number of
transmission attempts required to transfer 100 packets?

Method-01:

Given-
 Probability of packet error = 0.2
 We have to transfer 100 packets

Now,
 When we transfer 100 packets, number of packets in which error will occur = 0.2 x
100 = 20.
 Then, these 20 packets will have to be retransmitted.
 When we retransmit 20 packets, number of packets in which error will occur = 0.2 x
20 = 4.
 Then, these 4 packets will have to be retransmitted.
 When we retransmit 4 packets, number of packets in which error will occur = 0.2 x 4
= 0.8 ≅ 1.
 Then, this 1 packet will have to be retransmitted.

From here, average number of transmission attempts required = 100 + 20 + 4 + 1 = 125.

Method-02:

REMEMBER

If there are n packets to be transmitted and p is the probability of packet error, then-
Number of transmission attempts required
= n + np + np^2 + np^3 + ......
= n / (1 - p)

Substituting the given values, we get-


Average number of transmission attempts required = 100 / (1-0.2) = 125.
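A one-line check of Method-02 above; illustrative only.

```python
# Expected number of transmissions with per-packet error probability p:
# each packet needs 1/(1-p) attempts on average, so n packets need n/(1-p).

n, p = 100, 0.2
print(n / (1 - p))     # 125.0
```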
18. A 3000 km long trunk operates at 1.536 Mbps and is used to transmit 64 byte frames
and uses sliding window protocol. If the propagation speed is 6 μsec / km, how many
bits should the sequence number field be?

Solution-

Given-
 Distance = 3000 km
 Bandwidth = 1.536 Mbps
 Packet size = 64 bytes
 Propagation speed = 6 μsec / km

Calculating Transmission Delay-

Transmission delay (Tt)

= Packet size / Bandwidth
= 64 bytes / 1.536 Mbps
= (64 x 8 bits) / (1.536 x 10^6 bits per sec)
= 333.33 μsec

Calculating Propagation Delay-

For 1 km, propagation delay = 6 μsec


For 3000 km, propagation delay = 3000 x 6 μsec = 18000 μsec

Calculating Value Of ‘a’-

a = Tp / Tt
a = 18000 μsec / 333.33 μsec
a = 54

Calculating Bits Required in Sequence Number Field-

Bits required in sequence number field


= ⌈log2(1+2a)⌉
= ⌈log2(1 + 2 x 54)⌉
= ⌈log2(109)⌉
= ⌈6.76⌉
= 7 bits

Thus,
 Minimum number of bits required in sequence number field = 7
 With 7 bits, number of sequence numbers possible = 128
 We use only (1+2a) = 109 sequence numbers and rest remains unused.
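A small sketch reproducing the calculation above; names are illustrative.

```python
# Sequence-number bits for a sliding window that keeps the pipe full:
# window = 1 + 2a frames, so we need ceil(log2(1 + 2a)) bits.

import math

tt = (64 * 8) / 1.536e6          # transmission delay ≈ 333.33 us
tp = 3000 * 6e-6                 # propagation delay = 18000 us = 18 ms
a = tp / tt                      # ≈ 54
bits = math.ceil(math.log2(1 + 2 * a))
print(a, bits)                   # a ≈ 54, bits = 7
```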
19. Explain Congestion Control Techniques in detail.
Congestion control refers to the techniques used to control or prevent congestion.
Congestion control techniques can be broadly classified into two categories:
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it happens.
The congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –

1. Retransmission Policy :
It is the policy in which retransmission of the packets is taken care of. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. This retransmission may increase the congestion in the network. To prevent congestion, retransmission timers must be designed to prevent congestion and also be able to optimize efficiency.

2. Window Policy :
The type of window at the sender’s side may also affect the congestion. Several
packets in the Go-back-n window are re-sent, although some packets may be
received successfully at the receiver side. This duplication may increase the
congestion in the network and make it worse.
Therefore, Selective repeat window should be adopted as it sends the specific
packet that may have been lost.

3. Discarding Policy :
A good discarding policy adopted by the routers is one in which the routers may prevent congestion by partially discarding corrupted or less sensitive packets while still maintaining the quality of the message. In the case of audio file transmission, routers can discard less sensitive packets to prevent congestion and still maintain the quality of the audio file.

4. Acknowledgment Policy :
Since acknowledgements are also the part of the load in the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to
acknowledgment.
The receiver should send acknowledgement for N packets rather than sending
acknowledgement for a single packet. The receiver should send an
acknowledgment only if it has to send a packet or a timer expires.

5. Admission Policy :
In admission policy a mechanism should be used to prevent congestion.
Switches in a flow should first check the resource requirement of a network flow
before transmitting it further. If there is a chance of a congestion or there is a
congestion in the network, router should deny establishing a virtual network
connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.

Closed Loop Congestion Control


Closed loop congestion control techniques are used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:

1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and in turn reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of the data flow. The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.

For example, if the third node on a path is congested and stops receiving packets, the second node may become congested due to the slowing down of the output data flow; similarly, the first node may get congested and inform the source to slow down.

2. Choke Packet Technique :


The choke packet technique is applicable to both virtual circuit networks and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds the threshold value set by the administrator, the router directly sends a choke packet to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packets have traveled are not warned about the congestion.
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the
source. The source guesses that there is congestion in a network. For example when sender
sends several packets and there is no acknowledgment for a while, one assumption is that
there is a congestion.

4. Explicit Signaling :
In explicit signaling, if a node experiences congestion it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than being sent in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
 Forward Signaling : In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about congestion. The receiver in this case
adopt policies to prevent further congestion.
 Backward Signaling : In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and it needs
to slow down.
