Unit-2 Important Question
5 MARK QUESTIONS
1. What are headers and trailers, and how do they get added and removed?
Headers and trailers are concepts of the OSI model. Headers are information structures that identify the information that follows, such as a block of bytes in a communication. Trailers are information occupying several bytes at the end of the block of data being transmitted. They contain error-checking data which is useful for confirming the accuracy and status of the transmission.
During data communication, the sender appends the header and passes the unit to the lower layer, while the receiver removes the header and passes the unit to the upper layer. Headers are added at layers 6, 5, 4, 3 and 2, while the trailer is added at layer 2.
2. State the requirements of CRC.
CRC or Cyclic Redundancy Check is a method of detecting accidental changes/errors in the communication channel.
CRC uses a generator polynomial which is available on both the sender and receiver side. An example generator polynomial is of the form x^3 + x + 1; this generator polynomial represents the key 1011. Another example is x^2 + 1, which represents the key 101.
n : number of bits in the data to be sent from the sender side.
k : number of bits in the key obtained from the generator polynomial.
Requirements of CRC:
Specification of a CRC code requires definition of a so-called generator polynomial. This
polynomial becomes the divisor in a polynomial long division, which takes the message
as the dividend and in which the quotient is discarded and the remainder becomes the
result.
3. If a 7-bit Hamming code is received as 1110101, show that the code word has an error. Also rectify the error in this code.
To check for errors in the received code, we recompute the parity bits for the received word and compare them with the received ones. Parity bits sit at the positions whose number is a power of 2 (i.e., 1, 2, 4), and parity bit p covers every position whose binary representation has bit p set. Using the standard even-parity convention, each parity group must contain an even number of 1s.
Let's first write the received code with the positions labeled:
Position: 1 2 3 4 5 6 7
Bit:      1 1 1 0 1 0 1
Checking each parity group:
Parity bit 1 covers positions 1, 3, 5, 7: 1 ⊕ 1 ⊕ 1 ⊕ 1 = 0 → parity holds (s1 = 0)
Parity bit 2 covers positions 2, 3, 6, 7: 1 ⊕ 1 ⊕ 0 ⊕ 1 = 1 → parity fails (s2 = 1)
Parity bit 4 covers positions 4, 5, 6, 7: 0 ⊕ 1 ⊕ 0 ⊕ 1 = 0 → parity holds (s4 = 0)
The syndrome s4 s2 s1 = 010 = 2, which is non-zero, so the received code word contains an error, located at position 2.
To correct the error, we flip the bit at position 2 in the received code, which gives the corrected code:
Position: 1 2 3 4 5 6 7
Bit:      1 0 1 0 1 0 1
Therefore, there was a single-bit error in the received code at position 2, and the corrected code word is 1010101.
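The parity/syndrome computation can be automated. A minimal even-parity checker (it reports an error at position 2 for the received word, giving corrected code 1010101):

```python
def hamming_check(code):
    """Return (error_position, corrected_code) for a 7-bit even-parity Hamming code."""
    bits = [int(b) for b in code]
    syndrome = 0
    for p in (1, 2, 4):                 # parity-bit positions
        s = 0
        for pos in range(1, 8):
            if pos & p:                 # positions covered by parity bit p
                s ^= bits[pos - 1]
        if s:                           # even parity violated in this group
            syndrome += p
    if syndrome:
        bits[syndrome - 1] ^= 1         # flip the erroneous bit
    return syndrome, ''.join(map(str, bits))

print(hamming_check("1110101"))  # (2, '1010101')
```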
4. If the minimum hamming distance is 9 then how many errors can be detected and
corrected by using Hamming code?
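No worked answer is given here, but the standard bounds apply: a code with minimum Hamming distance d can detect up to d − 1 errors and correct up to ⌊(d − 1)/2⌋ errors. For d = 9 that is 8 detectable and 4 correctable errors. A one-liner to confirm:

```python
def detection_correction(d):
    # minimum Hamming distance d -> (detectable errors, correctable errors)
    return d - 1, (d - 1) // 2

print(detection_correction(9))  # (8, 4)
```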
5.
6.For (11,5)Hamming code ,how many error can be detected and corrected?
8. Consider a communication system transmitting data in frames of 1000 bits each. The system uses a CRC (Cyclic Redundancy Check) for error detection. The CRC polynomial used is x^5 + x^3 + 1. If the original message is 10,000 bits long and is divided into 10 frames for transmission, how many extra bits are added for CRC?
Answer:
To calculate the number of extra bits added for CRC, we first need to determine the size of
the CRC code generated by the polynomial x^5 + x^3 + 1.
The degree of the polynomial is 5, so the CRC code will be 5 bits long.
Now, since the system transmits data in frames of 1000 bits each and each frame carries its own 5-bit CRC, the total number of CRC bits for all 10 frames is:
10 frames × 5 bits/frame = 50 bits
Therefore, 50 extra bits are added for CRC in the transmission of 10 frames.
9. In the CRC method, assume that the given frame for transmission is 1101011011 and the generator polynomial is x^4 + x + 1. Find the encoded word sent from the sender side. [GATE]
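A minimal sketch of the modulo-2 division: the generator x^4 + x + 1 corresponds to the divisor bits 10011, four zeros are appended to the data, and the remainder replaces them. The remainder works out to 1110, so the transmitted codeword is 11010110111110.

```python
def crc_remainder(data, gen):
    """Modulo-2 (XOR) long division; returns the CRC remainder as a bit string."""
    padded = list(data + '0' * (len(gen) - 1))   # append degree-many zeros
    for i in range(len(data)):
        if padded[i] == '1':                     # XOR the divisor in at this offset
            for j in range(len(gen)):
                padded[i + j] = str(int(padded[i + j]) ^ int(gen[j]))
    return ''.join(padded[-(len(gen) - 1):])

rem = crc_remainder("1101011011", "10011")  # x^4 + x + 1 -> 10011
print(rem)                                  # 1110
print("1101011011" + rem)                   # 11010110111110
```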
The following table highlights the important differences between Pure Aloha and Slotted Aloha.
Key | Pure Aloha | Slotted Aloha
Time Slot | In Pure Aloha, any station can transmit data at any time. | In Slotted Aloha, any station can transmit data only at the beginning of a time slot.
Time | In Pure Aloha, time is continuous and is not globally synchronized. | In Slotted Aloha, time is discrete and is globally synchronized.
Vulnerable time | The vulnerable or susceptible time in Pure Aloha is equal to (2 × Tt). | In Slotted Aloha, the vulnerable time is equal to (Tt).
Number of collisions | Does not reduce the number of collisions. | Slotted Aloha reduces the number of collisions to half, thus doubling the efficiency.
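As a supplement to the table, the classical throughput formulas make the efficiency doubling concrete: S = G·e^(−2G) for Pure Aloha and S = G·e^(−G) for Slotted Aloha, peaking at about 18.4 % and 36.8 % respectively. A quick check:

```python
import math

def pure_aloha(G):     # throughput for offered load G
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

# maximum throughput occurs at G = 0.5 (pure) and G = 1.0 (slotted)
print(round(pure_aloha(0.5), 3))     # 0.184
print(round(slotted_aloha(1.0), 3))  # 0.368
```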
Suppose there is two-way communication between two devices A and B. When a data frame is sent by A to B, device B does not send the acknowledgment to A immediately; it waits until it has its own data frame to transmit, and then sends the delayed acknowledgment along with that data frame. This method of attaching the delayed acknowledgment to an outgoing data frame is known as piggybacking.
12. Consider the use of 10 kbit frames on a 10 Mbps satellite channel with 270 ms delay. What is the link utilization for the stop-and-wait ARQ technique, assuming P = 10^-3?
In stop-and-wait ARQ only one frame can be sent at a time, so the transmission time is that of a single frame:
Tt = 10 kbit / 10 Mbps = 1 ms
Tp = 270 ms, so a = Tp / Tt = 270
Utilization = (1 − P) / (1 + 2a) = (1 − 10^-3) / 541 ≈ 0.18 %
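The utilization figure can be reproduced with a short calculation (values taken from the problem statement; U = (1 − P)/(1 + 2a) for stop-and-wait ARQ):

```python
frame = 10_000          # bits (10 kbit frame)
bw = 10_000_000         # 10 Mbps channel
tt = frame / bw         # transmission time = 0.001 s = 1 ms
tp = 0.270              # one-way propagation delay in seconds
p = 1e-3                # frame error probability
a = tp / tt             # 270
u = (1 - p) / (1 + 2 * a)
print(f"{u:.2%}")       # 0.18%
```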
Transmission time (Td)
The time taken by the sender to put all the bits of a frame onto the wire is called the transmission delay. It is calculated by dividing the data size (D) by the bandwidth (B) of the channel on which the data is sent:
Td = D / B
Round-trip time (RTT) is the time it takes for a network request to go from a starting point to a destination and back again to the starting point. It is the total time for the request to travel over the network and for the response to travel back, typically measured in milliseconds (ms). A lower RTT improves the experience of using an application and makes the application more responsive.
16. Define the relationship between transmission delay and propagation delay, if the
efficiency is at least 50% in STOP N WAIT protocol.
Check class notes
17. Calculate the total number of transmissions that are required to send 10 data
packets through GBN-3 and every 5th packet is lost.
Check class notes
18. Find out window size and minimum sequence number in sliding window protocol,
if Transmission delay (Tt)= 1 ms, Propagation delay (Tp)= 24.5 ms. (ms= milliseconds).
Check class notes
10 MARKS QUESTIONS
1.Discuss the issues in the data link layer and about its protocol on the basis of
layering principle.
The data link layer in the OSI (Open System Interconnections) Model, is in between the
physical layer and the network layer. This layer converts the raw transmission facility
provided by the physical layer to a reliable and error-free link.
The main functions and design issues of this layer include: providing a service interface to the network layer, framing, error control, and flow control.
In the OSI Model, each layer uses the services of the layer below it and provides services
to the layer above it. The data link layer uses the services offered by the physical layer.
The primary function of this layer is to provide a well defined service interface to
network layer above it.
The data link layer encapsulates each data packet from the network layer into frames that are then transmitted. A frame has three parts:
Frame header
Payload field that contains the data packet from the network layer
Trailer
Error Control
The data link layer ensures an error-free link for data transmission by detecting errors in transmitted frames and retransmitting damaged or lost frames.
Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow receiver. When the sender sends frames at very high speed, a slow receiver may not be able to handle them, and there will be frame losses even if the transmission is error-free. The two common approaches for flow control are feedback-based flow control and rate-based flow control.
Receiver Side:
Code word received at the receiver side 100100001
Therefore, the remainder is all zeros. Hence, the data received has no error.
3. Discuss framing in data communication systems. Explain how framing helps in the
reliable transmission of data and discuss at least two common framing techniques used
in practice.
Frames are the units of digital transmission, particularly in computer networks and
telecommunications. Frames are comparable to the packets of energy called photons in the
case of light energy. Frame is continuously used in Time Division Multiplexing process.
In a point-to-point connection between two computers or devices, data is transmitted as a stream of bits. However, these bits must be framed into discernible blocks of information. Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the receiver.
Ethernet, token ring, frame relay, and other data link layer technologies have their own
frame structures. Frames have headers that contain information such as error-checking
codes.
The data link layer takes the message from the sender and delivers it to the receiver by attaching the sender's and receiver's addresses to each frame. The advantage of using frames is that data is broken up into recoverable chunks that can easily be checked for corruption.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the
frame, the length of the frame itself acts as a delimiter.
Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the
beginning of the next frame to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the
length of the frame. Used in Ethernet(802.3). The problem with this is that
sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of
the frame. Used in Token Ring. The problem with this is that ED can occur in
the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED pattern, a byte is stuffed into the data to differentiate it from the ED.
Let ED = “$” –> if the data contains ‘$’ anywhere, it can be escaped using the ‘\O’ character.
–> if the data contains ‘\O$’, then it is sent as ‘\O\O\O$’ (‘$’ is escaped using ‘\O’ and ‘\O’ is itself escaped using ‘\O’).
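The escaping idea above can be sketched in code. This is a hypothetical minimal implementation using '$' as the end delimiter and a backslash as the escape character (instead of the '\O' notation used above):

```python
ESC, FLAG = '\\', '$'   # assumed escape character and end delimiter

def stuff(data):
    """Escape any delimiter/escape characters, then terminate with the delimiter."""
    out = []
    for ch in data:
        if ch in (ESC, FLAG):
            out.append(ESC)          # stuff an escape byte before special characters
        out.append(ch)
    return ''.join(out) + FLAG

def unstuff(frame):
    """Recover the original data from a stuffed frame."""
    out, i = [], 0
    while i < len(frame):
        ch = frame[i]
        if ch == ESC:
            i += 1
            out.append(frame[i])     # take the escaped character literally
        elif ch == FLAG:
            break                    # end of frame reached
        else:
            out.append(ch)
        i += 1
    return ''.join(out)

msg = "pay$load\\x"
assert unstuff(stuff(msg)) == msg
```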
The data link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.
Types of Errors
Error detection
Error correction
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy
Check (CRC). In both cases, few extra bits are sent along with actual data to confirm that bits
received at other end are same as they were sent. If the counter-check at receiver’ end fails,
the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in case of
even parity, or odd in case of odd parity.
The sender, while creating a frame, counts the number of 1s in it. For example, if even parity is used and the number of 1s is even, then a bit with value 0 is added; this way the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.
The receiver simply counts the number of 1s in a frame. If the count of 1s is even and even parity is used, the frame is considered not corrupted and is accepted. Likewise, if the count of 1s is odd and odd parity is used, the frame is considered not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more than one bit is erroneous, it is very hard for the receiver to detect the error.
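The sender/receiver behaviour described above can be sketched as:

```python
def add_parity(bits, even=True):
    """Append one parity bit so the total count of 1s is even (or odd)."""
    ones = bits.count('1')
    parity = '0' if (ones % 2 == 0) == even else '1'
    return bits + parity

def check(frame, even=True):
    """Receiver side: frame passes if the count of 1s matches the parity scheme."""
    return (frame.count('1') % 2 == 0) == even

f = add_parity("1011", even=True)   # "10111": three 1s, so parity bit 1 is added
assert check(f)
corrupted = '0' + f[1:]             # flip a single bit in transit
assert not check(corrupted)         # single-bit error is detected
```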
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect whether the received frame contains valid data. This technique involves binary division of the data bits being sent. The divisor is generated using polynomials. The sender performs a division operation on the bits being sent and calculates the remainder. Before sending the actual bits, the sender appends the remainder to the end of the actual bits. The actual data bits plus the remainder are called a codeword. The sender transmits the data bits as codewords.
At the other end, the receiver performs the division operation on the codewords using the same CRC divisor. If the remainder contains all zeros, the data bits are accepted; otherwise, it is assumed that some data corruption occurred in transit.
Error Correction
Backward Error Correction When the receiver detects an error in the data received,
it requests back the sender to retransmit the data unit.
Forward Error Correction When the receiver detects some error in the data
received, it executes error-correcting code, which helps it to auto-recover and to
correct some kinds of errors.
The first one, Backward Error Correction, is simple and can only be efficiently used where
retransmitting is not expensive. For example, fiber optics. But in case of wireless
transmission retransmitting may cost too much. In the latter case, Forward Error Correction is
used.
To correct an error in a data frame, the receiver must know exactly which bit in the frame is corrupted. To locate the bit in error, redundant bits are used as parity bits. For example, with ASCII words (7 data bits) there are 8 possible states we need to distinguish: an error in one of the seven bit positions, or no error at all.
For m data bits, r redundant bits are used; r bits can represent 2^r combinations of information. In an (m + r)-bit codeword, there is the possibility that the r bits themselves get corrupted. So the r bits must be able to indicate all m + r bit locations plus the no-error state, i.e. 2^r ≥ m + r + 1.
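The condition 2^r ≥ m + r + 1 can be solved by a simple search; for example, m = 7 data bits require r = 4 redundant bits:

```python
def redundant_bits(m):
    """Smallest r such that 2^r >= m + r + 1 (Hamming bound for single-error correction)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(7))  # 4
```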
5.Differentiate between cyclic redundancy check (CRC) and checksum methods for
error detection.
Checksum:
Checksum is a widely used method for the detection of errors in data. This method is more
reliable than other methods of detection of errors. This approach uses Checksum
Generator on Sender side and Checksum Checker on Receiver side.
CRC:
CRC or Cyclic Redundancy Check is an error detection method used to detect errors; it is typically used by lower layer protocols such as the data link layer. It uses a polynomial generator on both the sender and receiver side. The polynomial generator is of the form x^3 + x^2 + x + 1.
Difference between Checksum and CRC:
S.No. | Checksum | CRC
5. | It can detect fewer errors than CRC. | Due to its more complex computation, it can detect more errors.
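For illustration, here is a sketch of a 16-bit one's-complement checksum of the kind used by Internet protocols (the word size and the sample data are assumptions for this example):

```python
def ones_complement_sum16(words):
    """Sum 16-bit words, wrapping any carry back into the low bits."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)   # end-around carry
    return s

def checksum(words):
    """Sender side: one's complement of the one's-complement sum."""
    return (~ones_complement_sum16(words)) & 0xFFFF

data = [0x4500, 0x0030, 0x4422]        # hypothetical 16-bit data words
c = checksum(data)
# Receiver side: summing data plus checksum must give all ones (0xFFFF)
assert ones_complement_sum16(data + [c]) == 0xFFFF
```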
Frame Format
The frame format of FDDI is similar to that of Token Bus.
CSMA is a mechanism that senses the state of the shared channel to prevent collisions of data packets, or to recover from them. It is also used to control the flow of data packets over the network so that packets are not lost and data integrity is maintained. In CSMA, when two or more data packets are sent at the same time on a shared channel, there is a chance of collision. Due to a collision, the receiver does not get any information about the sender's data packets, and the lost information needs to be resent so that the receiver can get it. Therefore, we need to sense the channel before transmitting data packets on the network. CSMA is divided into two variants: CSMA CA (Collision Avoidance) and CSMA CD (Collision Detection).
CSMA CD
The Carrier Sense Multiple Access / Collision Detection protocol is used to detect collisions in the media access control (MAC) layer. Once a collision is detected, CSMA CD immediately stops the transmission by sending a jam signal, so that the sender does not waste time sending the rest of the data packet. If a collision is detected while stations are broadcasting packets, CSMA CD immediately sends the jam signal to stop transmission and waits a random time before transmitting another data packet. If the channel is then found free, it immediately sends the data.
Advantages of CSMA CD
1. It is used for collision detection on a shared channel within a very short time.
2. CSMA CD is better than CSMA for collision detection.
3. CSMA CD is used to avoid any form of waste transmission.
4. When necessary, it is used to use or share the same amount of bandwidth at each
station.
5. It has lower overhead as compared to CSMA CA.
Disadvantage of CSMA CD
1. It is not suitable for long-distance networks because as the distance increases, CSMA CD's efficiency decreases.
2. It can detect collision only up to 2500 meters, and beyond this range, it cannot detect
collisions.
3. When multiple devices are added to a CSMA CD, collision detection performance is
reduced.
CSMA/CA
CSMA/CA stands for Carrier Sense Multiple Access with Collision Avoidance. It is a network protocol that tries to avoid collisions rather than allowing them to occur, and it does not deal with the recovery of packets after a collision. It is similar to the CSMA CD protocol and operates in the media access control layer. In CSMA CA, whenever a station wants to send a data frame to a channel, it first checks whether the channel is in use. If the shared channel is busy, the station waits until the channel enters idle mode. Hence, we can say that it reduces the chances of collisions and makes better use of the medium to send data packets more efficiently.
Advantage of CSMA CA
1. When the size of data packets is large, the chances of collision in CSMA CA are lower.
2. It controls the data packets and sends the data only when the receiver is ready to receive them.
3. It is used to prevent collision rather than collision detection on the shared channel.
4. CSMA CA avoids wasted transmission of data over the channel.
5. It is best suited for wireless transmission in a network.
6. It avoids unnecessary data traffic on the network with the help of the RTS/ CTS
extension.
Disadvantages of CSMA CA
1. Sometimes CSMA/CA incurs a long waiting time before it can transmit a data packet.
2. It consumes more bandwidth at each station.
3. Its efficiency is less than that of CSMA CD.
S.No. | CSMA CD | CSMA CA
1. | It is the type of CSMA used to detect a collision on a shared channel. | It is the type of CSMA used to avoid a collision on a shared channel.
3. | It is used in 802.3 Ethernet networks. | It is used in 802.11 wireless networks.
6. | Whenever a data packet conflicts in a shared channel, it resends the data frame. | CSMA CA waits until the channel is idle and does not recover after a collision.
9. | It is more popular than the CSMA CA protocol. | It is less popular than CSMA CD.
ii) Sliding Window Protocol
The sliding window is a technique for sending multiple frames at a time. It controls the data
packets between the two devices where reliable and gradual delivery of data frames is
needed. It is also used in TCP (Transmission Control Protocol).
In this technique, each frame is assigned a sequence number. The sequence numbers are used to find the missing data at the receiver end. The purpose of the sliding window technique is to avoid duplicate data, which is why sequence numbers are used.
There are two types of sliding window protocol:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
iii) Go-Back-N ARQ
The size of the sender window is N in this protocol. For example, in Go-Back-8 the size of the sender window will be 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not accept a corrupted frame. When the sender's timer expires, the sender retransmits that frame and all frames sent after it.
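A toy simulation of the go-back behaviour (assumed scenario: window N = 3, six frames, frame 2 lost exactly once; cumulative ACKs move the window past the delivered prefix):

```python
N, total = 3, 6          # window size and number of frames to send
base = 0                 # first unacknowledged frame
lost_once = {2}          # frame 2 is lost on its first transmission
log = []                 # record of each window transmitted

while base < total:
    # send the whole window [base, base + N)
    window = list(range(base, min(base + N, total)))
    log.append(tuple(window))
    delivered = []
    for f in window:
        if f in lost_once:
            lost_once.discard(f)   # lost this time; will succeed on the retry
            break                  # everything after the lost frame is discarded
        delivered.append(f)
    base += len(delivered)         # cumulative ACK advances past delivered prefix

print(log)  # [(0, 1, 2), (2, 3, 4), (5,)] -- window resent from the lost frame
```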
iv) Selective Repeat ARQ
Selective Repeat ARQ is also known as Selective Repeat Automatic Repeat Request. It is a data link layer protocol that uses the sliding window method. The Go-Back-N ARQ protocol works well if there are few errors, but if frames are frequently in error, a lot of bandwidth is lost in resending frames; in that case the Selective Repeat ARQ protocol is used. In this protocol, the size of the sender window is always equal to the size of the receiver window, and the window size is always greater than 1.
If the receiver receives a corrupt frame, it sends a negative acknowledgment to the sender, and the sender retransmits only that frame as soon as it receives the negative acknowledgment; there is no waiting for a time-out to resend that frame.
9. Assume we want to send data from S to R and there are 2 routers in between. What will be the total time taken if the total number of packets is 5? Data: Tp = 0 ms, Data size = 1000 bytes, BW = 1 Mbps, Header of the packet = 100 bytes.
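No worked answer is given here; under the common store-and-forward assumption (three links S → R1 → R2 → R, zero propagation delay, the 100-byte header added to every packet), a sketch of the calculation:

```python
data_bytes, header_bytes = 1000, 100
packet_bits = (data_bytes + header_bytes) * 8   # 8800 bits per packet
links, packets = 3, 5                           # S -> R1 -> R2 -> R
tt_us = packet_bits                             # at 1 Mbps, 1 bit takes 1 microsecond
# store-and-forward pipeline: first packet crosses all links,
# each remaining packet adds one more transmission time
total_us = (links + packets - 1) * tt_us
print(total_us / 1000, "ms")                    # 61.6 ms
```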
10. Explain CSMA/CD in detail.
Check class notes
11.
12.
Refer Class Notes.
13.What is looping problem in Switches?Explain Spanning Tree Algorithm to solve
it using suitable Example?
Networks are built using multiple, interconnecting switches that connect devices and transfer
data. However, if two switches aren't connected properly, something called a switching loop
is created. To prevent this from happening, it's important to know why and how they occur.
In a typical local area network (LAN), it's common for multiple switches to be interconnected
for redundancy, meaning more than one path is possible between two switches. Redundancy
is a safety measure that ensures the network won't fail completely if a link breaks. However,
with interconnected switches comes a potential problem: a Layer 2 switching loop.
A switching loop, or bridge loop, occurs when more than one path exists between the source
and destination devices. As broadcast packets are sent by switches through every port, the
switch repeatedly sends broadcast messages, flooding the network and creating a broadcast
storm.
When switching loops start, they don't stop; there's no time-to-live (TTL) value on the broadcast packets, meaning they'll keep bouncing around forever between the two switches. And herein lies the real problem: as the loop continues, so does the build-up of traffic, eventually blocking legitimate traffic between switches.
Switches determine where a packet goes based on the destination MAC address; every device
has a unique MAC address, so every packet is directed to a single place. When multiple MAC
addresses broadcast to all devices in the network, it can become problematic. This is
especially true for switch loops, where all broadcasts and multicasts repeat around the looped
network path in rapid succession, very quickly bringing down the network.
If a broadcast packet is sent out over a network with a loop, it will continue to rebroadcast the
message as it loops around the network. As more traffic packets pass through the network,
they're added to the loop; soon, the network is unable to communicate at all because it's
spending all of its time sending data packets through the loops.
Fortunately, there's a way to prevent this from happening using the Spanning Tree Protocol.
First, all of the switches in the STP domain elect a root bridge, or root switch. The
root bridge acts as a point of reference for every other switch in the network. The root
bridge's ports remain in forwarding mode, and there can only be one root bridge in
any network using STP.
On all of the other switches, the interface closest to the root switch is the one
designated as the root port. The root port allows traffic to traverse that particular
interface, while other ports on this switch that allow traffic are called designated ports.
If multiple ports are connected to the same switch or LAN segment, the switch selects
the port with the shortest path and marks it as the designated port.
Once the root port and designated ports are selected, the switch blocks all remaining
ports to remove any possible loop from the network.
S.No. | TOKEN RING | ETHERNET
6. | Token Ring costs more than Ethernet. | Ethernet costs about seventy percent less than Token Ring.
8. | Token Ring contains routing information. | Ethernet does not contain routing information.
Token Ring protocol is a communication protocol used in Local Area Network (LAN). In a
token ring protocol, the topology of the network is used to define the order in which
stations send. The stations are connected to one another in a single ring. It uses a special
three-byte frame called a “token” that travels around a ring. It makes use of Token
Passing controlled access mechanism. Frames are also transmitted in the direction of the
token. This way they will circulate around the ring and reach the station which is the
destination.
Priority bits and reservation bits help in implementing priority. Priority bits =
reservation bits = 3. Eg:- server is given priority = 7 and client is given priority
= 0.
Token bit is used to indicate presence of token frame. If token bit = 1 –> token
frame and if token bit = 0 –> not a token frame.
Monitor bit helps in solving the orphan packet problem. It is covered by the CRC, as monitors are powerful machines that can recalculate the CRC when modifying the monitor bit. If monitor bit = 1 –> stamped by monitor; if monitor bit = 0 –> not yet stamped by monitor.
Frame control (FC) – The first 2 bits indicate whether the frame contains data or control information. In control frames, this byte specifies the type of control information.
Destination address (DA) and Source address (SA) – consist of two 6-byte fields which are used to indicate the MAC addresses of the source and destination.
Data – Data length can vary from 0 to maximum token holding time (THT)
according to token reservation strategy adopted. Token ring imposes no lower
bound on size of data i.e. an advantage over Ethernet.
Cyclic redundancy check (CRC) – a 32-bit CRC which is used to check for errors in the frame, i.e., whether the frame is corrupted or not. If the frame is corrupted, it is discarded.
End delimiter (ED) – It is used to mark the end of frame. In Ethernet, length
field is used for this purpose. It also contains bits to indicate a damaged frame
and identify the frame that is the last in a logical sequence.
Solution-
Given-
Bandwidth = 1 Gbps
Distance = 1 km
Speed = 200000 km/sec
Method-01:
Given-
Probabilityof packet error = 0.2
We have to transfer 100 packets
Now,
When we transfer 100 packets, number of packets in which error will occur = 0.2 x
100 = 20.
Then, these 20 packets will have to be retransmitted.
When we retransmit 20 packets, number of packets in which error will occur = 0.2 x
20 = 4.
Then, these 4 packets will have to be retransmitted.
When we retransmit 4 packets, number of packets in which error will occur = 0.2 x 4
= 0.8 ≅ 1.
Then, this 1 packet will have to be retransmitted.
Method-02:
REMEMBER
If there are n packets to be transmitted and p is the probability of packet error, then-
Number of transmission attempts required
= n + np + np^2 + np^3 + ……
= n / (1 − p)
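The closed form can be checked directly; for the values above (n = 100, p = 0.2) it reproduces the 125 transmissions obtained by Method-01:

```python
def expected_transmissions(n, p):
    # geometric series n + n*p + n*p**2 + ... = n / (1 - p)
    return n / (1 - p)

print(expected_transmissions(100, 0.2))  # 125.0
```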
Solution-
Given-
Distance = 3000 km
Bandwidth = 1.536 Mbps
Packet size = 64 bytes
Propagation speed = 6 μsec / km
a = Tp / Tt
a = 18000 μs / 333.33 μs
a = 54
Thus, window size = 1 + 2a = 109.
Minimum number of bits required in the sequence number field = ⌈log2(109)⌉ = 7.
With 7 bits, the number of sequence numbers possible = 128. We use only (1 + 2a) = 109 sequence numbers and the rest remain unused.
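The same numbers can be reproduced programmatically (values from the problem: 3000 km at 6 μs/km, 64-byte packets at 1.536 Mbps):

```python
import math

tp = 3000 * 6              # one-way propagation delay in microseconds = 18000
tt = 64 * 8 / 1.536        # transmission time in microseconds, ~333.33
a = tp / tt                # ratio a, ~54
window = 1 + 2 * round(a)  # 1 + 2a = 109
bits = math.ceil(math.log2(window))
print(window, bits)        # 109 7
```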
19. Explain Congestion Control Techniques in detail.
Congestion control refers to the techniques used to control or prevent congestion.
Congestion control techniques can be broadly classified into two categories:
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it happens.
The congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –
1. Retransmission Policy :
It is the policy in which retransmission of packets is taken care of. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Such retransmission may increase congestion in the network. To prevent this, retransmission timers must be designed to avoid congestion while still optimizing efficiency.
2. Window Policy :
The type of window at the sender’s side may also affect the congestion. Several
packets in the Go-back-n window are re-sent, although some packets may be
received successfully at the receiver side. This duplication may increase the
congestion in the network and make it worse.
Therefore, Selective repeat window should be adopted as it sends the specific
packet that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by the routers is that the routers may prevent
congestion and at the same time partially discard the corrupted or less sensitive
packages and also be able to maintain the quality of a message.
In case of audio file transmission, routers can discard less sensitive packets to
prevent congestion and also maintain the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments.
The receiver should send acknowledgement for N packets rather than sending
acknowledgement for a single packet. The receiver should send an
acknowledgment only if it has to send a packet or a timer expires.
5. Admission Policy :
In admission policy a mechanism should be used to prevent congestion.
Switches in a flow should first check the resource requirement of a network flow
before transmitting it further. If there is a chance of a congestion or there is a
congestion in the network, router should deny establishing a virtual network
connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it has happened. Several techniques are used, including the following:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and in turn reject data from their own upstream nodes. Backpressure is a node-to-node congestion control technique that propagates in the direction opposite to the data flow. It can be applied only to virtual circuit networks, where each node knows its upstream node.
For example, suppose the 3rd node in a path is congested and stops receiving packets. As a result, the 2nd node may get congested due to the slowing down of the output data flow. Similarly, the 1st node may get congested and inform the source to slow down.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than being sent in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling : In forward signaling, a signal is sent in the direction of the congestion. The destination is warned about the congestion, and the receiver in this case adopts policies to prevent further congestion.
Backward Signaling : In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and it needs
to slow down.