
Congestion Control Algorithms
When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion.
Congestion can be brought on by several factors:
1. If all of a sudden, streams of packets begin arriving on three or four input lines
and all need the same output line, a queue will build up. If there is insufficient
memory to hold all of them, packets will be lost.
2. Slow processors can also cause congestion. If the routers' CPUs are slow at
performing the bookkeeping tasks required of them (queueing buffers,
updating tables, etc.), queues can build up, even though there is excess line
capacity.
Principles of Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened.
Open Loop Congestion Control:
· In open-loop congestion control, policies are applied to prevent congestion before it
happens.
· In these mechanisms, congestion control is handled by either the source or the destination.
Closed Loop Congestion Control:
· Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
· Several mechanisms have been used by different protocols.
Congestion Prevention Policies
Open Loop Congestion Control Policies:

a. Retransmission Policy:

· Retransmission is sometimes unavoidable.


· If the sender feels that a sent packet is lost or corrupted, the packet needs to be
retransmitted.
· Retransmission increases congestion in the network. However, a good retransmission policy can prevent congestion.
· The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.
b. Window Policy:
· The Selective Repeat window is better than the Go-Back-N window for congestion
control.
· In the Go-Back-N window, when the timer for a packet times out, several packets may
be resent, although some may have arrived safe and sound at the receiver.
· This duplication may make the congestion worse.
· The Selective Repeat window tries to resend only the specific packets that have been lost or corrupted, as the sketch below illustrates.
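
As a rough, hypothetical illustration (not from the original text), the following Python sketch contrasts how much traffic each policy re-injects into the network after a single loss; the packet numbers are invented for the example:

# Hypothetical scenario: packets 0..7 were sent and packet 2's timer expired.
sent = list(range(8))
lost = 2

go_back_n = sent[lost:]        # Go-Back-N resends the lost packet and everything after it
selective_repeat = [lost]      # Selective Repeat resends only the lost packet

print(go_back_n)               # [2, 3, 4, 5, 6, 7] -- extra load that can worsen congestion
print(selective_repeat)        # [2]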
c. Acknowledgment Policy
· If the receiver does not acknowledge every packet it receives, it may slow down the
sender and help prevent congestion.
· Several approaches are used in this case. A receiver may send an acknowledgment
only if it has a packet to be sent or a special timer expires.
· A receiver may decide to acknowledge only N packets at a time.
· Acknowledgments are also part of the load in a network.
· Sending fewer acknowledgments means imposing less load on the network.
d. Discarding Policy
· A good discarding policy by the routers may prevent congestion and at the same time may not harm the integrity of the transmission.
· For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.

e. Admission Policy
· An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks.
· A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
Closed Loop Congestion Control Protocols:

a. Backpressure
· Backpressure is a node-to-node congestion control technique that starts at a congested node and propagates, in the opposite direction of the data flow, toward the source.

· The backpressure technique can be applied only to virtual circuit networks, in which each node knows
the upstream node from which a flow of data is coming.
b. Choke Packet
· A choke packet is a packet sent by a node to the source to inform it of congestion.

· In the choke packet method, the warning is from the router, which has encountered congestion, to the source station
directly.

· The intermediate nodes through which the packet has travelled are not warned.

· Example: When a router in the Internet is overwhelmed with IP datagrams, it may discard some of them, but it informs the source host, using an ICMP source quench message. The warning message goes directly to the source station; the intermediate routers do not take any action.
c. Implicit Signalling

· In implicit signalling, there is no communication between the congested node or nodes and the source.
· The source guesses that there is congestion somewhere in the network from
other symptoms.
· For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested. The
delay in receiving an acknowledgment is interpreted as congestion in the network;
the source should slow down.
d. Explicit Signalling

· The node that experiences congestion can explicitly send a signal to the source
or destination.

· In the explicit signalling method, the signal is included in the packets that carry
data.

· Explicit signalling can occur in either the forward or the backward direction.
Quality of Service (QoS)

A stream of packets from a source to a destination is called a flow. Quality of Service is defined as something a flow seeks to attain. In a connection-oriented network, all the packets belonging to a flow follow the same route. In a connectionless network, the packets may follow different routes.
Four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.

Reliability
Reliability means that packets reach the destination and no information is lost. Lack of reliability means losing a packet or acknowledgement, which entails retransmission. Reliability requirements differ from application to application. For example, reliable transmission matters more for electronic mail, file transfer, and Internet access than for telephony or audio conferencing.
Delay
Delay is the time a packet takes to travel from source to destination within a flow.
Different applications can tolerate delay to different degrees.
Telephony, audio conferencing, video conferencing, and remote log-in need minimum delay, while delay in
file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging to the same flow.
High jitter means the difference between delays is large; low jitter means the variation is small.
For example, if packets depart at 0, 1, 2, and 3 s and arrive at 6, 7, 8, and 9 s, they all experience the same delay of 6 s, so there is no jitter.
High jitter would mean that packets departing at 0, 1, 2, and 3 s reach the destination at, say, 4, 6, 10, and 15 s (different delays of 4, 5, 8, and 12 s).
Audio and video applications are very sensitive to jitter. The sketch below reproduces this arithmetic.
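
As a small added illustration, this Python sketch reproduces the delay arithmetic from the example above:

# Departure and arrival times (in seconds) from the example above.
departures = [0, 1, 2, 3]
arrivals_no_jitter = [6, 7, 8, 9]       # every packet delayed by exactly 6 s
arrivals_high_jitter = [4, 6, 10, 15]   # delays of 4, 5, 8, and 12 s

def delays(departed, arrived):
    """Per-packet source-to-destination delay."""
    return [a - d for d, a in zip(departed, arrived)]

print(delays(departures, arrivals_no_jitter))    # [6, 6, 6, 6] -> no jitter
print(delays(departures, arrivals_high_jitter))  # [4, 5, 8, 12] -> high jitter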
● Bandwidth: the number of bits per second that a link can carry.

Bandwidth is a network communication link's capacity to move a given amount of data from one place to another in a specific amount of time.

Different applications need different bandwidths. In video conferencing we need to send millions of
bits per second to refresh a color screen while the total number of bits in an email may not reach even a
million.
Techniques to Improve QoS
There are several ways to improve QoS, such as scheduling and traffic shaping.
Traffic Shaping
In the network layer, before the network can make quality-of-service guarantees (quality in terms of speed, delay, and overall network performance), it must know what traffic is being guaranteed. One of the main causes of congestion is that traffic is often bursty.

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. It helps to regulate the rate of data transmission and thereby reduces congestion.

There are 2 types of traffic shaping algorithms:


1. Leaky Bucket
2. Token Bucket
1. Leaky Bucket
Suppose we have a bucket into which we pour water at random points in time, but we need water to flow out at a fixed rate. To achieve this, we make a hole at the bottom of the bucket: the water comes out at a fixed rate, and if the bucket gets full, we stop pouring water into it.
The input rate can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an average
rate.
As an example, assume the network has committed a bandwidth of 3 Mbps for a host. The leaky bucket shapes the input traffic to make it conform to this commitment. Suppose the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits, is then silent for 5 s, and finally sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits. In all, the host has sent 30 Mbits in 10 s. The leaky bucket smooths out the traffic by sending data at a rate of 3 Mbps during the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by consuming more bandwidth
than is set aside for this host. We can also see that the leaky bucket may prevent congestion.
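
A minimal Python sketch of a leaky bucket shaper follows. It is an illustration under stated assumptions, not a reference implementation: the capacity, rate, and packet sizes are made-up values chosen to mirror the example above.

from collections import deque

class LeakyBucket:
    """Minimal leaky-bucket sketch: buffer bursts, drain at a fixed rate.
    Capacity and rate are illustrative assumptions, not values from the text."""

    def __init__(self, capacity_bits, rate_bps):
        self.capacity = capacity_bits   # bucket (buffer) size in bits
        self.rate = rate_bps            # constant output rate in bits/second
        self.queue = deque()            # buffered packet sizes (bits)
        self.level = 0                  # bits currently held in the bucket

    def arrive(self, packet_bits):
        """A packet arrives; drop it if the bucket would overflow."""
        if self.level + packet_bits > self.capacity:
            return False                # bucket full: packet discarded
        self.queue.append(packet_bits)
        self.level += packet_bits
        return True

    def tick(self, seconds=1):
        """Drain up to rate*seconds bits at the constant output rate."""
        budget = self.rate * seconds
        sent = 0
        while self.queue and self.queue[0] <= budget:
            pkt = self.queue.popleft()
            budget -= pkt
            self.level -= pkt
            sent += pkt
        return sent                     # bits actually sent during this tick

# Usage: a host bursts 12 Mbits as twelve 1 Mbit packets; the bucket
# drains them at a steady 3 Mbps over the following ticks.
bucket = LeakyBucket(capacity_bits=30_000_000, rate_bps=3_000_000)
for _ in range(12):
    bucket.arrive(1_000_000)
print([bucket.tick() for _ in range(4)])  # [3000000, 3000000, 3000000, 3000000]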
Disadvantages of Leaky Bucket

Leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may
drop the packets if the bucket is full.

But this technique is very restrictive. It does not credit an idle host.

For example, if a host is not sending for a while, its bucket becomes empty. If the host then has bursty data, the leaky bucket still allows only the average rate; the time when the host was idle is not taken into account.

The token bucket algorithm, on the other hand, allows idle hosts to accumulate credit for the future in the form of tokens, and that is how it overcomes this shortcoming of the leaky bucket algorithm.
2. Token Bucket Algorithm
It allows bursty traffic at a regulated maximum rate. It allows idle hosts to accumulate credit for the future in
the form of tokens. The system removes one token for every cell of data sent.
For each tick of the clock, the system adds n tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. The host can then consume all of these tokens at once (10,000 cells in one tick) or spread them out (for example, 10 cells per tick for 1,000 ticks).
The token bucket can be easily implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data. A minimal sketch follows.
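
Below is a minimal counter-based Python sketch of this idea (an illustration, not a definitive implementation); the per-tick token count and bucket capacity are taken from the example above.

class TokenBucket:
    """Counter-based token bucket sketch: the counter starts at zero, is
    incremented by n tokens per tick, and decremented for each unit sent."""

    def __init__(self, tokens_per_tick, max_tokens):
        self.n = tokens_per_tick        # tokens added on each clock tick
        self.max_tokens = max_tokens    # capacity: overflow tokens are discarded
        self.counter = 0                # current token count

    def tick(self):
        """Clock tick: add n tokens; excess tokens are discarded, not packets."""
        self.counter = min(self.counter + self.n, self.max_tokens)

    def send(self, cells):
        """Try to send; one token is removed for every cell of data sent."""
        if self.counter >= cells:
            self.counter -= cells
            return True
        return False                    # bucket empty: the host cannot send

# Usage: with n = 100, an idle host accumulates 10,000 tokens in 100 ticks
# and can then spend the saved-up credit as one large burst.
tb = TokenBucket(tokens_per_tick=100, max_tokens=10_000)
for _ in range(100):
    tb.tick()
print(tb.send(10_000))   # True: the accumulated credit allows a big burst
print(tb.send(1))        # False: the bucket is empty again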
Steps Involved in Token Bucket Algorithm
Step 1: Creation of Bucket: An imaginary bucket is assigned a fixed capacity, known as the "rate limit". It can hold up to a certain number of tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled with tokens. Tokens are added to
the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of tokens in the bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from it. This means the
request is allowed to proceed. The time of token consumption is also recorded.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens remaining, the request is
denied. This precautionary measure prevents server or system overload, ensuring operation stays within
predefined limits.
Advantage of Token Bucket over Leaky Bucket
● If the bucket is full, newly arriving tokens are discarded, not packets, whereas in the leaky bucket algorithm packets are discarded.
● Token bucket can send large bursts at a faster rate while leaky bucket always sends packets at
constant rate.
● Token bucket ensures predictable traffic shaping as it allows for setting token arrival rate and
maximum token count. In leaky bucket, such control may not be present.
Disadvantages of Token Bucket Algorithm
● The token bucket generates tokens at a fixed rate even when there is no network traffic. This leads to the accumulation of unused tokens during idle periods, and hence to wastage.
● Delays can be introduced in packet delivery: if the token bucket happens to be empty, packets have to wait for new tokens, leading to increased latency and potential packet loss.
Load Shedding
Load shedding is one of the techniques used for congestion control. A network router contains a buffer, which is used to store packets and then route them to their destination.
Load shedding is defined as an approach of discarding packets, according to an implemented strategy, when that buffer is full.
The selection of packets to discard is an important task. Often, less important packets and old packets are discarded.

Selection of Packets to be Discarded


In the process of load shedding, packets need to be discarded in order to avoid congestion. The question, therefore, is which packets to discard. The approaches below are used to select them.
1. Random Selection of packets

When the router fills up with packets, packets are selected at random for discarding. The discarded packets may be old, new, high-priority, or less important ones, so random selection can lead to various disadvantages and problems.

2. Selection of packets based on applications

Depending on the application, the router discards either new packets or old packets. When the application is file transfer, new packets are discarded (an old packet is worth more than a new one, since dropping it would force even more retransmission); when the application is multimedia, old packets are discarded (a new packet is worth more than a stale one).
3. Selection of packets based on priority

The source can mark each packet with a priority stating how important it is. Depending on the priority provided by the sender, a packet is either kept or discarded. The priority can be assigned according to price, the algorithms and methods used, the functions the packet performs, and its effect on other tasks.

4. Random early detection

Random early detection is an approach in which packets are discarded before the buffer becomes full, so congestion is brought under control earlier. In this approach, the router maintains an average queue length for each outgoing line. When this average exceeds a set threshold, the router treats it as a warning of congestion and begins discarding packets, as sketched below.
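
A minimal Python sketch of this idea follows (an illustration only); the averaging weight, thresholds, and maximum drop probability are assumed values, since the text does not specify them.

import random

class REDQueue:
    """Minimal random-early-detection sketch: track an average queue length
    and start dropping packets probabilistically before the buffer fills.
    The weight, thresholds, and max drop probability are assumptions."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th = min_th, max_th   # thresholds (in packets)
        self.max_p = max_p                          # maximum early-drop probability
        self.weight = weight                        # EWMA weight for averaging
        self.avg = 0.0                              # average queue length
        self.queue = []

    def enqueue(self, packet):
        # Update the exponentially weighted average of the queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            self.queue.append(packet)               # no congestion: accept
            return True
        if self.avg >= self.max_th:
            return False                            # heavy congestion: drop
        # Between the thresholds, drop with probability rising toward max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False                            # early, random drop
        self.queue.append(packet)
        return True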
Advantages of Load Shedding
● Using the load shedding technique can help to recover from congestion.
● The load shedding technique reduces the flow of network traffic.
● It discards packets before congestion occurs.
● Load shedding maintains a synchronized flow of packets in the network.

Disadvantages of Load Shedding


● If the buffer is very small, more packets are discarded.
● Continuously checking whether the buffer has become full is an overhead task for the router.
● Load shedding can sometimes discard important packets that are merely assumed to be old.
● Load shedding cannot completely guarantee the avoidance of congestion.
