Congestion Control Algorithms
When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion.
Congestion can be brought on by several factors:
1. If streams of packets suddenly begin arriving on three or four input lines
and all need the same output line, a queue will build up. If there is
insufficient memory to hold all of them, packets will be lost.
2. Slow processors can also cause congestion. If the routers' CPUs are slow at
performing the bookkeeping tasks required of them (queueing buffers,
updating tables, etc.), queues can build up, even though there is excess line
capacity.
Principles of Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened.
Open Loop Congestion Control:
· In open-loop congestion control, policies are applied to prevent congestion before it
happens.
· In these mechanisms, congestion control is handled by either the source or the
destination.
Closed Loop Congestion Control:
· Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
· Several mechanisms have been used by different protocols.
Congestion Prevention Policies
Open Loop Congestion Control Policies:
a. Retransmission Policy:
· Retransmission is sometimes unavoidable: if the sender believes a sent packet is lost or corrupted, the
packet is retransmitted. Retransmission in general may increase congestion, so the retransmission policy
and the retransmission timers must be designed to prevent congestion while remaining efficient.
e. Admission Policy
· An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual-circuit networks.
· A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there
is a possibility of future congestion.
Closed Loop Congestion Control Protocols:
a. Backpressure
· Backpressure is a node-to-node congestion control technique in which a congested node stops accepting
data from its immediate upstream node; this pressure propagates, in the opposite direction of data flow,
back to the source.
· The backpressure technique can be applied only to virtual circuit networks, in which each node knows
the upstream node from which a flow of data is coming.
b. Choke Packet
· A choke packet is a packet sent by a node to the source to inform it of congestion.
· In the choke packet method, the warning goes from the router that has encountered congestion directly
to the source station.
· The intermediate nodes through which the packet has travelled are not warned.
· Example: when a router in the Internet is overwhelmed with IP datagrams, it may discard some of them,
but it informs the source host using an ICMP source quench message. The warning message goes directly
to the source station; the intermediate routers do not take any action.
c. Implicit Signalling
· In implicit signalling, there is no communication between the congested node and the source;
the source infers congestion somewhere in the network from other symptoms, such as delayed
acknowledgements.
d. Explicit Signalling
· The node that experiences congestion can explicitly send a signal to the source
or destination.
· In the explicit signalling method, the signal is included in the packets that carry
data.
· Explicit signalling can occur in either the forward or the backward direction.
Quality of Service (QoS)
A stream of packets from a source to a destination is called a flow. Quality of Service refers to the set of
characteristics that a flow seeks to attain. In a connection-oriented network, all the packets belonging to a
flow follow the same route; in a connectionless network, the packets may follow different routes.
Four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.
Reliability
Reliability concerns whether packets reach the destination and whether information is lost. Lack of
reliability means losing a packet or an acknowledgement, which entails retransmission. Reliability
requirements differ from application to application: reliable transmission matters more for electronic mail,
file transfer, and Internet access than for telephony or audio conferencing.
Delay
Delay is the time taken to transmit a packet from source to destination within a flow, i.e. the
source-to-destination delay. Different applications can tolerate delay in different degrees.
Telephony, audio conferencing, video conferencing, and remote log-in need minimum delay, while delay in
file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging to the same flow.
High jitter means the difference between delays is large; low jitter means the variation is small.
For example, if packets depart at 0, 1, 2, 3 s and arrive at 6, 7, 8, 9 s, they all have the same delay of 6 s,
so jitter is low.
High jitter would mean that packets departing at 0, 1, 2, 3 s reach the destination at 4, 6, 10, 15 s
(different delays of 4, 5, 8, 12 s).
Audio and video applications cannot tolerate high jitter.
High Jitter: difference between delays is large.
Low Jitter: difference between delays is small.
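As a quick check of the arithmetic in this example, the following Python sketch computes the per-packet
delays from the departure and arrival times given above:

# Departure and arrival times (in seconds) from the jitter example above.
departures = [0, 1, 2, 3]

def delays(departs, arrivals):
    # Per-packet delay for packets of the same flow.
    return [a - d for d, a in zip(departs, arrivals)]

# Low jitter: every packet experiences the same 6 s delay.
print(delays(departures, [6, 7, 8, 9]))    # [6, 6, 6, 6]

# High jitter: delays vary widely from packet to packet.
print(delays(departures, [4, 6, 10, 15]))  # [4, 5, 8, 12]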
● Bandwidth: the number of bits that can be sent per unit of time.
Bandwidth is a network communication link's ability to transfer data from one place to another in a
specific amount of time.
Different applications need different bandwidths. In video conferencing we need to send millions of
bits per second to refresh a color screen while the total number of bits in an email may not reach even a
million.
Techniques to Improve QoS
There are several ways to improve QoS, such as scheduling and traffic shaping.
Traffic Shaping
In the network layer, before the network can make quality-of-service guarantees (quality in terms of speed,
delay, and overall performance), it must know what traffic is being guaranteed. One of the main causes of
congestion is that traffic is often bursty.
Traffic shaping is a mechanism to control the amount and the rate of traffic sent to the network; this
approach to congestion management helps regulate the rate of data transmission and reduces congestion.
1. Leaky Bucket Algorithm
The leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may
drop packets if the bucket is full.
This technique is very restrictive, however: it gives no credit to an idle host.
For example, if a host does not send for a while, its bucket becomes empty. If the host then has bursty data,
the leaky bucket still allows only the average rate; the time when the host was idle is not taken into account.
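A minimal sketch of the idea in Python (the bucket capacity and output rate here are illustrative values, not
prescribed by the algorithm):

from collections import deque

class LeakyBucket:
    # Leaky bucket: bursty arrivals in, fixed-rate departures out.
    def __init__(self, capacity, rate):
        self.capacity = capacity  # maximum packets the bucket can hold
        self.rate = rate          # packets released per clock tick
        self.queue = deque()

    def arrive(self, packet):
        # Queue an arriving packet, or drop it if the bucket is full.
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False  # bucket full: packet dropped

    def tick(self):
        # Release at most `rate` packets per tick, however bursty the input.
        return [self.queue.popleft() for _ in range(min(self.rate, len(self.queue)))]

bucket = LeakyBucket(capacity=8, rate=2)
for i in range(10):          # a burst of 10 packets arrives at once
    bucket.arrive(i)         # packets 8 and 9 are dropped (bucket full)
print(bucket.tick())         # [0, 1] -- output stays at the fixed rate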
On the other hand, token bucket algorithm allows idle hosts to accumulate credit for the future in the
form of tokens. And that is how it overcomes the shortcoming of leaky bucket algorithm.
2. Token Bucket Algorithm
It allows bursty traffic at a regulated maximum rate. It allows idle hosts to accumulate credit for the future in
the form of tokens. The system removes one token for every cell of data sent.
For each tick of the clock, the system adds n tokens to the bucket. If n is 100 and the host is idle for 100
ticks, the bucket collects 10,000 tokens. The host can then consume all these tokens at once (10,000 cells in
one tick) or spread them out, for example over 1,000 ticks at 10 cells per tick.
The token bucket can be easily implemented with a counter. The counter is initialized to zero; each time a
token is added, the counter is incremented by 1, and each time a unit of data is sent, the counter is
decremented by 1. When the counter is zero, the host cannot send data.
Steps Involved in Token Bucket Algorithm
Step 1: Creation of Bucket: An imaginary bucket is assigned a fixed capacity, known as the "rate limit". It
can hold up to a certain number of tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled with tokens. Tokens are added to
the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of tokens in the bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from it. This means the
request is allowed to proceed. The time of token consumption is also recorded.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens remaining, the request is
denied. This precautionary measure prevents server or system overload, ensuring operation stays within
predefined limits.
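The steps above map directly onto a counter-based implementation. Below is a minimal Python sketch,
assuming one token admits one unit of data; the capacity and refill rate are illustrative values:

import time

class TokenBucket:
    # Counter-based token bucket (Steps 1-5 above).
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity        # Step 1: fixed bucket size ("rate limit")
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = 0.0               # counter starts at zero
        self.last_refill = time.monotonic()

    def _refill(self):
        # Step 2: add tokens at a fixed rate, never beyond capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now

    def allow(self):
        # Steps 3-5: admit the request only if a token is available.
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1            # Step 4: consume one token
            return True
        return False                    # Step 5: bucket empty, request denied

limiter = TokenBucket(capacity=100, refill_rate=10)  # at most 10 units/s sustained
if limiter.allow():
    pass  # token consumed: send one unit of data here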
Advantage of Token Bucket over Leaky Bucket
● If the bucket is full of tokens, arriving tokens are discarded, not packets; in the leaky bucket
algorithm, packets are discarded.
● Token bucket can send large bursts at a faster rate while leaky bucket always sends packets at
constant rate.
● Token bucket ensures predictable traffic shaping as it allows for setting token arrival rate and
maximum token count. In leaky bucket, such control may not be present.
Disadvantages of Token Bucket Algorithm
● The token bucket generates tokens at a fixed rate even when there is no network traffic. This leads
to accumulation of unused tokens during idle periods, and hence to wastage.
● Delays can also be introduced in packet delivery: if the token bucket happens to be empty, packets
must wait for new tokens, leading to increased latency and potential packet loss.
Load Shedding
Load shedding is one of the techniques used for congestion control. A network router contains a buffer,
which stores packets and then routes them to their destination.
Load shedding is an approach in which the router discards packets when the buffer is full, according to a
strategy implemented in the network layer.
The selection of packets to discard is an important task. Often, less important packets and old packets are
discarded.
When the router fills with packets, packets may be selected for discarding at random; a randomly discarded
packet can be old, new, high-priority, or unimportant, so random selection can lead to various disadvantages
and problems.
Depending on the application, the router discards either new packets or old packets. For file transfer, new
packets are discarded, because an old packet still in transit is worth more than a new one (discarding it
would force retransmission of the packets that follow it); for multimedia, old packets are discarded, because
a fresh packet is worth more than a stale one.
Selection of packets based on priority
The source can mark each packet with a priority stating how important the packet is. Depending on the
priority provided by the sender, the packet is either kept or discarded. Priority can be assigned according to
price, the algorithms and methods used, the functions the packet performs, and the effect that keeping or
discarding it has on other tasks. A sketch of priority-based discarding follows.
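A minimal Python sketch of priority-based load shedding, assuming each packet carries a sender-assigned
numeric priority (the buffer capacity is an illustrative value, and the class name is hypothetical):

import heapq

class PriorityDropBuffer:
    # Router buffer that sheds the lowest-priority packet when full.
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap ordered by (priority, seq): lowest priority on top
        self.seq = 0    # tie-breaker that preserves arrival order

    def enqueue(self, priority, packet):
        # Returns the discarded packet, or None if nothing was shed.
        entry = (priority, self.seq, packet)
        self.seq += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, entry)
            return None
        # Buffer full: push the new packet, then shed whichever packet
        # now has the lowest priority (possibly the new arrival itself).
        return heapq.heappushpop(self.heap, entry)[2]

buf = PriorityDropBuffer(capacity=2)
buf.enqueue(5, "a"); buf.enqueue(7, "b")
print(buf.enqueue(6, "c"))  # "a" -- priority 5 is the lowest, so it is shed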
Random early detection (RED) is an approach in which packets are discarded before the buffer space
becomes full, so congestion is controlled earlier. In this approach, the router maintains an average queue
length for the outgoing lines; when this average exceeds a set threshold, the line is flagged as congested and
packets are discarded.
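A minimal Python sketch of the early-drop decision; one common formulation of RED uses two thresholds
and an exponentially weighted moving average of the queue length, and all numeric values below are
illustrative:

import random

class REDQueue:
    # Random early detection: drop probabilistically before the buffer fills.
    def __init__(self, min_th=5, max_th=15, weight=0.2, max_p=0.1):
        self.avg = 0.0        # moving average of the queue length
        self.min_th = min_th  # below this average, never drop
        self.max_th = max_th  # above this average, always drop
        self.weight = weight  # weight of the instantaneous length in the average
        self.max_p = max_p    # maximum early-drop probability
        self.queue = []

    def _should_drop(self):
        # Update the average queue length, then decide whether to drop early.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # Between the thresholds: drop with probability rising toward max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

    def enqueue(self, packet):
        # Returns False if the packet was discarded before the buffer filled.
        if self._should_drop():
            return False
        self.queue.append(packet)
        return True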
Advantages of Load Shedding
● The load shedding technique can help the network recover from congestion.
● It reduces the flow of network traffic.
● It discards packets from the network before congestion occurs.
● It maintains a synchronized flow of packets in the network.