R22 CCN - Unit 4

The document provides an overview of the transport layer in computer communication networks, detailing its functions such as end-to-end connectivity, error recovery, and connection management. It discusses two main protocols: UDP, which is connectionless and unreliable, and TCP, which is connection-oriented and ensures reliable delivery through mechanisms like flow control and error checking. Additionally, it covers topics like port addressing, multiplexing, and the importance of quality of service (QoS).


COMPUTER COMMUNICATION

NETWORKS

UNIT - IV

TRANSPORT LAYER
CONTENTS
 Overview of Transport layer
 UDP
 Reliable byte stream (TCP)
 Connection management
 Flow control
 Retransmission
 TCP Congestion control
 Congestion avoidance
 Quality of Service (QoS)
 QoS Techniques
TRANSPORT LAYER
 Relies on Network layer and serves the Application layer
 End-to-End connectivity
 Port addressing
 Segmentation and reassembly
 Connection control
 Error recovery
 E.g., UDP, SCTP and TCP
TRANSPORT LAYER DUTIES
 Provides logical communication between application processes running on different
hosts.
 Packetizing
 Sender side: breaks application messages into segments and passes them to the
network layer.
 The transport layer at the receiving host delivers data to the receiving process.
 Connection control
 Connection-oriented
 Connectionless
 Reliability
 Flow control
 Error control
 Addressing
 Port numbers to identify which network application
PROCESS-TO-PROCESS DELIVERY
 Client-Server Paradigm
 Addressing
 Multiplexing and Demultiplexing
 Connectionless/Connection-Oriented
 Reliable/Unreliable
TYPES OF DATA DELIVERIES
PORT NUMBERS
IANA RANGES
 Internet Assigned Number Authority (IANA) has divided the port numbers
into three ranges:
 Well Known – controlled and assigned by IANA and are given to servers.
 Registered – not assigned or controlled by IANA and can only be
registered. Prevents duplication.
 Dynamic – neither controlled nor registered by IANA and can be used by
any process.
IP ADDRESSES VS PORT NUMBERS
SOCKET ADDRESS
 Process-to-process delivery needs two identifiers, an IP address and a port
number; this combination is called a socket address.
MULTIPLEXING & DEMULTIPLEXING
 Multiplexing
 Sender side: there may be several processes that need to send packet.
 Many-to-one relationship: multiplexing
 Accepts messages from different processes
 Differentiates messages by their port numbers
 Adds header to each message and passes packet to network layer
MULTIPLEXING & DEMULTIPLEXING
 Demultiplexing
 Receiver side: there may be several processes that can receive user
datagrams.
 One-to-many relationship: demultiplexing
 Receives user datagram from network layer
 Checks errors in user datagram and drops the header
 Delivers the message to the appropriate process based on the port number
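The port-based demultiplexing described above can be sketched as a toy dispatcher. This is an illustrative model, not a real UDP implementation; the class and method names are invented for the example:

```python
from queue import SimpleQueue

class Demultiplexer:
    """Toy transport-layer demultiplexer: one incoming queue per bound port."""
    def __init__(self):
        self.queues = {}                      # port number -> incoming queue

    def bind(self, port: int) -> SimpleQueue:
        """A process 'opens' a port and receives its incoming queue."""
        return self.queues.setdefault(port, SimpleQueue())

    def deliver(self, dst_port: int, message: bytes) -> bool:
        """Route a received message to the queue for its destination port.
        Returns False (datagram dropped) if no process is bound to the port."""
        q = self.queues.get(dst_port)
        if q is None:
            return False                      # real UDP: ICMP port unreachable
        q.put(message)
        return True
```

A process that binds port 53 would then receive exactly the messages addressed to port 53, while messages for unbound ports are dropped.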
CONNECTIONLESS VS CONNECTION ORIENTED
 Connectionless – Packets are sent without the need for connection
establishment or connection release. Packets are not numbered, may be
delayed or lost, or may arrive out of order. There is no acknowledgement either.
Ex.: User Datagram Protocol (UDP)
 Connection-Oriented – A connection is established, after which data
transfer takes place, and then the connection is released.
Ex.: Transmission Control Protocol (TCP), Stream Control Transmission
Protocol (SCTP)
CONNECTION ESTABLISHMENT
CONNECTION TERMINATION
RELIABLE VS UNRELIABLE
 Reliable transport protocol – implements flow and error control, and is hence
a slower, more complex service. A reliable protocol such as TCP or SCTP can
be used.
 Unreliable transport protocol – the nature of the service does not demand
flow and error control (real-time applications). An unreliable protocol such
as UDP can be used.
 Reliability at the data link layer is node-to-node, but we need end-to-end
reliability, which is provided at the transport layer.
POSITION OF UDP, TCP, AND SCTP IN
TCP/IP SUITE
USER DATAGRAM PROTOCOL (UDP)
 UDP is a connectionless, unreliable protocol that has no flow and error
control.
 It does not add anything to the services of IP, except to provide process-
to-process communication instead of host-to-host communication.
 It uses port numbers to multiplex data from the application layer. Limited
error checking and overhead.
 The calculation of the checksum and its inclusion in the user datagram are
optional.
 UDP is a convenient transport-layer protocol for applications that provide
their own flow and error control. It is also used by multimedia applications.
WHY WOULD ANYONE USE UDP?
 Finer control over what data is sent and when
 As soon as an application process writes into the socket
 … UDP will package the data and send the packet
 No delay for connection establishment
 UDP just blasts away without any formal preliminaries
 … which avoids introducing any unnecessary delays
 No connection state
 No allocation of buffers, parameters, sequence numbers, etc.
 … making it easier to handle many active clients at once
 Small packet header overhead
 The UDP header is only eight bytes long
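These points can be seen concretely with Python's socket API: a minimal loopback exchange, with no connection establishment of any kind before `sendto()` transmits. (This is a demo sketch; over a real network the datagram could be lost, since UDP gives no delivery guarantee.)

```python
import socket

# Receiver: bind an ephemeral UDP port on loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
port = rx.getsockname()[1]
rx.settimeout(5)

# Sender: no handshake needed -- sendto() transmits immediately.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("127.0.0.1", port))

# One recvfrom() returns one whole datagram: message boundaries are kept,
# unlike TCP's byte stream.
data, addr = rx.recvfrom(2048)
tx.close()
rx.close()
```

Note also that neither side keeps any connection state: the sender only needs the destination address at the moment of each `sendto()`.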
Well-known ports used by UDP
Port Protocol Description
7 Echo Echoes a received datagram back to the sender
9 Discard Discards any datagram that is received
11 Users Active users
13 Daytime Returns the date and the time
17 Quote Returns a quote of the day
19 Chargen Returns a string of characters
53 Nameserver Domain Name Service
67 BOOTPs Server port to download bootstrap information
68 BOOTPc Client port to download bootstrap information
69 TFTP Trivial File Transfer Protocol
111 RPC Remote Procedure Call
123 NTP Network Time Protocol
161 SNMP Simple Network Management Protocol
162 SNMP Simple Network Management Protocol (trap)
USER DATAGRAM FORMAT
 UDP packets (user datagrams) have a fixed header size of 8 bytes.
 Source/destination port numbers – used by the processes running on the
two systems; the client port can be temporary, assigned by the transport-
layer software, while the server port is well known.
 Length – defines the total length, i.e., header plus data. Since the UDP
packet is encapsulated in an IP packet, which has its own total-length
field, this field is strictly redundant.
 Checksum – detects errors over the entire user datagram. Its inclusion is
optional.
CHECKSUM
 Checksum calculation
 Checksum: an optional field
 All 0s: indicates the checksum has not been computed
 All 1s (negative zero): the computed checksum is zero
 UDP pseudo-header*
 Source and destination IP addresses
 PROTO: ensures the packet belongs to UDP, not TCP; UDP = 17
 UDP length: length of the UDP datagram, not including the pseudo-header
 * The pseudo-header is not transmitted with the UDP datagram
PSEUDOHEADER FOR CHECKSUM CALCULATION
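The checksum procedure above (prepend the pseudo-header, take the one's complement sum of 16-bit words, complement the result, and send a computed zero as all 1s) can be sketched in Python. The function names are illustrative:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words with end-around carry (one's complement arithmetic)."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data with a zero byte
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over pseudo-header + UDP segment (checksum field zeroed).
    Pseudo-header: src IP, dst IP, zero byte, protocol 17, UDP length."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    total = ones_complement_sum(pseudo + udp_segment)
    checksum = ~total & 0xFFFF
    return checksum or 0xFFFF                # computed zero is sent as all 1s
```

A receiver verifies by summing the pseudo-header plus the segment with the checksum included: the one's complement sum must come out to all 1s (0xFFFF).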
UDP OPERATION
 Connectionless Service
 Each datagram sent by UDP is an independent datagram
 UDP cannot chop a stream of data into different related user datagrams
 Each request must be small enough to fit into one user datagram
 Only processes sending short messages should use UDP
 Flow and Error Control
 No flow control
 No error control except for the checksum
+ The sender does not know whether a message has been lost
+ When the receiver detects an error through the checksum, it discards the
datagram silently
 The process using UDP should provide flow and error control by itself
UDP OPERATION (CONTD.)
 Encapsulation & Decapsulation
 To send a message from one process to another, UDP encapsulates and
decapsulates messages in IP datagrams.
UDP OPERATION (CONTD.)
 Queuing
 Queues are opened for server/client processes

 2 queues for each process

 Incoming queue: receive messages

 Outgoing queue: send messages

 The queues function as long as the process is running

 The queues are destroyed when the process terminates


UDP OPERATION (CONTD.)
 Queues on the client side
 The client process requests a port number from the operating system.
 The process opens incoming and outgoing queues with the requested
port number.
 Queues on the server side
 The server asks for incoming and outgoing queues using its well-known
port number.
 Outgoing queue overflow
 The operating system asks the server/client to wait before sending any
more messages.
 Incoming queue overflow
 UDP discards the datagram and asks the ICMP protocol to send a port
unreachable message to the sender.
 No incoming queue for the port number specified in the arriving datagram
 UDP likewise discards the datagram and has ICMP send a port
unreachable message.
USES OF UDP
 UDP is suitable for a process that requires simple request-response
communication with little concern for flow and error control.
 UDP is suitable for a process with internal flow and error control
mechanisms. Ex: TFTP
 UDP is suitable for multicasting.
 UDP is used for management processes such as SNMP.
 UDP is used for route updating protocols such as RIP.
TCP - TRANSMISSION CONTROL PROTOCOL
 Port Numbers
 Services
 Sequence Numbers
 Segments
 Connection
 Transition Diagram
 Flow and Error Control
 Silly Window Syndrome
TCP - TRANSMISSION CONTROL PROTOCOL
 Connection oriented
 Explicit set-up of virtual path and tear-down
 Stream-of-bytes service
 Sends and receives a stream of bytes, not messages
 Reliable, in-order delivery
 Checksums to detect corrupted data
 Acknowledgments & retransmissions for reliable delivery
 Sequence numbers to detect losses and reorder data
 Flow control
 Prevent overflow of the receiver’s buffer space
 Congestion control
 Adapt to network congestion for the greater good
TCP SUPPORT FOR RELIABLE DELIVERY
 Checksum
 Used to detect corrupted data at the receiver
 …leading the receiver to drop the packet
 Sequence numbers
 Used to detect missing data
 ... and for putting the data back in order
 Retransmission
 Sender retransmits lost or corrupted data
 Timeout based on estimates of round-trip time
 Fast retransmit algorithm for rapid retransmission
Well-known ports used by TCP
Port Protocol Description
7 Echo Echoes a received datagram back to the sender
9 Discard Discards any datagram that is received
11 Users Active users
13 Daytime Returns the date and the time
17 Quote Returns a quote of the day
19 Chargen Returns a string of characters
20 FTP, Data File Transfer Protocol (data connection)
21 FTP, Control File Transfer Protocol (control connection)
23 TELNET Terminal Network
25 SMTP Simple Mail Transfer Protocol
53 DNS Domain Name Server
67 BOOTP Bootstrap Protocol
79 Finger Finger
80 HTTP Hypertext Transfer Protocol
111 RPC Remote Procedure Call
STREAM DELIVERY
SENDING AND RECEIVING BUFFERS
TCP SEGMENTS
EXAMPLE
 Imagine a TCP connection is transferring a file of 6000 bytes. The first byte
is numbered 10010. What are the sequence numbers for each segment if
data are sent in five segments with the first four segments carrying 1000
bytes and the last segment carrying 2000 bytes?

 Solution
 The following shows the sequence number for each segment:
Segment 1 ==> Sequence number: 10,010 (range: 10,010 to 11,009)
Segment 2 ==> Sequence number: 11,010 (range: 11,010 to 12,009)
Segment 3 ==> Sequence number: 12,010 (range: 12,010 to 13,009)
Segment 4 ==> Sequence number: 13,010 (range: 13,010 to 14,009)
Segment 5 ==> Sequence number: 14,010 (range: 14,010 to 16,009)
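The arithmetic in this example is easy to check programmatically; each segment's first sequence number is the previous segment's last number plus one:

```python
def segment_ranges(first_byte: int, sizes: list[int]) -> list[tuple[int, int]]:
    """Return (first_seq, last_seq) for each segment of a TCP byte stream,
    given the sequence number of the first byte and the segment sizes."""
    ranges = []
    seq = first_byte
    for size in sizes:
        ranges.append((seq, seq + size - 1))   # last byte of this segment
        seq += size                            # next segment starts here
    return ranges
```

Running it on the example (`segment_ranges(10010, [1000]*4 + [2000])`) reproduces the five ranges listed above.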
TCP: FEATURES CONSIDERED
 The bytes of data being transferred in each connection are numbered by
TCP. The numbering starts with a randomly generated number.
 The value in the sequence number field of a segment defines the number of
the first data byte contained in that segment.
 The value of the acknowledgment field in a segment defines the number of
the next byte a party expects to receive.
 The acknowledgment number is cumulative.

TCP SEGMENT FORMAT
CONTROL FIELD

Flag Description
URG The value of the urgent pointer field is valid.
ACK The value of the acknowledgment field is valid.
PSH Push the data.
RST The connection must be reset.
SYN Synchronize sequence numbers during connection.
FIN Terminate the connection.
THREE-STEP CONNECTION ESTABLISHMENT
 Three-way handshake to establish a connection between hosts A and B:
 Host A sends a SYN (open) to host B
 Host B returns a SYN and acknowledgment (SYN + ACK)
 Host A sends an ACK to acknowledge the SYN + ACK
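In the sockets API the handshake is performed by the operating system: a passive open (`listen`/`accept`) puts host B in the LISTEN state, and the client's `connect()` triggers the SYN, SYN+ACK, ACK exchange before any application data flows. A minimal loopback sketch:

```python
import socket
import threading

# Passive open: the server enters the LISTEN state.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()       # handshake complete: connection ESTABLISHED
    conn.sendall(b"welcome")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Active open: connect() sends SYN, receives SYN+ACK, sends ACK.
cli = socket.create_connection(("127.0.0.1", port), timeout=5)
data = cli.recv(64)
cli.close()
t.join()
srv.close()
```

Only after `connect()` returns (handshake done) can either side send or receive data on the connection.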
THREE-STEP CONNECTION TERMINATION
FOUR-STEP CONNECTION TERMINATION
Half Close Type of Connection Termination
STATES FOR TCP

State Description
CLOSED There is no connection.
LISTEN The server is waiting for calls from the client.
SYN-SENT A connection request is sent; waiting for acknowledgment.
SYN-RCVD A connection request is received.
ESTABLISHED Connection is established.
FIN-WAIT-1 The application has requested the closing of the connection.
FIN-WAIT-2 The other side has accepted the closing of the connection.
TIME-WAIT Waiting for retransmitted segments to die.
CLOSE-WAIT The server is waiting for the application to close.
LAST-ACK The server is waiting for the last acknowledgment.
STATE TRANSITION DIAGRAM
SLIDING WINDOW
 The sliding window protocol used by TCP is something between the
Go-Back-N and Selective Repeat sliding window protocols.
 A sliding window is used to make transmission more efficient as well as
to control the flow of data so that the destination does not become
overwhelmed with data. TCP’s sliding windows are byte-oriented and of
variable size.
SENDER BUFFER AND SENDER WINDOW
 Sender Buffer

 Sender Window
SLIDING THE SENDER WINDOW
EXPANDING & SHRINKING THE SENDER WINDOW
 Expanding the sender window

 Shrinking the sender window


RECEIVER WINDOW
In TCP, the sender window size is totally
controlled by the receiver window value
(the number of empty locations in the
receiver buffer). However, the actual
window size can be smaller if there is
congestion in the network.
SOME POINTS ABOUT TCP SLIDING WINDOWS
 The size of the window is the lesser of rwnd and cwnd.
 The source does not have to send a full window’s worth of data.
 The window can be opened or closed by the receiver, but should not be shrunk.
 The size of the window can be increased or decreased by the destination.
 The destination can send an acknowledgment at any time.
 The receiver can temporarily shut down the window.
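These points can be captured in a small byte-oriented model: the usable window is min(rwnd, cwnd) minus the bytes already in flight, and each ACK both frees in-flight bytes and carries a new rwnd that may open or close the window. (A simplified sketch; all names are illustrative, and real TCP tracks sequence-number boundaries rather than a single counter.)

```python
class SenderWindow:
    """Byte-oriented TCP-style sender window: usable = min(rwnd, cwnd) - in_flight."""
    def __init__(self, cwnd: int):
        self.cwnd = cwnd          # congestion window (bytes)
        self.rwnd = 0             # receiver-advertised window (bytes)
        self.in_flight = 0        # sent but not yet acknowledged

    def usable(self) -> int:
        return max(0, min(self.rwnd, self.cwnd) - self.in_flight)

    def send(self, nbytes: int) -> int:
        """Send at most the usable window; return how many bytes went out."""
        sent = min(nbytes, self.usable())
        self.in_flight += sent
        return sent

    def on_ack(self, acked: int, new_rwnd: int):
        """An ACK frees in-flight bytes and may open or close the window."""
        self.in_flight -= acked
        self.rwnd = new_rwnd
```

For example, with cwnd = 4000 and rwnd = 3000, only 3000 bytes of a 5000-byte write can be sent; the rest must wait for an ACK that slides the window.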
LOST SEGMENT
ERROR CONTROL
 Checksum
 Acknowledgement
 Time-out: Retransmission
 Out-of-Order Segments
SILLY WINDOW SYNDROME
 The silly window syndrome arises in the sliding window operation when the
sending application program creates data slowly, the receiving application
program consumes data slowly, or both.
 If a server with this problem is unable to process all incoming data, it
requests that its clients reduce the amount of data they send at a time.
 If the server continues to be unable to process all incoming data, the
window becomes smaller and smaller, sometimes to the point that the
data transmitted is smaller than the packet header, making data
transmission extremely inefficient.
 This scenario is called the Silly Window Syndrome.
 The name of this problem is due to the window size shrinking to a "silly"
value.
TCP TIMERS
CONGESTION CONTROL
AND
QUALITY OF SERVICE
DATA TRAFFIC
 The main focus of congestion control and quality of service is data traffic.
 In congestion control we try to avoid traffic congestion.
 In quality of service, we try to create an appropriate environment for the
traffic.
TRAFFIC DESCRIPTORS
 Traffic descriptors are qualitative values that represent a data flow.
 Average data rate is the number of bits sent during a period of time,
divided by the length of that period.
 Peak data rate defines the maximum data rate of the traffic.
 Maximum burst size is the maximum length of time the traffic is generated
at the peak rate.
 Effective bandwidth is the bandwidth the network needs to allocate for the
flow of traffic.
TYPES OF TRAFFIC PROFILE
 Constant-Bit-Rate (CBR)/Fixed Rate – In this traffic model, the data rate does
not change. The average data rate and the peak data rate are the same.
 Variable-Bit-Rate (VBR) – In this traffic model, the data flow changes in time.
The average data rate and the peak data rate are different.
 Bursty – In the bursty data category, the data rate changes suddenly in a
very short time. It may jump to zero, or it may remain at a value for a
while. Bursty traffic is one of the main causes of congestion in a network.
CONGESTION
 Congestion occurs when the load on the network is greater than the
network capacity.
 Reasons for congestion
 Retransmissions on noisy media, triggered by reliability mechanisms
such as TCP’s.
 Simply too many concurrent requests (many users).
 “Bursty” applications may make many requests quickly.
 Congestion control uses mechanisms and techniques to control the
congestion and keep the load below the capacity.
QUEUES IN A ROUTER
 Congestion occurs because routers and switches have queues – buffers
that hold the packets before and after processing.
 Congestion occurs when
1. Packet arrival rate > packet processing rate → the input queue grows
2. Packet departure rate < packet processing rate → the output queue grows
NETWORK PERFORMANCE
 Two main factors
 Delay
 Throughput
 Delay is composed of propagation delay and processing delay.
 Delay increases dramatically once the load reaches the network capacity.
 Throughput is the number of packets passing through a network in a unit
of time.
PACKET DELAY AND THROUGHPUT AS
FUNCTIONS OF LOAD
 When the load reaches capacity, more and more packets are delayed. The
source, not receiving the ACKs, retransmits the packets, which makes the
delay and the congestion worse.
 When the load exceeds capacity, some packets are discarded because the
queues in routers are full. These packets need to be retransmitted, and the
effective capacity therefore decreases.
CONGESTION CONTROL
 Congestion control refers to techniques and mechanisms that can either:
 Prevent congestion, before it happens
 Remove congestion, after it has happened
 Congestion control mechanisms fall into two broad categories:
 Open-loop congestion control (prevention)
 Closed-loop congestion control (removal)


CONGESTION CONTROL
OPEN LOOP CONTROL
 Retransmission policy: Retransmission may increase congestion; however, a
good retransmission policy and retransmission timer can prevent congestion.
 Window policy: Selective Repeat is better than Go-Back-N because only the
specific packets that have been lost or corrupted are resent.
 Acknowledgement policy: ACK packets also load the network. A receiver need
not ACK every packet; it can acknowledge N packets at a time.
 Discard policy: Prevent congestion while preserving the integrity of the
transmission, e.g. by discarding less sensitive packets.
 Admission policy: A switch first checks the resource requirements of a flow
before admitting it to the network.
CLOSED-LOOP CONGESTION CONTROL
 Back pressure: A congested node informs the upstream node to reduce the
rate of outgoing packets. This node-to-node congestion control starts at the
congested node and propagates, in the direction opposite to the data flow,
toward the source.
 Choke packet: When a node experiences congestion, it sends a warning
packet directly to the source (the intermediate nodes through which the
packet has travelled are not warned). A choke packet is a packet sent by a
node to the source to inform it of congestion.
CLOSED-LOOP CONGESTION CONTROL
 Implicit signaling: When the source sends several packets and no ACK is
received, or ACKs are received with delay, it assumes that the network is
congested and slows down its sending rate.
 Explicit signaling: A node that experiences congestion sends an explicit
signal to the source or destination. The signal is included in the packets
that carry data, and can travel in the forward or backward direction.
 Backward signaling: A bit can be set in a packet moving in the direction
opposite to the congestion, warning the source to slow down to avoid the
discarding of packets.
 Forward signaling: A bit can be set in a packet moving in the direction of
the congestion, warning the destination about the congestion.
CONGESTION CONTROL IN TCP
 Closed-loop, window-based congestion control.
 The sender window size is determined by the available buffer space in the
receiver.
 TCP assumes that the cause of a lost segment is congestion in the network.
 If the cause of the lost segment is congestion, retransmission of the
segment does not remove the cause – it aggravates it.
 The sender has two pieces of information: the receiver-advertised window
size and the congestion window size.
 TCP congestion window
 Actual window size = minimum (rwnd, cwnd)
(where rwnd = receiver window size, cwnd = congestion window size)
TCP CONGESTION POLICY
 TCP’s general policy for handling congestion is based on three phases:
 Slow Start
 Congestion Avoidance (predict congestion and reduce the rate before
loss occurs)
 Congestion Detection
SLOW START: EXPONENTIAL INCREASE
 In the slow-start algorithm, the size of the congestion window
increases exponentially each time an acknowledgement is received.
 The algorithm starts the congestion window at one transmission unit,
called the MSS or Maximum Segment Size. The MSS is determined during
the connection establishment phase.
 The congestion window size increases by one MSS each time an ACK
is received.
 The window starts slowly but grows exponentially.
 To understand the slow-start phase, assume segment numbers instead of
byte numbers, that the sender window size always equals the congestion
window (i.e., rwnd is large), and that each segment is acknowledged
individually.
SLOW START: EXPONENTIAL INCREASE
 The sender starts with cwnd = 1 MSS, so it sends only one segment.
 The cwnd increases by one MSS every time an ACK is received.
 If ACKs are received with delay, cwnd grows more slowly than powers
of two.
 There is a threshold, ssthresh (commonly initialized to 65,535 bytes),
at which this phase stops.
SLOW START: EXPONENTIAL INCREASE
In the slow-start algorithm, the size of the congestion window
increases exponentially until it reaches a threshold.
CONGESTION AVOIDANCE: ADDITIVE INCREASE
 We must slow down the exponential growth of the slow-start phase to
avoid congestion before it happens.
 When the size of the congestion window reaches the slow-start
threshold, the slow-start phase stops and the additive phase
(congestion avoidance) begins.
 In this algorithm, each time the whole window of segments is
acknowledged (one round), the size of cwnd increases by 1 MSS.
 The cwnd increases additively until congestion is detected.

CONGESTION AVOIDANCE: ADDITIVE INCREASE
CONGESTION DETECTION: MULTIPLICATIVE
DECREASE
 When congestion occurs, cwnd must be decreased.
 The sender usually infers congestion from the need to retransmit a
segment: when a timer expires or when three duplicate ACKs are received.
 On a timeout, there is a strong possibility of congestion, and TCP reacts
strongly:
- It sets the value of the threshold to one-half of the current window size
(multiplicative decrease).
- It resets cwnd to the size of one segment (one MSS).
- It starts the slow start phase again.
 On three duplicate ACKs, there is a weaker possibility of congestion. A
segment may have been dropped, but some later segments arrived safely,
since three duplicate ACKs were received. TCP retransmits the specific
segment (fast retransmission/fast recovery) and reacts more mildly:
- It sets the value of the threshold to one-half of the current window size.
- It sets cwnd to the value of the threshold.
- It starts the congestion avoidance phase.
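The three-phase policy can be traced with a small simulation in MSS units. This is an illustrative model only: the event names, the per-round cwnd doubling, and the initial ssthresh of 16 are assumptions of the sketch, not protocol constants.

```python
def tcp_cwnd_trace(events, ssthresh=16):
    """Trace cwnd (in MSS units) through slow start, congestion avoidance,
    and the two loss reactions.
    events: 'ack'     = one full window of segments acknowledged (one round)
            'timeout' = retransmission timer expired (strong reaction)
            '3dup'    = three duplicate ACKs received (weak reaction)."""
    cwnd = 1
    trace = [cwnd]
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd *= 2                   # slow start: exponential per round
            else:
                cwnd += 1                   # congestion avoidance: additive
        elif ev == "timeout":
            ssthresh = max(cwnd // 2, 1)    # multiplicative decrease
            cwnd = 1                        # restart slow start from 1 MSS
        elif ev == "3dup":
            ssthresh = max(cwnd // 2, 1)    # multiplicative decrease
            cwnd = ssthresh                 # skip slow start: congestion avoidance
        trace.append(cwnd)
    return trace
```

Five ACK rounds give the classic 1, 2, 4, 8, 16 climb, then additive growth past ssthresh; a timeout drops cwnd back to 1, while three duplicate ACKs only halve it.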
TCP CONGESTION POLICY SUMMARY
CONGESTION CONTROL IN FRAME RELAY
 Congestion in a frame relay network decreases throughput and increases
delay.
 High throughput and low delay are the main goals.
 Frame relay allows the transmission of bursty data, and thus requires
congestion control.
 The frame relay protocol uses two bits in the frame to warn the source and
destination of the presence of congestion.
 Congestion avoidance: BECN and FECN.


CONGESTION CONTROL IN FRAME RELAY
 Backward Explicit Congestion Notification (BECN) – warns the sender.
 The switch uses response frames from the receiver, or a predefined
connection, to send special frames; the sender thereby reduces its data rate.
 Forward Explicit Congestion Notification (FECN) – warns the receiver.
 The receiver can delay the acknowledgement, thus forcing the sender to
slow down.
FOUR CASES OF CONGESTION
When two end points are communicating using Frame Relay network, four
situations may occur with regard to congestion
QUALITY OF SERVICE (QOS)
 Quality of service (QoS) is an internetworking issue that has been
discussed more than defined.
 Quality of service can be defined as something a flow seeks to attain.
FLOW CHARACTERISTICS
 There are traditionally four characteristics that a flow may need.
 Reliability
 Lack of reliability means losing a packet or an acknowledgement,
which entails retransmission.
 E-mail and file transfer need more reliable transmission than
telephony or audio conferencing.
 Delay
 Applications can tolerate delay in different degrees.
 Telephony and audio/video conferencing need minimum delay,
whereas delay is less important for e-mail or file transfer.
FLOW CHARACTERISTICS
 Jitter
 The variation in delay for packets belonging to the same flow.
 Audio and video applications cannot tolerate jitter.
 Bandwidth
 Different applications need different bandwidths.
 Applications like video conferencing need a large bandwidth,
whereas e-mail needs a small bandwidth.

Flow Classes
 Based on the flow characteristics, we can classify flows into groups,
with each group having similar levels of characteristics.
TECHNIQUES TO IMPROVE QOS
Four common methods used to improve the quality of service are:
 Scheduling: Packets from different flows arrive at the switch for processing.
 FIFO queuing, priority queuing, and weighted fair queuing
 Traffic shaping: Controls the amount and rate of traffic sent to the network.
 Leaky bucket, token bucket
 Resource reservation: Resources such as buffers, CPU time, and bandwidth
must be reserved.
 Admission control: The switch uses this mechanism to accept or reject a flow
based on predefined parameters called flow specifications.
FIFO QUEUING
 Packets wait in the queue until the node is ready to process them.
 If the average arrival rate is higher than the average processing rate, the
queue fills up and new packets are discarded.
PRIORITY QUEUING
 Packets are first assigned to a priority class; each priority class has its own
queue.
 The packets in the highest-priority queue are processed first.
 The system does not stop serving a queue until it is empty. Better than FIFO.
 If there is a continuous flow in the high-priority queue, the low-priority
queues never get a chance (starvation).
WEIGHTED FAIR QUEUING
 Packets are assigned to different classes and admitted to different queues.
 The queues are weighted based on the priority of the queues.
 The system processes packets in each queue in a round-robin fashion with
the number of packets selected from each queue based on the weight.
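The weighted round-robin processing described above can be sketched as follows. This is a simplified model (strictly, weighted round robin rather than true WFQ, which computes per-packet finish times), but taking up to `weight` packets from each queue per round captures the idea:

```python
from collections import deque

def weighted_fair_schedule(queues, weights):
    """Serve the queues round-robin, taking up to `weight` packets from each
    queue per round, until every queue is empty. Returns the service order."""
    qs = [deque(q) for q in queues]
    order = []
    while any(qs):                          # some queue still has packets
        for q, w in zip(qs, weights):
            for _ in range(w):              # this queue's share of the round
                if not q:
                    break
                order.append(q.popleft())
    return order
```

With weights 2 and 1, the first queue gets twice the service of the second in every round, yet the second is never starved.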
TRAFFIC SHAPING
 Traffic shaping is about regulating the average rate (and burstiness) of data
transmission.
 Traffic shaping controls the rate at which packets are sent (not just how
many).
 At connection set-up time, the sender and carrier negotiate a traffic pattern
(shape).
 Two traffic-shaping algorithms are:
 Leaky bucket
 Token bucket
LEAKY BUCKET ALGORITHM
 A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by
averaging the data rate. It may drop the packets if the bucket is full.
 The leaky bucket enforces a constant output rate (average rate)
regardless of the burstiness of the input. Does nothing when input is
idle.
 It turns an uneven flow of packets from the host into an even flow of
packets onto the network.
 The host injects one packet per clock tick onto the network. This results
in a uniform flow of packets, smoothing out bursts and reducing
congestion.
 For fixed size packets, the above algorithm works well. For variable-
sized packets, it is often better to allow a fixed number of bytes per tick.
LEAKY BUCKET ALGORITHM
LEAKY BUCKET IMPLEMENTATION
 Algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and
decrement the counter by the packet size. Repeat this step until n is
smaller than the packet size.
3. Reset the counter and go to step 1.
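The three-step algorithm above translates almost directly into code. A sketch (the packet sizes in the usage example are illustrative, and the sketch assumes no packet exceeds the per-tick budget n):

```python
from collections import deque

def leaky_bucket(packets, n):
    """Variable-length leaky bucket: at each clock tick, reset the byte
    budget to n and send queued packets while the budget covers the packet
    at the head of the queue. Returns a list of per-tick lists of sizes sent."""
    queue = deque(packets)
    assert all(p <= n for p in queue), "packet larger than per-tick budget"
    ticks = []
    while queue:
        budget = n                            # step 1: counter reset to n
        sent = []
        while queue and queue[0] <= budget:   # step 2: send while budget allows
            size = queue.popleft()
            budget -= size
            sent.append(size)
        ticks.append(sent)                    # step 3: wait for the next tick
    return ticks
```

For example, with n = 1000 bytes per tick and queued packets of 200, 200, 700, and 400 bytes, the first tick sends the two 200-byte packets (the 700-byte packet does not fit in the remaining 600 bytes), the second sends the 700-byte packet, and the third sends the last one.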
TOKEN BUCKET ALGORITHM
 In contrast to the leaky bucket, the token bucket algorithm allows the
output rate to vary, depending on the size of the burst.
 It allows idle hosts to accumulate credit for the future in the form of
tokens. Idle hosts can save up tokens (up to the maximum size of the
bucket) in order to send larger bursts later.
 In the token bucket algorithm, the bucket holds tokens. For each tick of
the clock, the system adds n tokens to the bucket; for each byte of data
sent, the system removes one token.
 If n = 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens.
 The host can now send 10,000 bytes of data in one tick, or take 1000
ticks at 10 bytes per tick (i.e., the host can send bursty data as long as
the bucket is not empty).
TOKEN BUCKET IMPLEMENTATION
 The counter is initialized to 0.
 Each time a token is added, the counter is incremented.
 Each time a unit of data is sent, the counter is decremented.
 When the counter is 0, the host cannot send data.
 The token bucket allows bursty traffic at a regulated maximum rate.
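The counter description above can be sketched as a small class (the class and method names are illustrative):

```python
class TokenBucket:
    """Counter-based token bucket: `rate` tokens are added per clock tick,
    one token is spent per byte sent, and the counter is capped at the
    bucket's capacity."""
    def __init__(self, rate: int, capacity: int):
        self.rate = rate              # tokens added per clock tick
        self.capacity = capacity      # maximum tokens the bucket can hold
        self.tokens = 0               # counter, initialized to 0

    def tick(self):
        """One clock tick: add tokens, capped at the bucket capacity."""
        self.tokens = min(self.tokens + self.rate, self.capacity)

    def send(self, nbytes: int) -> bool:
        """Transmit only if there are enough tokens to cover the whole packet."""
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

Replaying the earlier numbers: with rate = 100 and a 10,000-token bucket, a host idle for 100 ticks accumulates 10,000 tokens and can then send a 10,000-byte burst in one go, after which the empty bucket blocks further sending.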
LEAKY BUCKET VS TOKEN BUCKET
 The leaky bucket discards packets; the token bucket does not discard
packets, only tokens.
 With the token bucket, a packet can only be transmitted if there are
enough tokens to cover its length in bytes.
 The leaky bucket sends packets at an average rate; the token bucket allows
large bursts to be sent faster by speeding up the output.
 The token bucket allows saving up tokens (permissions) to send large
bursts later; the leaky bucket does not allow saving.
RESOURCE RESERVATION
 Traffic shaping is more effective when all packets follow the same route.
 We can assign a specific route to a flow and then reserve resources along
that route.
 Three kinds of resources can be reserved:
 Bandwidth
 Buffer space: for a good quality of service, some buffers can be
reserved for a specific flow, so that the flow does not have to compete
for buffers with other flows.
 CPU cycles
ADMISSION CONTROL
 We saw resource reservation, but how can the sender specify the required
resources? Some applications are tolerant of occasional lapses in QoS, and
an application might not even know its own CPU requirements.
 Hence routers must convert a set of specifications into resource requirements
and then decide whether to accept or reject the flow.
 Steps to create a flow specification:
1. The sender specifies the flow parameters it would like to use.
2. Each router examines the specification and modifies the parameters as needed.
3. When it reaches the other end, the parameters can be established.
