R22 CCN - Unit 4
NETWORKS
UNIT - IV
TRANSPORT LAYER
CONTENTS
Overview of Transport layer
UDP
Reliable byte stream (TCP)
Connection management
Flow control
Retransmission
TCP Congestion control
Congestion avoidance
Quality of Service (QoS)
QoS Techniques
TRANSPORT LAYER
Relies on Network layer and serves the Application layer
End-to-End connectivity
Port addressing
Segmentation and reassembly
Connection control
Error recovery
E.g., UDP, SCTP and TCP
TRANSPORT LAYER DUTIES
Provides logical communication between application processes running on different
hosts.
Packetizing
Sender side: breaks application messages into segments, passes them to
network layer.
Receiver side: the transport layer delivers data to the receiving process.
Connection control
Connection-oriented
Connectionless
Reliability
Flow control
Error control
Addressing
Port numbers to identify which network application
PROCESS-TO-PROCESS DELIVERY
Client-Server Paradigm
Addressing
Multiplexing and Demultiplexing
Connectionless/Connection-Oriented
Reliable/Unreliable
TYPES OF DATA DELIVERIES
PORT NUMBERS
IANA RANGES
The Internet Assigned Numbers Authority (IANA) has divided the port numbers
into three ranges:
Well Known (0 to 1023) – controlled and assigned by IANA and given to servers.
Registered (1024 to 49151) – not assigned or controlled by IANA; they can only
be registered with IANA to prevent duplication.
Dynamic (49152 to 65535) – neither controlled nor registered by IANA and can be
used by any process.
IP ADDRESSES VS PORT NUMBERS
SOCKET ADDRESS
Process-to-process delivery needs two identifiers: an IP address and a port
number. This combination is called a socket address.
MULTIPLEXING & DEMULTIPLEXING
Multiplexing
Sender side: there may be several processes that need to send packets.
Many-to-one relationship: multiplexing
Accepts messages from different processes
Differentiates messages by their port numbers
Adds header to each message and passes packet to network layer
MULTIPLEXING & DEMULTIPLEXING
Demultiplexing
Receiver side: there may be several processes that can receive user
datagrams.
One-to-many relationship: demultiplexing
Receives user datagram from network layer
Checks errors in user datagram and drops the header
Delivers the message to the appropriate process based on the port number
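The one-to-many delivery step above can be sketched as a simple port table (the names and dictionary layout here are our own, for illustration; a real stack keeps per-socket buffers and richer state):

```python
# Sketch of transport-layer demultiplexing: the receiver keeps a table
# mapping destination port numbers to bound processes and hands each
# arriving segment's payload to the matching one.

def demultiplex(bindings, segment):
    """Deliver a segment's payload to the process bound to its destination port.

    bindings : dict mapping port number -> list collecting delivered messages
    segment  : dict with 'dst_port' and 'payload' keys
    """
    port = segment["dst_port"]
    if port not in bindings:
        return "port unreachable"   # no bound process: analogous to an ICMP error
    bindings[port].append(segment["payload"])
    return "delivered"

# One-to-many: a single network-layer stream feeds several processes.
bindings = {53: [], 80: []}          # e.g., a DNS server and a web server
demultiplex(bindings, {"dst_port": 80, "payload": b"GET /"})
demultiplex(bindings, {"dst_port": 53, "payload": b"query"})
```

Note that the table lookup is exactly why a datagram for an unbound port can be rejected immediately, as described for UDP below.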
CONNECTIONLESS VS CONNECTION ORIENTED
UDP - USER DATAGRAM PROTOCOL
UDP is connectionless. It does not add anything to the services of IP, except
to provide process-to-process communication instead of host-to-host
communication.
It uses port numbers to multiplex data from the application layer. Limited
error checking and overhead.
The calculation of checksum and its inclusion in the user datagram are
optional.
No connection state
No allocation of buffers, parameters, sequence numbers, etc.
UDP pseudo-header
Includes the source and destination IP addresses (together with the protocol
field and the UDP length) and is used in the checksum calculation.
UDP cannot chop a stream of data into different related user datagrams
Each request must be small enough to fit into one user datagram
The process opens incoming and outgoing queues with the requested
port number.
Queues on the server side
The server asks for incoming and outgoing queues using its well-known
port number.
Outgoing queue overflow
The operating system asks the server/client to wait before sending any
more messages.
Incoming queue overflow
UDP discards the datagram and asks the ICMP protocol to send a
port-unreachable message to the datagram's sender.
If no incoming queue has been created for the port number specified in the
arriving datagram, UDP likewise discards the datagram and sends a
port-unreachable message.
USES OF UDP
UDP is suitable for a process that requires simple request-response
communication with little concern for flow and error control.
UDP is suitable for a process with internal flow and error control
mechanisms. Ex: TFTP
UDP is suitable for multicasting.
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
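The request-response pattern UDP suits so well can be seen in a minimal loopback exchange (a sketch using Python's standard `socket` module; binding to port 0 asks the OS for a free port):

```python
# A minimal UDP request-response over the loopback interface: no connection
# setup or teardown, one datagram out, one datagram back.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # server opens its incoming queue
server.settimeout(2.0)
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"request", server_addr)        # single datagram, no handshake

data, client_addr = server.recvfrom(1024)     # server reads the request...
server.sendto(b"response:" + data, client_addr)  # ...and replies in one datagram

reply, _ = client.recvfrom(1024)
client.close()
server.close()
```

Each message must fit in one datagram, and nothing here retransmits on loss; that is exactly the trade-off that makes UDP cheap for simple query protocols.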
TCP - TRANSMISSION CONTROL PROTOCOL
Port Numbers
Services
Sequence Numbers
Segments
Connection
Transition Diagram
Flow and Error Control
Silly Window Syndrome
TCP - TRANSMISSION CONTROL PROTOCOL
Connection oriented
Explicit set-up of virtual path and tear-down
Stream-of-bytes service
Sends and receives a stream of bytes, not messages
Reliable, in-order delivery
Checksums to detect corrupted data
Acknowledgments & retransmissions for reliable delivery
Sequence numbers to detect losses and reorder data
Flow control
Prevent overflow of the receiver’s buffer space
Congestion control
Adapt to network congestion for the greater good
TCP SUPPORT FOR RELIABLE DELIVERY
Checksum
Used to detect corrupted data at the receiver
…leading the receiver to drop the packet
Sequence numbers
Used to detect missing data
... and for putting the data back in order
Retransmission
Sender retransmits lost or corrupted data
Timeout based on estimates of round-trip time
Fast retransmit algorithm for rapid retransmission
Well-known ports used by TCP
Port Protocol Description
7 Echo Echoes a received datagram back to the sender
9 Discard Discards any datagram that is received
11 Users Active users
13 Daytime Returns the date and the time
17 Quote Returns a quote of the day
19 Chargen Returns a string of characters
20 FTP, Data File Transfer Protocol (data connection)
21 FTP, Control File Transfer Protocol (control connection)
23 TELNET Terminal Network
25 SMTP Simple Mail Transfer Protocol
53 DNS Domain Name Server
67 BOOTP Bootstrap Protocol
79 Finger Finger
80 HTTP Hypertext Transfer Protocol
111 RPC Remote Procedure Call
STREAM DELIVERY
SENDING AND RECEIVING BUFFERS
TCP SEGMENTS
EXAMPLE
Imagine a TCP connection is transferring a file of 6000 bytes. The first byte
is numbered 10010. What are the sequence numbers for each segment if
data are sent in five segments with the first four segments carrying 1000
bytes and the last segment carrying 2000 bytes?
Solution
The following shows the sequence number for each segment:
Segment 1 ==> Sequence number: 10,010 (range: 10,010 to 11,009)
Segment 2 ==> Sequence number: 11,010 (range: 11,010 to 12,009)
Segment 3 ==> Sequence number: 12,010 (range: 12,010 to 13,009)
Segment 4 ==> Sequence number: 13,010 (range: 13,010 to 14,009)
Segment 5 ==> Sequence number: 14,010 (range: 14,010 to 16,009)
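The rule behind the solution, that each segment's sequence number is the number of its first byte, can be checked mechanically (a small helper written for this example):

```python
# Reproduce the worked example: first byte numbered 10010, segments of
# 1000, 1000, 1000, 1000 and 2000 bytes.

def segment_ranges(first_byte, sizes):
    """Return (sequence_number, last_byte_number) for each segment size."""
    ranges = []
    seq = first_byte
    for size in sizes:
        ranges.append((seq, seq + size - 1))
        seq += size                 # next segment starts right after this one
    return ranges

ranges = segment_ranges(10010, [1000, 1000, 1000, 1000, 2000])
# ranges[0] is (10010, 11009) and ranges[4] is (14010, 16009), matching above
```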
TCP: FEATURES CONSIDERED
The bytes of data being transferred in each connection are numbered by
TCP. The numbering starts with a randomly generated number.
The value in the sequence number field of a segment defines the number of
the first data byte contained in that segment.
Flag Description
State Description
Sender Window
SLIDING THE SENDER WINDOW
EXPANDING & SHRINKING THE SENDER WINDOW
Expanding the sender window
Variable-Bit-Rate (VBR) – In this traffic model, the data rate changes over
time. The average data rate and the peak data rate are different.
Throughput
Delay increases dramatically once the load reaches the network capacity.
When the load reaches capacity, more and more packets are delayed. The
source, not receiving the ACKs, retransmits the packets, which makes the
delay and the congestion worse.
When the load exceeds capacity, some packets are discarded because the
queues in routers are full. These packets need to be retransmitted, so the
effective capacity decreases.
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either
prevent congestion before it happens or remove it after it has happened:
Acknowledgement policy: ACK packets also load the network. A receiver need
not acknowledge every packet; it can acknowledge N packets at a time.
Discard policy: Discarding less sensitive packets can prevent congestion
without harming the integrity of the transmission.
TCP assumes that the cause of a lost segment is due to congestion in the
network.
Slow Start
Congestion Detection
SLOW START: EXPONENTIAL INCREASE
In the slow-start algorithm, the congestion window grows by one MSS for
each acknowledgement received. Since every segment in a window triggers an
ACK, the window roughly doubles each round trip, i.e., it increases
exponentially.
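The per-ACK growth rule can be sketched as a short simulation (window measured in MSS units; the function is our own illustration, not a full TCP state machine):

```python
# Slow start's exponential growth: cwnd grows by one MSS per ACK, so if
# every segment sent in a round trip is acknowledged, cwnd doubles each
# round trip.

def slow_start(rounds, initial_cwnd=1):
    """Return cwnd (in MSS) after each round trip of successful ACKs."""
    cwnd = initial_cwnd
    history = []
    for _ in range(rounds):
        acks = cwnd            # one ACK per segment sent this round trip
        cwnd += acks           # +1 MSS per ACK => cwnd doubles
        history.append(cwnd)
    return history

# Starting from 1 MSS, four round trips give windows of 2, 4, 8, 16 MSS.
```

In real TCP this exponential phase stops at the slow-start threshold, after which the window grows additively (congestion avoidance).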
The Frame Relay protocol uses two bits in the frame header (FECN and BECN)
to warn the destination and the source, respectively, of the presence of
congestion.
Bandwidth
Different applications need different bandwidths.
Flow Classes
Based on the flow characteristics, we can classify flows into groups,
with each group having similar levels of characteristics.
TECHNIQUES TO IMPROVE QOS
Four common methods used to improve the quality of service are scheduling,
traffic shaping, resource reservation, and admission control.
Scheduling: Packets from different flows arrive at the switch for processing.
FIFO queuing, Priority queuing, and Weighted fair queuing
Traffic Shaping: Controls the amount and rate of traffic sent to the network.
Leaky bucket, Token bucket
LEAKY BUCKET ALGORITHM
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by
averaging the data rate. It may drop the packets if the bucket is full.
The leaky bucket enforces a constant output rate (the average rate)
regardless of the burstiness of the input; when the input is idle, nothing
is sent.
It turns an uneven flow of packets from the host into an even flow of
packets onto the network.
The host injects one packet per clock tick onto the network. This results
in a uniform flow of packets, smoothing out bursts and reducing
congestion.
For fixed size packets, the above algorithm works well. For variable-
sized packets, it is often better to allow a fixed number of bytes per tick.
LEAKY BUCKET ALGORITHM
LEAKY BUCKET IMPLEMENTATION
Algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and
decrement the counter by the packet size. Repeat this step until n is
smaller than the packet size.
3. Reset the counter and go to step 1.
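The three steps above can be sketched as a per-tick routine (the queue of pending packet sizes and the function name are our own; step 3, resetting the counter, is the caller supplying a fresh budget n on the next tick):

```python
# Variable-length leaky bucket: at each clock tick a byte budget n is
# refilled, and queued packets are sent only while they fit in the
# remaining budget, smoothing bursts into a fixed output rate.
from collections import deque

def leaky_bucket_tick(queue, n):
    """Send queued packets (sizes in bytes) that fit in this tick's budget n.

    Returns the list of packet sizes sent this tick. Packets that do not
    fit stay queued for a later tick.
    """
    sent = []
    while queue and queue[0] <= n:   # step 2: send while the budget covers the packet
        size = queue.popleft()
        n -= size                    # decrement the counter by the packet size
        sent.append(size)
    return sent

packets = deque([200, 500, 450, 300])
tick1 = leaky_bucket_tick(packets, 1000)   # 200 and 500 fit; 450 would exceed the budget
```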
TOKEN BUCKET ALGORITHM
In contrast to the Leaky Bucket, the Token Bucket Algorithm allows the
output rate to vary, depending on the size of the burst.
It allows idle hosts to accumulate credit for the future in the form of
tokens. Idle hosts can capture and save up tokens (up to the max. size
of the bucket) in order to send larger bursts later.
In the token bucket algorithm, the bucket holds tokens. At each clock tick,
the system adds n tokens to the bucket; for each byte of data sent, the
system removes one token.
If n=100, host is idle for 100 ticks, bucket collects 10,000 tokens.
Host can now send 10,000 bytes of data in one tick or host can take
1000 ticks with 10 bytes per tick (i.e., host can send bursty data as long
as the bucket is not empty).
TOKEN BUCKET IMPLEMENTATION
The counter is initialized to 0.
Each time a token is added to the bucket, the counter is incremented.
Each time a unit of data is sent, the counter is decremented.
When the counter reaches 0, the host cannot send data.
The token bucket allows bursty traffic at a regulated maximum rate.
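The counter rules above can be sketched as a small class (the class and method names are our own), reproducing the slide's n=100 example:

```python
# Token bucket: tokens arrive at a fixed rate per tick (up to the bucket's
# capacity), and sending one byte consumes one token, so an idle host banks
# credit it can spend later in a burst.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per clock tick
        self.capacity = capacity      # maximum tokens the bucket can hold
        self.tokens = 0               # the counter, initialized to 0

    def tick(self):
        """One clock tick: add tokens, capped at the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, nbytes):
        """Send nbytes if enough tokens are banked; return True on success."""
        if nbytes > self.tokens:
            return False              # not enough credit: host must wait
        self.tokens -= nbytes
        return True

# The slide's example: rate 100, idle for 100 ticks -> 10,000 tokens banked,
# so the host may now emit a 10,000-byte burst at once.
bucket = TokenBucket(rate=100, capacity=10_000)
for _ in range(100):
    bucket.tick()
```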
LEAKY BUCKET VS TOKEN BUCKET
Leaky Bucket discards packets. Token Bucket does not discard packets
but discards tokens.
Leaky Bucket sends packets at an average rate. Token Bucket allows for
large bursts to be sent faster by speeding up the output.
RESOURCE RESERVATION
We can assign a specific route to a flow and then reserve resources along
that route.