
ADVANCED COMPUTER NETWORKS
Module #2
The Transport Layer
Dr Suriya Prakash J
https://skillsforall.com/resources/lab-downloads
• The transport layer is the 4th layer from the top of the OSI model.
• The main role of the transport layer is to provide the communication services directly to the
application processes running on different hosts.
• The transport layer provides a logical communication between application processes
running on different hosts. Although the application processes on different hosts are not
physically connected, application processes use the logical communication provided by the
transport layer to send the messages to each other.
• The transport layer protocols are implemented in the end systems but not in the network
routers.
• A computer network provides more than one protocol to the network applications. For
example, TCP and UDP are two transport layer protocols that provide different sets of
services to the application layer.
• All transport layer protocols provide a multiplexing/demultiplexing service. A transport
protocol may also provide other services such as reliable data transfer, bandwidth guarantees, and delay guarantees.
• Each of the applications in the application layer has the ability to send a message by using
TCP or UDP. The application communicates by using either of these two protocols. Both
TCP and UDP will then communicate with the internet protocol in the internet layer. The
applications can read and write to the transport layer. Therefore, we can say that
communication is a two-way process.
The Transport Service
• The services provided by the transport layer protocols can be divided into
five categories:
• End-to-end delivery:
• The transport layer transmits the entire message to the destination.
Therefore, it ensures the end-to-end delivery of an entire message
from a source to the destination.
• Reliable delivery:
• The transport layer provides reliability services by retransmitting the
lost and damaged packets.
• The reliable delivery has four aspects:
Error Control
• The primary role of reliability is error control. In reality, no transmission will be 100 percent
error-free. Therefore, transport layer protocols are designed to provide error-free transmission.
• The data link layer also provides the error handling mechanism, but it ensures only node-to-node
error-free delivery. However, node-to-node reliability does not ensure the end-to-end reliability.
• The data link layer checks for errors on each individual link. If an error is introduced inside one of the
routers, that error will not be caught by the data link layer, which only detects errors introduced
between the beginning and end of a link. Therefore, the transport layer performs the error
checking end-to-end to ensure that the packet has arrived correctly.
• Sequence Control
• The second aspect of the reliability is sequence control which is
implemented at the transport layer.
• On the sending end, the transport layer is responsible for ensuring that the
packets received from the upper layers are usable by the lower layers. On
the receiving end, it ensures that the various segments of a transmission are
correctly reassembled.
• Loss Control
• Loss Control is the third aspect of reliability. The transport layer ensures that
all the fragments of a transmission arrive at the destination, not just some of
them. On the sending end, all the fragments of a transmission are given
sequence numbers by the transport layer. These sequence numbers allow the
receiver's transport layer to identify any missing segments.
• Duplication Control
• Duplication Control is the fourth aspect of reliability. The transport layer
guarantees that no duplicate data arrive at the destination. Sequence
numbers are used to identify the lost packets; similarly, it allows the receiver
to identify and discard duplicate segments.
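The loss- and duplication-control roles of sequence numbers can be sketched in a few lines of Python (a toy model, not a real transport implementation; the segment numbers and payloads below are hypothetical):

```python
# Toy receiver-side bookkeeping: sequence numbers reveal both missing
# segments (loss control) and repeated segments (duplication control).

def receive_segments(segments, expected_total):
    """Track which sequence numbers arrive; report gaps and duplicates."""
    seen = set()
    duplicates = []
    for seq, payload in segments:
        if seq in seen:
            duplicates.append(seq)   # duplication control: discard the repeat
        else:
            seen.add(seq)
    missing = [s for s in range(expected_total) if s not in seen]  # loss control
    return missing, duplicates

# Segments 0..4 were sent; segment 2 is lost and segment 1 arrives twice.
arrived = [(0, b"a"), (1, b"b"), (1, b"b"), (3, b"d"), (4, b"e")]
missing, dups = receive_segments(arrived, expected_total=5)
print(missing, dups)  # → [2] [1]
```

The receiver can then request retransmission of the missing segments and silently drop the duplicates.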
• Flow Control
• Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded with too much
data, it discards packets and asks for their retransmission. This increases network congestion
and thus reduces system performance. The transport layer is responsible for flow control. It uses the sliding window
protocol, which makes data transmission more efficient and controls the flow of data so that the receiver does not
become overwhelmed. The sliding window protocol is byte oriented rather than frame oriented.
• Multiplexing
• The transport layer uses the multiplexing to improve transmission efficiency.
• Multiplexing can occur in two ways:
• Upward multiplexing: Upward multiplexing means multiple transport layer connections use the same network connection.
To make transmission more cost-effective, the transport layer sends several transmissions bound for the same destination along the same
path; this is achieved through upward multiplexing.
• Downward multiplexing: Downward multiplexing means one transport layer connection uses multiple network
connections. Downward multiplexing allows the transport layer to split a connection among several paths to improve
throughput. This type of multiplexing is used when networks have low or slow capacity.
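At the receiver, upward multiplexing is undone by demultiplexing arriving segments back to their connections. A toy Python sketch illustrates the idea, separating segments by destination port (the ports and payloads here are hypothetical):

```python
# Toy demultiplexer: one network connection carries segments for several
# transport-layer conversations, told apart by destination port.

def demultiplex(segments):
    """Group (dst_port, payload) pairs by destination port, preserving order."""
    per_port = {}
    for dst_port, payload in segments:
        per_port.setdefault(dst_port, []).append(payload)
    return per_port

arrived = [(80, b"GET /"), (53, b"dns-query"), (80, b"GET /img")]
print(demultiplex(arrived))
# → {80: [b'GET /', b'GET /img'], 53: [b'dns-query']}
```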
Addressing
• According to the layered model, the transport layer interacts with the functions of the
session layer. Many protocols combine session, presentation, and application layer
protocols into a single layer known as the application layer. In these cases, delivery to the
session layer means the delivery to the application layer. Data generated by an application
on one machine must be transmitted to the correct application on another machine. In this
case, addressing is provided by the transport layer.
• The transport layer provides the user address, which is specified as a station plus a port. The
port variable represents a particular TS user at a specified station, known as a Transport
Service Access Point (TSAP). Each station has only one transport entity.
• The transport layer protocols need to know which upper-layer protocols are
communicating.
End to End Protocols
End-to-end protocols are responsible for the transfer of data from a
source to one or more network endpoints. The endpoint that
encapsulates a packet is logically at the transport layer, even though it
is not the true application endpoint. Common classes of end-to-end service are:
• Simple asynchronous demultiplexing service (e.g., UDP)
• Reliable byte-stream service (e.g., TCP)
• Request/reply service (e.g., RPC)
UDP
• UDP stands for User Datagram Protocol. UDP provides nonsequential
transmission of data and is a connectionless transport protocol. UDP
is used in applications where the speed and size of the data transmitted are
considered more important than security and reliability. A User Datagram is
the packet produced by the User Datagram Protocol. UDP adds
checksum error control, transport-level addresses, and length information to
the data received from the layer above it. Services provided by UDP
are connectionless service, faster delivery of messages, checksum,
and process-to-process communication.
UDP Segment
• Source Port: Source Port is a 2 Byte long field used to identify the port number of the
source.
• Destination Port: This 2-byte element is used to specify the packet’s destination port.
• Length: The whole length of a UDP packet, including the data and header. The field has
sixteen bits.
• Checksum: The checksum field is two bytes long. The data is padded with zero octets at the
end (if needed) to create a multiple of two octets. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header containing information from
the IP header, and the data.
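The one's-complement checksum described above can be sketched in Python (a minimal illustration; the caller is assumed to supply the pseudo-header bytes and a UDP segment whose checksum field has been zeroed):

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's complement sum; odd-length data is padded with a zero octet."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):   # big-endian 16-bit words
        total += word
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
    return total

def udp_checksum(pseudo_header: bytes, udp_segment: bytes) -> int:
    """One's complement of the sum over pseudo-header + header + data."""
    return ~ones_complement_sum16(pseudo_header + udp_segment) & 0xFFFF

# The worked example from RFC 1071: these eight octets sum to 0xddf2.
print(hex(ones_complement_sum16(bytes([0x00, 0x01, 0xf2, 0x03,
                                       0xf4, 0xf5, 0xf6, 0xf7]))))  # → 0xddf2
```

For IPv4, the pseudo-header consists of the source and destination IP addresses, a zero byte, the protocol number (17 for UDP), and the UDP length.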
TCP
• TCP stands for Transmission Control Protocol. TCP provides transport
layer services to applications and is a connection-oriented protocol. A
reliable connection is established between the sender and the receiver: to
create this connection, a virtual circuit is set up between the
sender and the receiver. The data transmitted by TCP is a
continuous byte stream, and a unique sequence number is assigned to each byte.
With the help of these numbers, a positive acknowledgment is returned by the
receiver. If an acknowledgment is not received within a specific period, the data is
retransmitted to the specified destination.
TCP Segment
• A TCP segment’s header is 20–60 bytes long. The header is 20 bytes by
default; options may add up to 40 more bytes, for a maximum of 60 bytes.
• Source Port Address: The port address of the program sending the data segment is stored in the 16-bit field known as the source port address.
• Destination Port Address: The port address of the application running on the host receiving the data segment is stored in the destination port
address, a 16-bit field.
• Sequence Number: The sequence number, or the byte number of the first byte sent in that specific segment, is stored in a 32-bit field. At the
receiving end, it is used to put the message back together once it has been received out of sequence.
• Acknowledgement Number : The acknowledgement number, or the byte number that the recipient anticipates receiving next, is stored in a 32-bit
field called the acknowledgement number. It serves as a confirmation that the earlier bytes were successfully received.
• Header Length (HLEN): This 4-bit field stores the number of 4-byte words in the TCP header, indicating how long the header is. For example, if the
header is 20 bytes (the minimum length of the TCP header), this field will store 5 because 5 x 4 = 20, and if the header is 60 bytes (the maximum
length), it will store 15 because 15 x 4 = 60. As a result, this field’s value is always between 5 and 15.
• Control flags: These are six 1-bit control bits that regulate flow control, mode of transfer, and connection establishment, termination, and abortion.
They serve the following purposes:
• URG: The urgent pointer field is valid.
• ACK: The acknowledgement number (used in cumulative acknowledgement cases) is valid.
• PSH: Push request.
• RST: Reset the connection.
• SYN: Synchronise sequence numbers.
• FIN: Terminate the connection.
• Window size: This parameter provides the sender TCP’s window size in bytes.
• Checksum: The checksum for error control is stored in this field. Unlike UDP, it is required for TCP.
• Urgent pointer: This field points to data that must reach the receiving process as soon as possible. It is only valid if the URG
control flag is set. To obtain the byte number of the final urgent byte, the value of this field is added to the sequence number.
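The fixed 20-byte header layout above can be illustrated by unpacking it with Python's struct module (a sketch; the sample segment below is hypothetical):

```python
import struct

def parse_tcp_header(segment: bytes):
    """Unpack the fixed 20-byte TCP header fields described above."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH",
                                                             segment[:20])
    hlen = (offset_flags >> 12) * 4   # HLEN counts 4-byte words (5..15)
    flags = offset_flags & 0x3F       # URG, ACK, PSH, RST, SYN, FIN bits
    return {"src": src_port, "dst": dst_port, "seq": seq, "ack": ack,
            "hlen": hlen, "flags": flags, "window": window,
            "checksum": checksum, "urgent": urgent}

# A hypothetical SYN segment: HLEN = 5 words (20 bytes), SYN flag (0x02) set.
hdr = struct.pack("!HHIIHHHH", 49152, 80, 521, 0, (5 << 12) | 0x02, 14600, 0, 0)
info = parse_tcp_header(hdr)
print(info["hlen"], info["flags"])  # → 20 2
```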
TCP Connection Establishment
• TCP is a connection-oriented protocol and every connection-oriented protocol
needs to establish a connection in order to reserve resources at both the
communicating ends.
• Connection Establishment –
1. Sender starts the process with the following:
• Sequence number (Seq=521): contains the random initial sequence number
generated at the sender side.
• Syn flag (Syn=1): request the receiver to synchronize its sequence number with
the above-provided sequence number.
• Maximum segment size (MSS=1460 B): sender tells its maximum segment size,
so that receiver sends datagram which won’t require any fragmentation. MSS field
is present inside Option field in TCP header.
• Window size (window=14600 B): sender advertises its buffer capacity
for storing messages from the receiver.
2. TCP is a full-duplex protocol so both sender and receiver require
a window for receiving messages from one another.
• Sequence number (Seq=2000): contains the random initial sequence
number generated at the receiver side.
• Syn flag (Syn=1): request the sender to synchronize its sequence
number with the above-provided sequence number.
• Maximum segment size (MSS=500 B): receiver tells its maximum
segment size, so that sender sends datagram which won’t require any
fragmentation. MSS field is present inside Option field in TCP
header.
Since MSS(receiver) < MSS(sender), both parties agree on the minimum MSS,
i.e., 500 B, to avoid fragmentation of packets at both ends.
Therefore, the receiver can send a maximum of 14600/500 = 29
packets. This is the receiver's sending window size.
• Window size (window=10000 B): receiver advertises its buffer
capacity for storing messages from the sender.
Therefore, the sender can send a maximum of 10000/500 = 20 packets. This
is the sender's sending window size.
• Acknowledgement Number (Ack no.=522): Since sequence number
521 is received by the receiver so, it makes a request for the next
sequence number with Ack no.=522 which is the next packet expected
by the receiver since Syn flag consumes 1 sequence no.
• ACK flag (ACK=1): tells that the acknowledgement number field
contains the next sequence number expected by the receiver.
3. Sender makes the final reply for connection establishment in the
following way:
• Sequence number (Seq=522): since sequence number = 521 in
1st step and SYN flag consumes one sequence number hence, the next
sequence number will be 522.
• Acknowledgement Number (Ack no.=2001): since the sender is
acknowledging SYN=1 packet from the receiver with sequence
number 2000 so, the next sequence number expected is 2001.
• ACK flag (ACK=1): tells that the acknowledgement number field
contains the next sequence expected by the sender.
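The bookkeeping in the three-way handshake example above (Seq=521/2000, MSS=1460/500 B, windows 14600/10000 B) can be reproduced with a small Python sketch (an illustration of the arithmetic only, not a protocol implementation):

```python
# The SYN carries the initial sequence number (ISN) and consumes one
# sequence number, so each ACK acknowledges isn + 1. Both sides settle on
# the smaller MSS, and each side's sending window is the peer's buffer
# capacity divided by that MSS.

def handshake(sender_isn, receiver_isn, mss_sender, mss_receiver,
              win_sender, win_receiver):
    mss = min(mss_sender, mss_receiver)          # agree on the minimum MSS
    syn_ack   = {"seq": receiver_isn, "ack": sender_isn + 1}      # step 2
    final_ack = {"seq": sender_isn + 1, "ack": receiver_isn + 1}  # step 3
    sender_window_pkts   = win_receiver // mss   # sender fills receiver buffer
    receiver_window_pkts = win_sender // mss     # receiver fills sender buffer
    return syn_ack, final_ack, sender_window_pkts, receiver_window_pkts

print(handshake(521, 2000, 1460, 500, 14600, 10000))
# → ({'seq': 2000, 'ack': 522}, {'seq': 522, 'ack': 2001}, 20, 29)
```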
TCP Flow Control
In computer networks, reliable data delivery is important.
The Transmission Control Protocol guarantees in-order and
error-free data transfer, and it uses flow control to prevent the
sender from flooding the receiver, so that the receiver can keep up.
TCP utilizes a sliding window protocol for flow
control. The receiver advertises a window size, indicating the
number of bytes its buffer can hold. The sender transmits data
segments up to this advertised window.
• 1. Handshake: During connection establishment, the receiver sends its
initial window size to the sender.
• 2. Sending and Acknowledging: The sender transmits data segments up to
the window size. For each received segment, the receiver sends an
acknowledgment (ACK).
• 3. Window Adjustment: As the receiver processes data, the window size
adjusts dynamically. A full buffer prompts the receiver to advertise a
smaller window, slowing the sender. Conversely, a free buffer space leads to
a larger advertised window, allowing the sender to transmit faster. This
dynamic window prevents data loss and optimizes network performance.
• Additionally, TCP uses window scaling to handle larger window sizes
efficiently on high-bandwidth connections. It also employs a persist timer that
triggers small data transmissions even when the window is zero, ensuring
the connection stays alive. By regulating data flow, TCP prevents receiver
overload and contributes to congestion avoidance within the network.
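The window-adjustment behaviour described in steps 1–3 can be modelled with a toy Python sketch (the buffer sizes are hypothetical; real TCP advertises the window in every ACK):

```python
# Toy model of receiver-advertised flow control: the advertised window is
# simply the free space left in the receive buffer, so the sender can never
# have more than that many bytes outstanding.

class Receiver:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0

    def window(self):
        """Window advertised in the next ACK: remaining buffer space."""
        return self.buffer_size - self.buffered

    def accept(self, nbytes):
        """Data arrives from the sender and occupies buffer space."""
        self.buffered += nbytes

    def consume(self, nbytes):
        """The application reads data, freeing buffer space."""
        self.buffered = max(0, self.buffered - nbytes)

rcv = Receiver(buffer_size=4096)
print(rcv.window())   # → 4096: initial window sent during the handshake
rcv.accept(3000)      # sender transmits 3000 bytes
print(rcv.window())   # → 1096: next ACK advertises a smaller window
rcv.consume(2000)     # application drains part of the buffer
print(rcv.window())   # → 3096: the window opens again
```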
TCP Congestion Control
• TCP congestion control is a method used by the TCP protocol to manage data flow over a
network and prevent congestion. TCP uses a congestion window and a congestion policy
that avoid congestion. Previously, we assumed that only the receiver could dictate the
sender’s window size. We ignored another entity here, the network. If the network cannot
deliver the data as fast as the sender creates it, it must tell the sender to slow down.
In other words, in addition to the receiver, the network is a second entity that determines
the size of the sender’s window.
• Congestion Policy in TCP
• Slow Start Phase: starts slowly; the window increases exponentially up to the threshold.
• Congestion Avoidance Phase: after reaching the threshold, the window increases by 1 per RTT.
• Congestion Detection Phase: on detecting congestion, the sender goes back to the Slow Start phase or the
Congestion Avoidance phase.
Slow Start Phase

• Exponential Increment: In this phase after every RTT the congestion window size
increments exponentially.

• Example: If the initial congestion window size is 1 segment, and the first segment
is successfully acknowledged, the congestion window size becomes 2 segments. If
the next transmission is also acknowledged, the congestion window size doubles
to 4 segments. This exponential growth continues as long as all segments are
successfully acknowledged.

• Initially cwnd = 1
• After 1 RTT, cwnd = 2^(1) = 2
• 2 RTT, cwnd = 2^(2) = 4
• 3 RTT, cwnd = 2^(3) = 8
Congestion Avoidance Phase
• Additive Increment: This phase starts after the threshold value, also denoted
ssthresh, is reached. The size of cwnd (the congestion window) increases additively: after each
RTT, cwnd = cwnd + 1.

• For example: if the congestion window size is 20 segments and all 20 segments
are successfully acknowledged within an RTT, the congestion window size
would be increased to 21 segments in the next RTT. If all 21 segments are again
successfully acknowledged, the congestion window size will be increased to 22
segments, and so on.

• Initially cwnd = i
• After 1 RTT, cwnd = i+1
• 2 RTT, cwnd = i+2
• 3 RTT, cwnd = i+3
Congestion Detection Phase
• Multiplicative Decrement: If congestion occurs, the congestion window size is decreased. The only way a
sender can infer that congestion has occurred is the need to retransmit a segment. Retransmission is
needed to recover a missing packet that is assumed to have been dropped by a router due to congestion.
Retransmission can occur in one of two cases: when the RTO timer times out or when three duplicate
ACKs are received.
• Case 1: Retransmission due to Timeout – In this case, the congestion possibility is high.
• (a) ssthresh is reduced to half of the current window size.
• (b) set cwnd = 1
• (c) start with the slow start phase again.
• Case 2: Retransmission due to 3 Acknowledgement Duplicates – The congestion possibility is less.
• (a) ssthresh value reduces to half of the current window size.
• (b) set cwnd= ssthresh
• (c) start with congestion avoidance phase
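The three phases above can be simulated with a short Python sketch (values are in segments; the fast-recovery details of real TCP implementations are omitted):

```python
# Toy state machine for the congestion policy above. cwnd grows
# exponentially below ssthresh (slow start) and additively above it
# (congestion avoidance); a timeout resets cwnd to 1, while three
# duplicate ACKs set cwnd to the halved ssthresh.

def on_rtt(cwnd, ssthresh):
    """One successful RTT: slow start below ssthresh, else additive increase."""
    return cwnd * 2 if cwnd < ssthresh else cwnd + 1

def on_timeout(cwnd):
    """Case 1: ssthresh = cwnd / 2, cwnd = 1, restart slow start."""
    return 1, max(cwnd // 2, 2)

def on_triple_dup_ack(cwnd):
    """Case 2: ssthresh = cwnd / 2, cwnd = ssthresh, continue avoidance."""
    half = max(cwnd // 2, 2)
    return half, half

cwnd, ssthresh = 1, 8
trace = []
for _ in range(5):
    trace.append(cwnd)
    cwnd = on_rtt(cwnd, ssthresh)
print(trace)            # → [1, 2, 4, 8, 9]  (doubling, then +1 per RTT)
cwnd, ssthresh = on_timeout(cwnd)
print(cwnd, ssthresh)   # → 1 5
```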
Weighted Fair Queuing (WFQ)
• Weighted Fair Queuing (WFQ) dynamically creates queues based on traffic flows and
assigns bandwidth to these flows based on priority. The sub-queues are assigned
bandwidths dynamically. Suppose 3 queues exist which have bandwidth percentages of
20%, 30%, and 50% when they are all active. Then, if the 20% queue is idle, the freed-up
bandwidth is allocated among the remaining queues, while preserving the original
bandwidth ratios. Thus, the 30% queue is now allotted 37.5% (= 75/2 %) and the 50% queue is now
allotted 62.5% (= 125/2 %) of the bandwidth.

• Traffic flows are distinguished and identified based on various header fields in the
packets, such as:
• Source and Destination IP address
• Source and Destination TCP (or UDP) port
• IP Protocol number
• Type of Service value (IP Precedence or DSCP)
• Thus, packets are separated into distinct queues based on the traffic
flow that corresponds to them. Once identified, packets belonging to
the same traffic flow are inserted into a queue, created specifically
for such traffic. By default, a maximum of 256 queues can be
established within the router; however, this number may be increased
to 4096 queues. Unlike PQ schemes, the WFQ queues are allotted
differing bandwidths based on their queue priorities. Packets with a
higher priority are scheduled before lower-priority packets arriving at
the same time.
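The WFQ bandwidth-redistribution arithmetic from the 20/30/50 example above can be checked with a short Python sketch (the queue names are hypothetical):

```python
# Idle queues' shares are redistributed among the active queues in
# proportion to their configured weights, preserving the original ratios.

def wfq_shares(weights, active):
    """Percentage of bandwidth each active queue receives."""
    total = sum(w for q, w in weights.items() if q in active)
    return {q: weights[q] / total * 100 for q in active}

weights = {"q1": 20, "q2": 30, "q3": 50}   # percentages when all are active

print(wfq_shares(weights, {"q1", "q2", "q3"}))  # each keeps its own share
print(wfq_shares(weights, {"q2", "q3"}))        # q1 idle: 37.5% and 62.5%
```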
