
UNIT-IV

THE TRANSPORT LAYER

Transport Layer
 The transport layer is the second layer in the TCP/IP model and the
fourth layer in the OSI model.
 It is an end-to-end layer used to deliver messages to a host.
 It is termed an end-to-end layer because it provides a point-to-point
connection, rather than hop-by-hop delivery, between the source host and
destination host to deliver its services reliably.
 The unit of data encapsulation in the Transport Layer is a segment.

Working of Transport Layer

The transport layer takes services from the Network layer and provides
services to the Application layer.
At the sender’s side: The transport layer receives data (a message) from the
Application layer and then performs segmentation: it divides the message
into segments, adds the source and destination port numbers to the header
of each segment, and passes the segments to the Network layer.
At the receiver’s side: The transport layer receives data from the Network
layer, reassembles the segmented data, reads the header, identifies the port
number, and forwards the message to the appropriate port in the Application
layer.
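
A minimal Python sketch of sender-side segmentation, assuming a hypothetical
maximum segment size of 8 bytes and made-up port numbers; it only illustrates
how a message is split and how port numbers and a sequence number are placed
in each segment header.

# Sender-side segmentation sketch (assumed MSS = 8 bytes, made-up ports).
MSS = 8  # maximum payload bytes per segment (assumption for illustration)

def segment(message: bytes, src_port: int, dst_port: int):
    """Split an application message into segments whose headers carry the
    source/destination port numbers and a sequence number."""
    segments = []
    for seq, offset in enumerate(range(0, len(message), MSS)):
        payload = message[offset:offset + MSS]
        header = {"src_port": src_port, "dst_port": dst_port, "seq": seq}
        segments.append((header, payload))
    return segments

# A 21-byte message becomes three segments.
for hdr, data in segment(b"HELLO TRANSPORT LAYER", 49152, 80):
    print(hdr, data)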

The Transport Service:


The services provided by the transport layer are similar to those of the data link layer. The
data link layer provides its services within a single network, while the transport layer
provides its services across an internetwork made up of many networks. The data link layer
controls the physical layer, while the transport layer controls all the lower layers.

o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing

End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it
ensures the end-to-end delivery of an entire message from a source to the destination.

Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged
packets.

The reliable delivery has four aspects:

o Error control
o Sequence control
o Loss control
o Duplication control

Error Control

o The primary aspect of reliability is error control. In reality, no transmission is 100
percent error-free. Therefore, transport layer protocols are designed to
provide error-free transmission.
o The data link layer also provides the error handling mechanism, but it ensures only
node-to-node error-free delivery. However, node-to-node reliability does not ensure
the end-to-end reliability.
o The data link layer checks for errors on each individual link. If an error is introduced
inside one of the routers, this error will not be caught by the data link layer, which only
detects errors introduced between the beginning and end of a single
link. Therefore, the transport layer checks for errors end-to-end to
ensure that the packet has arrived correctly.

Sequence Control

o The second aspect of reliability is sequence control, which is implemented at the
transport layer.
o On the sending end, the transport layer is responsible for ensuring that the data units
received from the upper layers are usable by the lower layers. On the receiving end,
it ensures that the various segments of a transmission can be correctly reassembled.

Loss Control

Loss control is the third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not just some of them. On the
sending end, all the fragments of a transmission are given sequence numbers by the
transport layer. These sequence numbers allow the receiver's transport layer to identify
any missing segment.

Duplication Control

Duplication control is the fourth aspect of reliability. The transport layer guarantees
that no duplicate data arrive at the destination. Just as sequence numbers are used to
identify lost packets, they also allow the receiver to identify and discard duplicate
segments.
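
A small Python sketch of how a receiving transport entity might use sequence
numbers for loss and duplication control. The sequence numbers and the expected
count are made-up values, and real protocols such as TCP number bytes rather
than whole segments.

# Loss control and duplication control via sequence numbers (simplified).
def check_sequence(received_seqs, expected_count):
    seen = set()
    duplicates = []
    for seq in received_seqs:
        if seq in seen:
            duplicates.append(seq)   # duplication control: repeats are discarded
        seen.add(seq)
    # loss control: any expected sequence number never seen is missing
    missing = [s for s in range(expected_count) if s not in seen]
    return missing, duplicates

print(check_sequence([0, 1, 1, 3], expected_count=4))   # ([2], [1])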

Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the
receiver is overloaded with too much data, it discards packets and
asks for their retransmission. This increases network congestion and thus
reduces system performance. The transport layer is responsible for flow control.
It uses the sliding window protocol, which makes data transmission more efficient
and controls the flow of data so that the receiver does not become overwhelmed.
The sliding window protocol at the transport layer is byte oriented rather than frame oriented.
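
A simplified Python sketch of window-based flow control, where the receiver
advertises how much buffer space remains and the sender refuses to exceed it.
The buffer size and byte counts are assumptions for illustration only.

# Sliding-window flow control sketch (assumed 4096-byte receive buffer).
class Receiver:
    def __init__(self, buffer_size=4096):
        self.buffer_size = buffer_size
        self.buffered = 0

    def receive(self, nbytes):
        self.buffered += nbytes
        return self.buffer_size - self.buffered   # advertised window in the ACK

    def consume(self, nbytes):                    # application reads data out
        self.buffered -= nbytes

class Sender:
    def __init__(self):
        self.window = 4096                        # last advertised window

    def can_send(self, nbytes):
        return nbytes <= self.window

rx, tx = Receiver(), Sender()
tx.window = rx.receive(3000)    # after 3000 bytes the window shrinks to 1096
print(tx.can_send(2000))        # False: this would overwhelm the receiver
rx.consume(3000)                # the application empties the buffer again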

Multiplexing
The transport layer uses multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

o Upward multiplexing: Upward multiplexing means that multiple transport layer
connections use the same network connection. To make transmission more cost-effective, the
transport layer sends several transmissions bound for the same destination along the
same path; this is achieved through upward multiplexing.
o Downward multiplexing: Downward multiplexing means one transport layer
connection uses multiple network connections. Downward multiplexing
allows the transport layer to split a connection among several paths to improve
throughput. This type of multiplexing is used when the networks have a low or
slow capacity.

Addressing
o According to the layered model, the transport layer interacts with the functions of the
session layer. Many protocol suites combine the session, presentation, and application layer
protocols into a single layer known as the application layer. In these cases, delivery to
the session layer means delivery to the application layer. Data generated by an
application on one machine must be transmitted to the correct application on another
machine. In this case, addressing is provided by the transport layer.
o The transport layer provides the user address, which is specified as a station or port.
The port variable represents a particular TS user of a specified station, known as a
Transport Service Access Point (TSAP). Each station has only one transport entity.
o The transport layer protocols need to know which upper-layer protocols are
communicating.
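
A brief Python socket sketch of transport addressing: the (IP address, port)
pair acts as the transport service access point that identifies a particular
application on a station. The port number 8080 is a made-up example.

# Transport addressing sketch: a port identifies the application, not the host.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))   # port 8080 = this application's TSAP (assumed)
server.listen()

# A client addresses that specific application on the host, not just the host:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))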

What are the elements of Transport Protocol?


Error Control

Error detection and error recovery are an integral part of reliable service, and
therefore it is necessary to perform error control on an end-to-
end basis. To control errors from lost or duplicate segments, the transport layer
assigns unique sequence numbers to the different segments of the
message, creating virtual circuits and allowing only one virtual circuit per session.

Flow Control

The underlying rule of flow control is to maintain a synergy between a fast
process and a slow process. The transport layer enables a fast process to keep
pace with a slow one. Acknowledgements are sent back to manage end-to-end
flow control. The Go-Back-N algorithm is used to request retransmission of packets
starting with packet number N, while Selective Repeat is used to request that specific
packets be retransmitted.
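
A minimal Python sketch of the Go-Back-N idea described above: on detecting a
loss, the sender retransmits every packet starting from the oldest
unacknowledged one. The window size, the frame numbers, and the single "lost"
frame are assumptions, and ACKs and timers are modelled only implicitly.

# Go-Back-N sender sketch (assumed window of 4, frame 2 lost once).
WINDOW = 4

def go_back_n(frames, lost):
    base, next_seq, sent_log = 0, 0, []
    while base < len(frames):
        # Send everything the window currently allows.
        while next_seq < base + WINDOW and next_seq < len(frames):
            sent_log.append(next_seq)
            next_seq += 1
        if base in lost:
            lost.discard(base)      # assume the retransmission succeeds
            next_seq = base         # timeout: go back to the oldest unacked frame
        else:
            base += 1               # cumulative ACK advances the window
    return sent_log

print(go_back_n(list(range(6)), lost={2}))   # frames 2..5 are sent twice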

Connection Establishment/Release

The transport layer creates and releases the connection across the network. This
includes a naming mechanism so that a process on one machine can indicate with
whom it wishes to communicate. The transport layer enables us to establish and
delete connections across the network to multiplex several message streams onto
one communication channel.

Multiplexing/Demultiplexing

The transport layer establishes a separate network connection for each transport
connection required by the session layer. To improve throughput, the transport
layer establishes multiple network connections. When the issue of throughput is
not important, it multiplexes several transport connections onto the same network
connection, thus reducing the cost of establishing and maintaining the network
connections.

When several connections are multiplexed, they call for demultiplexing at the
receiving end. In the case of the transport layer, the communication takes place
only between two processes and not between two machines. Hence,
communication at the transport layer is also known as peer-to-peer or process-to-
process communication.
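
A small Python sketch of demultiplexing at the receiving end: the transport
layer inspects the destination port in each segment header and hands the
payload to the process bound to that port. The port-to-process table here is
made up for illustration.

# Demultiplexing sketch: destination port -> receiving process.
port_table = {
    80: "web server process",
    53: "DNS resolver process",
    25: "mail server process",
}

def demultiplex(segment):
    dst_port, payload = segment
    process = port_table.get(dst_port)
    if process is None:
        return f"port {dst_port} unreachable"   # no process bound to that port
    return f"deliver {payload!r} to {process}"

print(demultiplex((80, b"GET / HTTP/1.1")))
print(demultiplex((9999, b"???")))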

Fragmentation and re-assembly

When the transport layer receives a large message from the session layer, it breaks
the message into smaller units depending upon the requirement. This process is
called fragmentation. Thereafter, it is passed to the network layer. Conversely,
when the transport layer acts as the receiving process, it reorders the pieces of a
message before reassembling them into a message.

Addressing

The transport layer deals with addressing, or labelling, a segment. It also differentiates
between a connection and a transaction. Connection identifiers are ports or
sockets that label each segment, so the receiving device knows which process it has
been sent from. This helps in keeping track of multiple-message conversations.
Ports or sockets address multiple conversations in the same location.

Congestion Control
TCP congestion control is a method used by the TCP protocol to manage
data flow over a network and prevent congestion. TCP uses a congestion
window and a congestion policy that avoid congestion. Previously, we
assumed that only the receiver could dictate the sender’s window size. We
ignored another entity here: the network. If the network cannot deliver the
data as fast as it is created by the sender, it must tell the sender to slow
down. In other words, in addition to the receiver, the network is a second
entity that determines the size of the sender’s window.
Congestion Policy in TCP
1. Slow Start Phase: Starts slowly; the window increment is exponential until the
threshold is reached.
2. Congestion Avoidance Phase: After reaching the threshold, the increment is
by 1 per round trip.
3. Congestion Detection Phase: On detecting congestion, the sender goes back to the
Slow Start phase or the Congestion Avoidance phase.

Slow Start Phase

Exponential increment: In this phase after every RTT the congestion window
size increments exponentially.
Example:- If the initial congestion window size is 1 segment, and the first
segment is successfully acknowledged, the congestion window size becomes
2 segments. If the next transmission is also acknowledged, the congestion
window size doubles to 4 segments. This exponential growth continues as long
as all segments are successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8

Congestion Avoidance Phase

Additive increment: This phase starts after the threshold value, also denoted
as ssthresh, is reached. The size of cwnd (congestion window) increases additively. After
each RTT, cwnd = cwnd + 1.
Example:- If the congestion window size is 20 segments and all 20 segments
are successfully acknowledged within an RTT, the congestion window size
is increased to 21 segments in the next RTT. If all 21 segments are
again successfully acknowledged, the congestion window size is
increased to 22 segments, and so on.
Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3

Congestion Detection Phase

Multiplicative decrement: If congestion occurs, the congestion window size
is decreased. The only way a sender can guess that congestion has happened
is the need to retransmit a segment. Retransmission is needed to recover a
missing packet that is assumed to have been dropped by a router due to
congestion. Retransmission can occur in one of two cases: when the RTO
timer times out or when three duplicate ACKs are received.
Case 1: Retransmission due to Timeout – In this case, the congestion
possibility is high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with the slow start phase again.
Case 2: Retransmission due to Three Duplicate Acknowledgements – The
congestion possibility is lower.
(a) ssthresh value reduces to half of the current window size.
(b) set cwnd= ssthresh
(c) start with congestion avoidance phase
Example
Assume a TCP connection exhibiting slow start behaviour. At the 5th
transmission round, with a threshold (ssthresh) value of 32, it goes into the
congestion avoidance phase and continues until the 10th transmission round. At the
10th transmission round, 3 duplicate ACKs are received by the sender and the
connection re-enters additive increase mode. A timeout occurs at the 16th transmission
round. Plot the transmission round (time) versus the congestion window size of the
TCP connection.
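
A hedged numerical sketch (in Python) of the scenario above, applying the slow
start, congestion avoidance, and congestion detection rules described earlier.
The initial cwnd of 1 and the exact rounds at which the phases change are
assumptions, so the values may differ slightly from the intended textbook plot;
printing the history gives the (round, cwnd) points to plot.

# Congestion window per transmission round (simplified simulation).
ssthresh, cwnd = 32, 1
events = {10: "3-dup-ack", 16: "timeout"}   # loss events from the problem statement

history = []
for rnd in range(1, 21):
    history.append((rnd, cwnd, ssthresh))
    if events.get(rnd) == "timeout":
        ssthresh = max(cwnd // 2, 1)   # multiplicative decrease
        cwnd = 1                       # restart with slow start
    elif events.get(rnd) == "3-dup-ack":
        ssthresh = max(cwnd // 2, 1)
        cwnd = ssthresh                # resume with congestion avoidance
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow start: exponential growth
    else:
        cwnd += 1                      # congestion avoidance: additive increase

for rnd, w, th in history:
    print(f"round {rnd:2d}: cwnd = {w:2d}, ssthresh = {th}")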

The Internet Transport Protocols:


The transport layer is represented mainly by the TCP and UDP protocols. Today
almost all operating systems support multiprocessing, multi-user
environments. These transport layer protocols provide connections to
individual ports, which are known as protocol ports. Transport layer
protocols work above the IP protocol and deliver data packets from the IP
service to the destination port, and from the originating port to the destination IP
service. Below are the protocols used at the transport layer.
1. UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides non-sequenced transport
functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less
important than speed and size.
o UDP is an end-to-end transport level protocol that adds transport-level
addresses, checksum error control, and length information to the data from
the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
o Services provided by User Datagram Protocol(UDP) are connectionless
service, faster delivery of messages, checksum, and process-to-process
communication.
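
A minimal Python socket sketch of UDP's connectionless, process-to-process
service; the loopback address, port 9999, and the message are assumptions.

# UDP sketch: no connection setup, each datagram is sent and received as a unit.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))              # no listen/accept: connectionless

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))    # no handshake, just send

data, addr = receiver.recvfrom(2048)            # one datagram arrives as one unit
print(data, "from", addr)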

User Datagram Format


The user datagram has an 8-byte header, which is shown below:

Where,

o Source port address: It defines the address of the application process that has
delivered the message. The source port address is a 16-bit field.
o Destination port address: It defines the address of the application process that
will receive the message. The destination port address is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a
16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
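
A short Python sketch that packs the four 16-bit fields above into an 8-byte UDP
header using the struct module. The port numbers and payload length are made up,
and the checksum is left as zero rather than computing the real Internet checksum
over the pseudo-header and data.

# Building an 8-byte UDP header from its four 16-bit fields.
import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    total_length = 8 + payload_len            # header (8 bytes) + data
    # Four 16-bit fields packed in network (big-endian) byte order.
    return struct.pack("!HHHH", src_port, dst_port, total_length, checksum)

hdr = udp_header(53000, 53, payload_len=12)   # made-up ports and length
print(len(hdr), hdr.hex())                    # 8 bytes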
Advantages of UDP
 UDP also provides multicast and broadcast transmission of data.
 UDP protocol is preferred more for small transactions such as DNS
lookup.
 It is a connectionless protocol, therefore there is no compulsion to have a
connection-oriented network.
 UDP provides fast delivery of messages.
Disadvantages of UDP
 In the UDP protocol there is no guarantee that the packet is delivered.
 The UDP protocol provides no recovery from packet loss.
 The UDP protocol has no congestion control mechanism.
 The UDP protocol does not provide sequential transmission of data.

2. TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning that a connection is established between
both ends before transmission. To create the connection, TCP generates
a virtual circuit between sender and receiver for the duration of the transmission.

Features Of TCP protocol


o Stream data transfer: The TCP protocol transfers data in the form of a contiguous
stream of bytes. TCP groups the bytes into TCP segments and then passes them
to the IP layer for transmission to the destination. TCP itself segments the data and
forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP. If ACK is not received within a
timeout interval, then the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they arrive
out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender
indicating the number of bytes it can receive without overflowing its internal buffer.
The number of bytes is sent in the ACK in the form of the highest sequence number that it
can receive without any problem. This mechanism is also referred to as a window
mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different
applications and forwarding it to different applications on different computers. At
the receiving end, the data is forwarded to the correct application; this process is
known as demultiplexing. TCP delivers the packet to the correct application by using
logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and window
sizes is called a logical connection. Each connection is identified by the pair of sockets
used by the sending and receiving processes.
o Full Duplex: TCP provides a full-duplex service, i.e., data can flow in both directions
at the same time. To achieve full-duplex service, each TCP must have sending and
receiving buffers so that segments can flow in both directions. TCP is a
connection-oriented protocol. Suppose process A wants to send data to and receive
data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.
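
A compact Python socket sketch of these three steps on the loopback interface;
the port number 6000 and the messages are assumptions, and the server side runs
in a thread only so the example is self-contained.

# TCP connection lifecycle sketch: establish, exchange data both ways, terminate.
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 6000))
srv.listen()

def peer_b():
    conn, _ = srv.accept()                 # connection established with A
    print("B received:", conn.recv(1024))
    conn.sendall(b"reply from B")          # data also flows from B to A
    conn.close()                           # B's side of termination

threading.Thread(target=peer_b).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 6000))           # step 1: establish the connection
cli.sendall(b"hello from A")               # step 2: exchange data in both directions
print("A received:", cli.recv(1024))
cli.close()                                # step 3: the connection is terminated
srv.close()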

TCP Segment Format


Where,

o Source port address: It is used to define the address of the application program in a
source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program
in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The
32-bit sequence number field represents the position of the segment's data in the original
data stream.
o Acknowledgement number: A 32-bit acknowledgement number acknowledges the
data received from the other communicating device. If the ACK flag is set to 1, this field
specifies the sequence number that the receiver is expecting to receive.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15
words. Therefore, the maximum size of the TCP header is 60 bytes, and the minimum
size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A
control bit defines the use of a segment or serves as a validity check for other fields.

There are total six types of flags in control field:


o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH field requests that the data be pushed through to the receiving
application as soon as possible rather than waiting for buffers to fill.
o RST: The reset bit is used to reset the TCP connection when there is confusion
in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has
finished sending data. It is used in connection termination in three types of segments:
termination request, termination confirmation, and acknowledgement of termination
confirmation.
o Window Size: The window is a 16-bit field that defines the size of the window.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from
the sequence number that points to the last urgent data byte.
o Options and padding: It defines the optional fields that convey the additional
information to the receiver.
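
A short Python sketch that packs a minimal 20-byte TCP header (HLEN = 5, no
options) following the field layout above. The port numbers, sequence numbers,
and window value are illustrative assumptions, and the checksum is left as zero.

# Packing a minimal 20-byte TCP header with struct.
import struct

def tcp_header(src_port, dst_port, seq, ack, flags, window, checksum=0, urg_ptr=0):
    hlen = 5                                       # header length in 32-bit words
    offset_reserved_flags = (hlen << 12) | flags   # 4-bit HLEN, reserved bits, 6 flag bits
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,         # 16-bit port addresses
                       seq, ack,                   # 32-bit sequence / acknowledgement numbers
                       offset_reserved_flags,
                       window, checksum, urg_ptr)

SYN, ACK = 0x02, 0x10                              # SYN and ACK control bits
hdr = tcp_header(49152, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(len(hdr), hdr.hex())                         # 20 bytes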

Advantages of TCP
 TCP supports multiple routing protocols.
 The TCP protocol operates independently of the operating system.
 The TCP protocol provides error control and flow control.
 TCP is a connection-oriented protocol and guarantees the delivery of
data.
Disadvantages of TCP
 The TCP protocol cannot be used for broadcast or multicast transmission.
 The TCP protocol has no block boundaries.
 No clear separation is offered by the TCP protocol between its
interface, services, and protocols.
 In the TCP/IP suite, replacing a protocol is difficult.
Differences b/w TCP & UDP

o Definition: TCP establishes a virtual circuit before transmitting the data, whereas UDP
transmits the data directly to the destination computer without verifying whether the
receiver is ready to receive it or not.
o Connection Type: TCP is a connection-oriented protocol; UDP is a connectionless protocol.
o Speed: TCP is slow; UDP is fast.
o Reliability: TCP is a reliable protocol; UDP is an unreliable protocol.
o Header size: TCP has a 20-byte header; UDP has an 8-byte header.
o Acknowledgement: TCP waits for the acknowledgement of data and has the ability to
resend lost packets; UDP neither takes an acknowledgement nor retransmits damaged
segments.

Performance Problems in Computer Networks


Network performance refers to the quality and speed of a network's
transmission of data between devices. It is typically measured by factors such
as bandwidth, latency, and throughput.

Network performance is important because it determines how well devices can
communicate with each other and access the resources they need, such as the
internet or shared files. Poor network performance can lead to slow response
times, reduced productivity, and other problems.
Five Common Potential Issues that can Affect Network Performance
 Bandwidth bottlenecks − If the network's available bandwidth is
inadequate for the number and type of devices and applications using it,
performance can suffer.
A bandwidth bottleneck is a network performance issue that occurs when
the available bandwidth of the network is not sufficient to handle the
volume of data being transmitted. This can result in slow response times
and decreased performance for devices on the network.

 Interference − Physical objects or other electronic devices can interfere
with wireless signals, causing them to degrade and reducing network
performance.

Interface-related network performance issues can occur when there are
problems with the hardware or software interfaces that connect devices to
the network. These issues can affect the ability of devices to communicate
with each other and access network resources, leading to reduced
performance.

Some common interface-related problems include −

Incorrectly configured interfaces − If an interface is not configured
properly, it may not be able to communicate with other devices on the
network.
Faulty hardware − Physical issues with an interface, such as a damaged
connector or malfunctioning hardware, can prevent it from working
properly.
Incompatible software − If the software that controls an interface is not
compatible with the rest of the network, it may not function correctly.

 Congestion − When too many devices are trying to use the network at the
same time, congestion can occur, leading to slow performance.

Network congestion is a performance issue that occurs when there are
too many devices trying to use the network at the same time. This can
result in slow response times, dropped connections, and other problems.
 Malware − Malware, such as viruses and worms, can compromise the
performance of individual devices and the network as a whole.

Malware is software that is specifically designed to harm or exploit
computer systems. It can take many forms, including viruses, worms,
Trojans, and ransomware. Malware can have a significant impact on
network performance by consuming resources, slowing down devices,
and disrupting communication between devices.

 Outdated hardware or software − Using outdated equipment or software
can limit the network's capabilities and lead to poor performance.

Hardware and software issues can both affect network performance.
Hardware problems can occur when there are physical issues with the
devices or equipment that make up the network, while software
problems can occur when there are issues with the programs and
operating systems that run on the devices.

Some common hardware-related network performance issues include −

 Faulty hardware − If a device or piece of equipment is malfunctioning, it
can disrupt communication on the network and reduce performance.
 Incompatible hardware − If the hardware on a device is not compatible
with the rest of the network, it may not function correctly and could cause
performance issues.
 Insufficient hardware resources − If a device does not have enough
processing power, memory, or other resources, it may not be able to handle
the demands placed on it, leading to reduced performance.

Some common software-related network performance issues include −

 Outdated software − If the software on a device is outdated, it may not be
able to take advantage of newer technologies or may not be compatible
with the rest of the network, leading to reduced performance.
 Software bugs − If the software on a device contains bugs or errors, it can
cause performance issues or even crash the device.
 Inefficient software − If the software on a device is not optimized for
performance, it may consume more resources than necessary, leading to
reduced performance.
