Transport Layer UNIT 4
TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning a connection is established between both
ends before transmission. To create the connection, TCP sets up a virtual circuit
between sender and receiver for the duration of the transmission.
TCP Segment Format
The TCP segment header contains the following fields:
o Source port address: It is used to define the address of the application program in a
source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program
in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. This
32-bit field represents the position of the segment's data in the original data stream.
o Acknowledgement number: This 32-bit field acknowledges data received from the
other communicating device. When the ACK control bit is set to 1, this field specifies
the sequence number that the receiver expects to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of
the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A
control bit defines the use of a segment or serves as a validity check for other fields.
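Since the fixed part of the header is exactly 20 bytes laid out as described above, it can be unpacked field by field. Below is a minimal sketch using Python's struct module; the sample segment is fabricated purely for illustration.

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    # !     = network (big-endian) byte order
    # H H   = 16-bit source and destination ports
    # I I   = 32-bit sequence and acknowledgement numbers
    # B     = HLEN (top 4 bits) plus the start of the reserved field
    # B     = rest of the reserved field plus the 6 control bits
    # H H H = window size, checksum, urgent pointer
    (src, dst, seq, ack, offset_reserved,
     flags, window, checksum, urg) = struct.unpack("!HHIIBBHHH", raw[:20])
    hlen_words = offset_reserved >> 4        # HLEN counts 32-bit words
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_bytes": hlen_words * 4,      # 5 words -> 20-byte minimum
        "control_bits": flags & 0x3F,        # URG/ACK/PSH/RST/SYN/FIN
        "window": window,
    }

# fabricated segment: ports 1234 -> 80, HLEN = 5 words, ACK bit set
sample = struct.pack("!HHIIBBHHH", 1234, 80, 1000, 2000, 5 << 4, 0x10, 65535, 0, 0)
print(parse_tcp_header(sample))
```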
UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol that provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed
and size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
The user datagram header contains the following fields:
o Source port address: It defines the address of the application process that has
delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that will
receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
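To contrast UDP's connectionless delivery with TCP, here is a minimal sketch using Python's socket API (the address is a placeholder): there is no connection handshake, and each sendto() hands an independent user datagram to the network.

```python
import socket

# SOCK_DGRAM selects UDP: no connection setup, no handshake.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.sendto(b"hello", ("127.0.0.1", 9999))   # placeholder address
    # Delivery, ordering, and duplicate suppression are NOT guaranteed;
    # the application must cope with loss if it matters.
```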
TCP vs. UDP
o Definition: TCP establishes a virtual circuit before transmitting the data, whereas UDP
transmits the data directly to the destination computer without verifying whether the
receiver is ready to receive it.
o Acknowledgement: TCP waits for acknowledgement of the data and can resend lost
packets, whereas UDP neither takes acknowledgement nor retransmits a damaged frame.
Process-to-Process Delivery
Process-to-Process Delivery: A transport-layer protocol's first task is to
perform process-to-process delivery. A process is an application-layer entity
that uses the services of the transport layer. Two processes communicate
through a client/server relationship.
Client/Server Paradigm
In this paradigm, a client process on the local host requests a service from a
server process on the remote host; both processes are identified by port numbers.
Congestion
Congestion is a state that may occur in the network layer as a consequence of
packet-switched data transmission. It is a situation where the number of
packets sent into the network exceeds what the network can carry, which builds
up message traffic and slows down the data transmission rate.
In short, congestion means traffic in the network caused by too many packets
being present in the subnet.
For example, congestion is much like the road traffic we occasionally encounter
while travelling: both occur for essentially the same reason, namely that the
load is greater than the available resources.
Effect Of Congestion
Congestion affects the main functions of the network: queuing delays grow,
packets are dropped when buffers overflow, and the resulting retransmissions
add even more load, further reducing throughput.
Congestion Control
Congestion control is a method of monitoring traffic over the network to keep
it at an acceptable level, so that congestion can be prevented before it occurs
or removed if it has already occurred.
We can deal with congestion either by increasing the resources or by reducing
the load. We will discuss a few techniques as well as algorithms for
congestion control.
Congestion Control Techniques
There are two categories of congestion control techniques: open-loop
congestion control, which deals with prevention (policies applied before
congestion happens, such as the retransmission and window policies below),
and closed-loop congestion control, which deals with cure (mechanisms that
detect and relieve congestion after it happens, such as backpressure and
choke packets).
a. Retransmission Policy: Under this policy, if the sender believes that a
message it sent was lost or corrupted, it retransmits the message.
Retransmission can itself add to congestion, so the retransmission timer
must be designed to avoid aggravating congestion while still providing
good efficiency.
b. Window Policy: Two window policies can be used at the sender side to
control congestion; the sketch after this list compares them.
o Go-Back-N Window: This policy retransmits the entire window of packets
even if only a single packet is lost or corrupted in transit. It can
therefore cause duplication and increase congestion in the network.
o Selective Repeat Window: This policy is the better choice for
congestion control, as it retransmits only the specific lost or
corrupted packets.
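The following toy model (an illustrative sketch, not a full protocol implementation) shows why Go-Back-N injects more duplicate traffic than Selective Repeat when a packet in the window is lost:

```python
def retransmissions(window, lost, policy):
    """Which frames get resent when some frames in one window are lost?

    Toy model: 'go-back-n' resends the first lost frame and everything
    after it; 'selective-repeat' resends only the lost frames themselves.
    """
    if policy == "go-back-n":
        first_loss = min(f for f in window if f in lost)
        return [f for f in window if f >= first_loss]
    if policy == "selective-repeat":
        return [f for f in window if f in lost]
    raise ValueError(policy)

window = [1, 2, 3, 4, 5]
print(retransmissions(window, {2}, "go-back-n"))         # [2, 3, 4, 5]: extra load
print(retransmissions(window, {2}, "selective-repeat"))  # [2]: only the lost frame
```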
Backpressure: Suppose congestion occurs at the 3rd node along a path. That node
refuses to receive packets from its upstream node, which creates congestion at
the 2nd node, which in turn refuses packets from the 1st node. This back
pressure propagates node by node toward the source and informs it to slow down.
Choke Packet: The choke packet technique informs the source directly about
congestion at a particular node: the congested node sends a choke packet
straight back to the source so that the source can slow down its transmission
rate. For example, if congestion occurs at the 3rd node in the system, that
node sends a choke packet to the source to report the congestion and request
a lower transmission rate.
For example, the source sends packets and waits for acknowledgement from the
receiver; if there is no response for a while, the source assumes that
congestion may have occurred and slows down its transmission rate.
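The notes do not prescribe how exactly the source should slow down, but a common reaction is the additive-increase/multiplicative-decrease (AIMD) idea used by TCP congestion control. Here is a toy sketch of that reaction, with rate units and constants chosen arbitrarily for illustration:

```python
def adjust_rate(rate, ack_received, increase=1.0, decrease_factor=0.5):
    """Toy AIMD reaction to an implicit congestion signal: a missing
    acknowledgement is read as congestion, so the rate is cut
    multiplicatively; otherwise the sender probes upward additively."""
    if ack_received:
        return rate + increase               # cautiously probe for bandwidth
    return max(1.0, rate * decrease_factor)  # back off on suspected congestion

rate = 10.0
for ack in [True, True, False, True]:        # False models a timeout
    rate = adjust_rate(rate, ack)
    print(rate)                              # 11.0, 12.0, 6.0, 7.0
```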
Network Traffic
Network traffic is the amount of data moving across a computer network at any given time.
Network traffic, also called data traffic, is broken down into data packets and sent over a network
before being reassembled by the receiving device or computer.
Network traffic has two directional flows, north-south and east-west. Traffic affects network
quality because an unusually high amount of traffic can mean slow download speeds or spotty
Voice over Internet Protocol (VoIP) connections. Traffic is also related to security because an
unusually high amount of traffic could be the sign of an attack.
Data Packets
When data travels over a network or over the internet, it must first be broken down into smaller
batches so that larger files can be transmitted efficiently. The network breaks down, organizes,
and bundles the data into data packets so that they can be sent reliably through the network and
then opened and read by another user in the network. Each packet takes the best route possible to
spread network traffic evenly.
North-south Traffic
North-south traffic refers to client-to-server traffic that moves between the data center and the
rest of the network (i.e., a location outside of the data center).
East-west Traffic
East-west traffic refers to traffic within a data center, also known as server-to-server traffic.
Types of Network Traffic
To better manage bandwidth, network administrators decide how certain types of traffic are to be
treated by network devices like routers and switches. There are two general categories of network
traffic: real-time and non-real-time.
Real-time Traffic
Traffic deemed important or critical to business operations must be delivered on time and with
the highest quality possible. Examples of real-time network traffic include VoIP,
videoconferencing, and web browsing.
Non-real-time Traffic
Non-real-time traffic, also known as best-effort traffic, is traffic that network administrators
consider less important than real-time traffic. Examples include File Transfer Protocol (FTP) for
web publishing and email applications.
Quality of Service (QoS)
Quality of service (QoS) is the use of mechanisms or technologies that work on a network to
control traffic and ensure the performance of critical applications with limited network capacity.
It enables organizations to adjust their overall network traffic by prioritizing specific high-
performance applications.
QoS is typically applied to networks that carry traffic for resource-intensive systems. Common
services for which it is required include internet protocol television (IPTV), online gaming,
streaming media, videoconferencing, video on demand (VOD), and Voice over IP (VoIP).
Using QoS in networking, organizations have the ability to optimize the performance of multiple
applications on their network and gain visibility into the bit rate, delay, jitter, and packet rate of
their network. This ensures they can engineer the traffic on their network and change the way that
packets are routed to the internet or other networks to avoid transmission delay. This also ensures
that the organization achieves the expected service quality for applications and delivers expected
user experiences.
The key goal of QoS is to enable networks and organizations to prioritize traffic,
which includes offering dedicated bandwidth, controlled jitter, and lower latency. The
technologies used to ensure this are vital to enhancing the performance of business applications,
wide-area networks (WANs), and service provider networks.
How Does QoS Work?
QoS networking technology works by marking packets to identify service types, then configuring
routers to create separate virtual queues for each application, based on their priority. As a result,
bandwidth is reserved for critical applications or websites that have been assigned priority
access.
QoS technologies provide capacity and handling allocation to specific flows in network traffic.
This enables the network administrator to assign the order in which packets are handled and
provide the appropriate amount of bandwidth to each application or traffic flow.
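A simple way to picture these per-priority virtual queues is a scheduler that always dequeues the highest-priority packet first. The sketch below is an illustrative toy model in Python, not any particular router's implementation; the class name and priority values are made up.

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Toy model of QoS queuing: packets marked with a lower priority
    number are dequeued first; a counter preserves FIFO order within
    the same priority class."""
    def __init__(self):
        self._heap = []
        self._order = count()

    def enqueue(self, packet: bytes, priority: int) -> None:
        heapq.heappush(self._heap, (priority, next(self._order), packet))

    def dequeue(self) -> bytes:
        _, _, packet = heapq.heappop(self._heap)
        return packet

sched = PriorityScheduler()
sched.enqueue(b"bulk-ftp-data", priority=3)
sched.enqueue(b"voip-frame", priority=0)   # voice marked highest priority
print(sched.dequeue())                      # b'voip-frame' leaves first
```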
Traffic Characteristics Measured by QoS
Understanding how QoS network software works relies on defining the various
characteristics of the traffic that it measures. These are:
1. Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.
2. Delay: The time it takes for a packet to go from its source to its end destination. This can often
be affected by queuing delay, which occurs during times of congestion and a packet waits in a
queue before being transmitted. QoS enables organizations to avoid this by creating a priority
queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically occurs due to network
congestion. QoS enables organizations to decide which packets to drop in this event.
4. Jitter: The irregular speed of packets on a network as a result of congestion, which can result in
packets arriving late and out of sequence. This can cause distortion or gaps in audio and video
being delivered.
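For a concrete sense of how jitter can be quantified, here is a sketch of the smoothed interarrival-jitter estimator defined for RTP in RFC 3550, where the running estimate moves 1/16 of the way toward each new change in transit time; the transit times below are fabricated.

```python
def rtp_jitter(transit_times):
    """Smoothed jitter estimator from RTP (RFC 3550): D is the change in
    transit time between consecutive packets, and the running estimate J
    moves 1/16 of the way toward |D| for each packet."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# fabricated one-way transit times in milliseconds; uneven values = jitter
print(round(rtp_jitter([20.0, 22.0, 21.0, 30.0, 20.5]), 3))
```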
Implementing QoS begins with an enterprise identifying the types of traffic that are important to
it, that use high volumes of bandwidth, and/or that are sensitive to latency or packet loss.
This helps the organization understand the needs and importance of each traffic type on its
network and design an overall approach. For example, some organizations may only need to
configure bandwidth limits for specific services, whereas others may need to fully configure
interface and security policy bandwidth limits for all their services, as well as prioritize queuing
critical services relative to traffic rate.
The organization can then deploy policies that classify traffic and ensure the availability and
consistency of its most important applications. Traffic can be classified by port or internet
protocol (IP), or through a more sophisticated approach such as by application or user.
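As an illustration of the simpler, port-based end of that spectrum, here is a toy classifier sketch; the port-to-class mapping is invented for the example, not a standard.

```python
# Toy classifier: map well-known destination ports to traffic classes;
# anything unlisted falls back to best-effort. (Mapping is illustrative.)
PORT_CLASSES = {
    5060: "voice-signaling",  # SIP
    443: "web",               # HTTPS
    21: "bulk",               # FTP control
}

def classify(dst_port: int) -> str:
    return PORT_CLASSES.get(dst_port, "best-effort")

print(classify(5060))  # voice-signaling
print(classify(8080))  # best-effort
```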
Bandwidth management and queuing tools are then assigned roles to handle traffic flow
specifically based on the classification they received when they entered the network. This allows
for packets within traffic flows to be stored until the network is ready to process them. Priority
queuing can also be used to ensure the necessary availability and minimal latency of network
performance for important applications and traffic. This is so that the network’s most important
activities are not starved of bandwidth by those of lesser priority.
Furthermore, bandwidth management measures and controls traffic flow on the network
infrastructure to ensure it does not exceed capacity and prevent congestion. This includes using
traffic shaping, a rate-limiting technique that optimizes or guarantees performance and increases
usable bandwidth, and scheduling algorithms, which offer several methods for providing
bandwidth to specific traffic flows.
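The token bucket is a classic way to implement such rate limiting: tokens accumulate at the permitted rate up to a burst ceiling, and a packet may pass only if enough tokens are available. Below is a minimal sketch with illustrative constants, not a production shaper.

```python
import time

class TokenBucket:
    """Toy traffic shaper: tokens accumulate at `rate` bytes/second up to
    `burst`; a packet is allowed only if enough tokens are available,
    which caps the long-term rate while still permitting short bursts."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False     # caller should queue or drop the packet

bucket = TokenBucket(rate=125_000, burst=10_000)   # ~1 Mbit/s, 10 KB burst
print(bucket.allow(1500))   # True: within the initial burst allowance
```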
Why is QoS Important?
Traditional business networks operated as separate entities. Phone calls and teleconferences were
handled by one network, while laptops, desktops, servers and other devices connected to another.
They rarely crossed paths, unless a computer used a telephone line to access the internet.
When networks only carried data, speed was not overly critical. But now, interactive applications
carrying audio and video content need to be delivered at high speed, without packet loss or
variations in delivery speed.
QoS is particularly important to guarantee the high performance of critical applications that
require high bandwidth for real-time traffic. For example, it helps businesses to prioritize the
performance of “inelastic” applications that often have minimum bandwidth requirements,
maximum latency limits, and high sensitivity to jitter and latency, such as VoIP and
videoconferencing.
QoS helps businesses prevent the delay of these sensitive applications, ensuring they perform to
the level that users require. For example, lost packets could cause a delay to the stream, which
results in the sound and video quality of a videoconference call to become choppy and
indecipherable.
QoS is increasingly important as network performance requirements adapt to the growing number
of people using them. The latest online applications and services require vast amounts of
bandwidth and network performance, and users demand they offer high performance at all times.
Organizations, therefore, need to deploy techniques and technologies that guarantee the best
possible service.
QoS is also becoming increasingly important as the Internet of Things (IoT) continues to come to
maturity. For example, in the manufacturing sector, machines now leverage networks to provide
real-time status updates on any potential issues. Therefore, any delay in feedback could cause
highly costly mistakes in IoT networking. QoS enables the data stream to take priority in the
network and ensures that the information flows as quickly as possible.
Cities are now filled with smart sensors that are vital to running large-scale IoT projects such as
smart buildings. The data collected and analyzed, such as humidity and temperature data, is often
highly time-sensitive and needs to be identified, marked, and queued appropriately.
What Techniques and Best Practices Are Involved in QoS?
Techniques
There are several techniques that businesses can use to guarantee the high performance of their
most critical applications. These include:
Prioritization of delay-sensitive VoIP traffic via routers and switches: Many enterprise
networks can become overly congested, causing routers and switches to drop packets as
they come in and out faster than they can be processed. As a result, streaming applications
suffer. Prioritization enables traffic to be classified and to receive different priorities
depending on its type and destination. This is particularly useful under high congestion,
as packets with higher priority can be sent ahead of other traffic.
Resource reservation: The Resource Reservation Protocol (RSVP) is a transport layer protocol
that reserves resources across a network and can be used to deliver specific levels of QoS for
application data streams. Resource reservation enables businesses to divide network resources
by traffic of different types and origins, define limits, and guarantee bandwidth.
Queuing: Queuing is the process of creating policies that provide preferential treatment to
certain data streams over others. Queues are high-performance memory buffers in routers and
switches, in which packets passing through are held in dedicated memory areas. When a packet
is assigned higher priority, it is moved to a dedicated queue that pushes data at a faster rate,
which reduces the chances of it being dropped. For example, businesses can assign a policy to
give voice traffic priority over the majority of network bandwidth. The routing or switching
device will then move this traffic’s packets and frames to the front of the queue and
immediately transmit them.
Traffic marking: When applications that require priority over other bandwidth on a network
have been identified, the traffic needs to be marked. This is possible through processes like
Class of Service (CoS), which marks a data stream in the Layer 2 frame header, and
Differentiated Services Code Point (DSCP), which marks a data stream in the Layer 3 packet
header.
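As a concrete example of DSCP marking, many socket APIs let an application set the IP TOS byte, whose top six bits carry the DSCP value. The sketch below assumes a platform that exposes IP_TOS (e.g., Linux) and uses DSCP 46, the Expedited Forwarding class commonly used for voice; the destination address is a placeholder.

```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the top 6 bits of the IP TOS
# byte, so the byte value is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Assumes a platform exposing IP_TOS (e.g., Linux).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
# Datagrams sent on this socket now carry the EF marking in their Layer 3
# header, where DSCP-aware routers can match and prioritize them.
sock.sendto(b"voip-payload", ("127.0.0.1", 5004))   # placeholder address
sock.close()
```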
Best Practices
In addition to these techniques, there are also several best practices that organizations should keep
in mind when determining their QoS requirements.
1. Ensure that maximum bandwidth limits at the source interface and in the security policy are
not set too low, which can cause excessive packet discard.
2. Consider the ratio at which packets are distributed between available queues and which queues
are used by which services. This can affect latency levels, queue distribution, and packet
assignment.
3. Only place bandwidth guarantees on specific services. This will avoid the possibility of all traffic
using the same queue in high-volume situations.
4. Configure prioritization for all traffic through either type-of-service-based priority or
security-policy priority, not both. This will simplify analysis and troubleshooting.
5. Try to minimize the complexity of QoS configuration to ensure high performance.
6. To get accurate testing results, use the User Datagram Protocol (UDP), and do not oversubscribe
bandwidth throughput.
Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the availability of their
business-critical applications. It is vital for delivering differentiated bandwidth and ensuring data
transmission takes place without interrupting traffic flow or causing packet losses. Major
advantages of deploying QoS include:
Traffic conditioning – Ensures that the traffic entering the DiffServ domain conforms to the
agreed traffic profile, for example by metering, shaping, or policing it.
Packet classification – Categorizes the packet within a specific group using the traffic
descriptor.