Transport Layer UNIT 4


Transport Layer protocols

o The transport layer is the 4th layer from the top.


o The main role of the transport layer is to provide the communication services
directly to the application processes running on different hosts.
o The transport layer protocols are implemented in the end systems but not in
the network routers.
o A computer network makes more than one protocol available to the network
applications. For example, TCP and UDP are two transport layer protocols, each of
which provides a different set of services to the application layer.
o All transport layer protocols provide a multiplexing/demultiplexing service. A transport
protocol may also provide other services such as reliable data transfer, bandwidth
guarantees, and delay guarantees.
o Each application in the application layer can send a message using either TCP or
UDP. Both TCP and UDP then communicate with the internet protocol in the internet
layer. Applications can both read from and write to the transport layer, so
communication is a two-way process.

o The transport layer is represented by two protocols: TCP and UDP.

TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, which means a connection is established between
both ends of the transmission. To create the connection, TCP sets up a virtual circuit
between sender and receiver for the duration of the transmission.
TCP Segment Format

The TCP segment header contains the following fields:

o Source port address: It is used to define the address of the application program in a
source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program
in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The
32-bit sequence number field represents the position of the data in an original data
stream.
o Acknowledgement number: The 32-bit acknowledgement number field acknowledges
data received from the other communicating device. If the ACK flag is set to 1, this field
contains the sequence number that the receiver expects to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of
the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A
control bit defines the use of a segment or serves as a validity check for other fields.

There are six flags in the control field:


o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH (push) flag asks the receiving TCP to deliver the data to the application
as soon as possible rather than buffering it until a convenient amount has accumulated.
o RST: The reset flag is used to reset the TCP connection when confusion arises in the
sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has
finished sending data. It is used in connection termination in three types of segments:
termination request, termination confirmation, and acknowledgement of termination
confirmation.
o Window Size: This 16-bit field defines the size of the receive window, i.e., how many
bytes the sender may transmit before it must wait for an acknowledgement.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from the
sequence number that points to the last urgent data byte.
o Options and padding: It defines the optional fields that convey the additional
information to the receiver.
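To make the field layout concrete, here is a minimal sketch in Python that packs a 20-byte TCP header in the order of the fields described above. The port numbers, sequence number, and window value are invented examples, and the checksum is left at zero rather than computed; this is an illustration of the layout, not a complete TCP implementation.

```python
import struct

src_port = 50000          # 16-bit source port (example value)
dst_port = 80             # 16-bit destination port (example value)
seq_num = 1000            # 32-bit sequence number (example value)
ack_num = 0               # 32-bit acknowledgement number
data_offset = 5           # header length in 32-bit words (5 words = 20 bytes)
flags = 0b000010          # only SYN set (URG|ACK|PSH|RST|SYN|FIN)
window = 65535            # 16-bit window size
checksum = 0              # normally computed over a pseudo-header + segment
urgent_ptr = 0            # meaningful only when URG is set

# Header length, reserved bits, and flags share one 16-bit field.
offset_reserved_flags = (data_offset << 12) | flags

header = struct.pack(
    "!HHLLHHHH",          # network byte order: 2+2+4+4+2+2+2+2 = 20 bytes
    src_port, dst_port, seq_num, ack_num,
    offset_reserved_flags, window, checksum, urgent_ptr,
)
print(len(header))        # -> 20, the minimum TCP header size
```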

UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol that provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed
and size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format

The user datagram has an 8-byte header consisting of the following four fields:
o Source port address: It defines the address of the application process that has
delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that will
receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
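As a small sketch, the 8-byte UDP header can be packed in Python as shown below. The port numbers and payload are invented examples, and the checksum is left at zero (in IPv4 a zero checksum means "not computed"); in real use it is calculated over a pseudo-header plus the datagram.

```python
import struct

src_port = 40000                  # example ephemeral port
dst_port = 53                     # example destination port (DNS)
payload = b"example query"
total_length = 8 + len(payload)   # header plus data, in bytes
checksum = 0                      # left uncomputed in this sketch

header = struct.pack("!HHHH", src_port, dst_port, total_length, checksum)
datagram = header + payload
print(len(header), len(datagram))  # -> 8, 8 + len(payload)
```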

Basis for Comparison | TCP | UDP
Definition | TCP establishes a virtual circuit before transmitting the data. | UDP transmits the data directly to the destination computer without verifying whether the receiver is ready to receive it.
Connection type | Connection-oriented protocol | Connectionless protocol
Speed | Slow | High
Reliability | Reliable protocol | Unreliable protocol
Header size | 20 bytes (minimum) | 8 bytes
Acknowledgement | Waits for acknowledgement of the data and can resend lost segments. | Neither acknowledges the data nor retransmits lost or damaged segments.
Process-to-Process Delivery
Process-to-Process Delivery: A transport-layer protocol's first task is to
perform process-to-process delivery. A process is an application-layer entity
that uses the services of the transport layer. Two processes communicate with
each other through a client/server relationship.

Client/Server Paradigm

There are many ways to achieve process-to-process communication, and the
most common is the client/server paradigm. The process on the local host that
requests a service is called the client; the process on the remote host that
provides the service is called the server. The same name applies to both
processes (client and server), for example an FTP client and an FTP server.
The combination of an IP address and a port number is called a socket
address, and that address identifies a process on a host.
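A minimal sketch of the socket-address idea: an (IP address, port number) pair identifies one process on one host, and Python's socket API uses exactly this pair for bind() and connect(). The addresses below are invented illustrative values, not anything taken from the text above.

```python
import socket

server_socket_address = ("203.0.113.10", 80)     # a server process (well-known port)
client_socket_address = ("198.51.100.7", 52734)  # a client process (ephemeral port)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("", 0))                    # let the OS choose a local (client) port
print(s.getsockname())             # -> ('0.0.0.0', <ephemeral port>)
# s.connect(server_socket_address) # would contact the server's socket address
s.close()
```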
Multiplexing and Demultiplexing
Multiplexing: At the sender site, several processes may need to send packets
at the same time. Multiplexing is the technique by which the transport layer
accepts data from these processes, adds headers (including port numbers),
and combines them into one outgoing stream for the network layer.
Demultiplexing: At the receiver site, demultiplexing is the technique by which
the transport layer uses the destination port number of each arriving segment
to separate the stream and deliver the data to the correct process (a small
sketch follows).
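The following is a purely conceptual sketch of demultiplexing, not a real kernel implementation: the receiving transport layer looks at the destination port of each segment and hands the payload to the process registered on that port. The handler functions and segments are invented for illustration.

```python
def dns_process(data):
    print("DNS process got:", data)

def web_process(data):
    print("Web process got:", data)

# Processes "bind" to ports; the table maps destination port -> process handler.
port_table = {53: dns_process, 80: web_process}

incoming_segments = [
    {"dst_port": 80, "payload": b"GET / HTTP/1.1"},
    {"dst_port": 53, "payload": b"A? example.com"},
]

for segment in incoming_segments:
    handler = port_table.get(segment["dst_port"])
    if handler is not None:
        handler(segment["payload"])   # demultiplexing: deliver to the right process
    # if no process is bound to the port, a real stack would drop or reject the segment
```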
UDP (User Datagram Protocol)
UDP was developed by David P. Reed in 1980. It is a connectionless and
unreliable protocol: when data transfer occurs, the protocol does not establish
a connection between the sender and receiver, and the receiver does not send
any acknowledgement of the received data; the sender simply transmits the
data directly. In UDP, a data packet is called a datagram. UDP does not
guarantee that your data will reach its destination, nor does it require that the
data reach the receiver in the same order in which the sender sent it.
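A minimal sketch of UDP's connectionless behaviour using Python sockets: the sender simply transmits a datagram with no connection setup and no acknowledgement expected back. The loopback address and port 9999 are invented values for local testing; a real sender and receiver would normally be separate programs.

```python
import socket

# Receiver: bind a datagram socket to a local port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))

# Sender: no connect(), no handshake, no ACK expected back.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello over UDP", ("127.0.0.1", 9999))

data, addr = recv_sock.recvfrom(2048)   # blocks until a datagram arrives
print(data, "from", addr)

send_sock.close()
recv_sock.close()
```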
Transmission Control Protocol
TCP stands for Transmission Control Protocol. It was introduced in 1974. It is
a connection-oriented and reliable protocol. It establishes a connection
between the source and destination devices before starting the communication
and detects whether the destination device has received the data sent by the
source. If the data has not been received correctly, the source sends it again.
TCP is highly reliable because it uses handshaking and flow control
mechanisms. With TCP, data is received in the same sequence in which the
sender sent it. We use TCP-based services in our daily life, such as HTTP,
HTTPS, Telnet, FTP, SMTP, etc.
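A minimal sketch of TCP's connection-oriented behaviour with Python sockets: connect()/accept() trigger the three-way handshake, after which bytes are delivered reliably and in order. Port 8888 on localhost is an invented value for local testing, and in practice the server and client would run as separate programs rather than in one script.

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))     # three-way handshake happens here

conn, addr = server.accept()            # server side of the established connection
client.sendall(b"hello over TCP")       # reliable, ordered byte stream
print(conn.recv(2048), "from", addr)

client.close()
conn.close()
server.close()
```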

Congestion
Congestion is a state that may occur in the network layer when packet
switching is used for data transmission. It is a situation in which the number of
packets sent into the network exceeds what the network can carry, which
creates message traffic and slows down the data transmission rate.
In short, congestion means traffic in the network caused by extra packets
present in the subnet.
For example, congestion can be compared with the road traffic we occasionally
encounter while travelling. Both occur for almost the same reason, i.e., the load
is greater than the available resources.

Causes Of Network Congestion


There may be several reasons for congestion in a network. Let's understand
them so that the necessary preventive steps can be taken. The network team
uses NPMs (Network Performance Monitors) to find any problems that may
arise during transmission.

1. Incompatible or Outdated Hardware: The network team should be aware of
the needs of the enterprise and should keep up with the latest hardware
components, so that devices such as switches, routers, and servers run the
most suitable hardware layout.
2. Poor Network Design and Subnetting: A poorly designed network can lead
to congestion, so the network layout needs to be fully optimised so that every
part of the network can communicate effectively. The subnets should also be
appropriately sized for the expected traffic.
3. Too Many Devices: Too many devices connected to a network will also lead
to congestion, as every network operates with a specific capacity of limited
bandwidth and traffic. If there are more devices on your network than it was
specified for, the NPM will identify them and inform you so that they can be
handled.
4. Bandwidth Hog: A bandwidth hog is a user or device that consumes far
more data than the other devices. A bandwidth hog uses up more resources
and can lead to congestion. The NPM will report when any device drains
bandwidth above the expected level.

Effect Of Congestion
The main function of the network gets affected by the congestion, which
results in :

1. Slowing down the response time


2. Retransmission of data
3. Confidentiality Breach

Congestion Control
Congestion control is a method of monitoring the traffic on the network to keep
it at an acceptable level, so that congestion can be prevented before it occurs
or, if it has already occurred, removed.
We can deal with congestion either by increasing the resources or by reducing
the load. We will discuss a few techniques as well as algorithms for congestion
control.
Congestion Control Techniques
There are two techniques used for congestion control. One technique deals
with prevention while another deals with cure.
The two congestion control techniques are:

1. Open Loop Congestion Control


2. Closed Loop Congestion Control
Let’s discuss each technique in detail

1. Open Loop Congestion Control: This technique is used to prevent
congestion before it occurs. Congestion is handled either by the sender or by
the receiver. In open-loop congestion control, a number of policies are applied
to prevent congestion before it happens.

Let’s discuss each policy of open-loop congestion control technique in


detail:

a. Retransmission Policy: Under this policy, if the sender believes that a
message it sent was lost or corrupted, it retransmits the message. However,
retransmission may itself add to congestion, so the retransmission timer must
be designed to avoid congestion while still providing optimal efficiency.

 Acknowledgement Policy: Choosing the best acknowledgement policy helps
control congestion. Since acknowledgements are also part of the load on the
network, it is better for the receiver to send a single acknowledgement for n
packets when a timer expires rather than to acknowledge each packet
individually.

 Admission Policy: Under the admission policy, switches check the availability
of resources before a transmission is admitted. If there is congestion, or even
a chance of it occurring, the router refuses to establish the virtual circuit
connection.

 Window Policy: Two window policies can be used at the sender side to
control congestion (a small retransmission sketch follows this list).
o Go-Back-N Window: This policy retransmits the whole window of packets,
starting from the one that was lost or corrupted, even though later packets
may have arrived safely. It can therefore cause duplication and increase
congestion in the network.
o Selective Repeat Window: This policy is a better choice for congestion
control, as it retransmits only the packets that were actually lost or
corrupted.

 Discarding Policy: Under the discarding policy, the router discards packets
containing less sensitive or corrupted data in a way that keeps the quality of
the message unaffected.
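The sketch below illustrates the Go-Back-N window policy mentioned above: when a packet in the window is lost, the sender retransmits that packet and everything after it, even if later packets arrived safely. It is a toy simulation, not a real protocol; the packet names, window size, and the choice of which packet is "lost" are invented.

```python
WINDOW_SIZE = 4
packets = [f"pkt-{i}" for i in range(8)]
lost = {2}                      # pretend packet 2 is lost in transit

base = 0                        # oldest unacknowledged packet
while base < len(packets):
    window = packets[base:base + WINDOW_SIZE]
    print("sending window:", window)

    # Receiver accepts packets only in order; everything after a gap is ignored.
    acked_up_to = base
    for i in range(base, min(base + WINDOW_SIZE, len(packets))):
        if i in lost:
            break
        acked_up_to = i + 1

    if acked_up_to == base:     # nothing new acknowledged: timeout, go back N
        print("timeout, going back to", packets[base])
        lost.discard(base)      # assume the retransmission succeeds this time
    base = acked_up_to
```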

Closed Loop Congestion Control: This technique is used to remove congestion
after it has already occurred in the network. Several techniques exist within the
closed-loop approach to deal with congestion that has already occurred.
a. Back Pressure: Several nodes lie between the source and the destination,
and packets pass through them. In this technique, a congested node refuses
to receive packets from the node upstream of it; that node in turn becomes
congested and refuses packets from the node before it. In this way the
pressure propagates back to the source, which is informed that it must slow
down.

For example, if congestion occurs at the third node, it refuses to receive
packets from its upstream node, which creates congestion at the second node;
that node then refuses packets from the first node. This back pressure
eventually informs the source to slow down.
b. Choke Packet: The choke packet technique is used to inform the source
directly about congestion that has occurred at a particular node: the node
sends a choke packet to the source so that the source can slow down its
transmission rate. For example, if congestion occurs at the third node in the
system, that node sends a choke packet to the source to report the congestion
and ask it to reduce its transmission rate.

 Implicit Signaling: This is an assumption-based technique in which no
explicit interaction takes place between the congested node and the source.
The source (sender) infers congestion from indirect evidence.
For example, the source sends packets and waits for acknowledgements from
the receiver. If there is no response for a while, the source assumes there may
be congestion (a small rate-adjustment sketch follows this section).

 Explicit Signaling: In explicit signaling, a node that experiences congestion
explicitly sends a signal either toward the source or toward the destination to
report the congestion. It differs slightly from the choke packet technique: the
choke packet technique sends a separate choke packet to the source, whereas
explicit signaling carries the signal inside a data packet that is already being
sent.

Explicit signaling can occur in two ways:

i. Forward Explicit Signaling: The signal is sent in the direction of the data
flow, i.e., if congestion occurs at a node, the signal travels with the data packet
to the destination/receiver, which is asked to slow down its acknowledgements
to the source.
ii. Backward Explicit Signaling: The signal is sent toward the source, i.e., in the
direction opposite to the data flow, so that the source slows down its
transmission rate if congestion has occurred.
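The sketch below illustrates how a sender might react to the implicit signal of a timeout: treat a missing acknowledgement as possible congestion and back off sharply, then increase the rate gently while acknowledgements keep arriving (the additive-increase/multiplicative-decrease idea). The starting rate, loss probability, and halve-on-timeout rule are illustrative choices, not something prescribed by the text above.

```python
import random

rate = 10                   # packets per round, invented starting value
random.seed(1)

for round_no in range(8):
    ack_received = random.random() > 0.3   # pretend 30% of rounds time out
    if ack_received:
        rate += 1                          # no sign of congestion: increase gently
    else:
        rate = max(1, rate // 2)           # timeout: assume congestion, back off hard
    print(f"round {round_no}: ack={ack_received}, new rate={rate}")
```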
Network Traffic

Network traffic is the amount of data moving across a computer network at any given time.
Network traffic, also called data traffic, is broken down into data packets and sent over a network
before being reassembled by the receiving device or computer.

Network traffic has two directional flows, north-south and east-west. Traffic affects network
quality because an unusually high amount of traffic can mean slow download speeds or spotty
Voice over Internet Protocol (VoIP) connections. Traffic is also related to security because an
unusually high amount of traffic could be the sign of an attack.
Data Packets

When data travels over a network or over the internet, it must first be broken down into smaller
batches so that larger files can be transmitted efficiently. The network breaks down, organizes,
and bundles the data into data packets so that they can be sent reliably through the network and
then opened and read by another user in the network. Each packet takes the best route possible to
spread network traffic evenly.
North-south Traffic

North-south traffic refers to client-to-server traffic that moves between the data center and the
rest of the network (i.e., a location outside of the data center).
East-west Traffic

East-west traffic refers to traffic within a data center, also known as server-to-server traffic.
Types of Network Traffic

To better manage bandwidth, network administrators decide how certain types of traffic are to be
treated by network devices like routers and switches. There are two general categories of network
traffic: real-time and non-real-time.
Real-time Traffic

Traffic deemed important or critical to business operations must be delivered on time and with
the highest quality possible. Examples of real-time network traffic include VoIP,
videoconferencing, and web browsing.
Non-real-time Traffic

Non-real-time traffic, also known as best-effort traffic, is traffic that network administrators
consider less important than real-time traffic. Examples include File Transfer Protocol (FTP) for
web publishing and email applications.
Why Network Traffic Analysis and Monitoring Are Important

Network traffic analysis (NTA) is a technique used by network administrators to examine
network activity, manage availability, and identify unusual activity. NTA also enables admins to
network activity, manage availability, and identify unusual activity. NTA also enables admins to
determine if any security or operational issues exist—or might exist moving forward—under
current conditions. Addressing such issues as they occur not only optimizes the organization's
resources but also reduces the possibility of an attack. As such, NTA is tied to enhanced security.
In particular, traffic analysis and monitoring help administrators to:
1. Identify bottlenecks: Bottlenecks are likely to occur as a result of a spike in the number of users
in a single geographic location.
2. Troubleshoot bandwidth issues: A slow connection can be because a network is not designed to
accommodate an increase in the number of users or amount of activity.
3. Improve visibility of devices on your network: Increased awareness of endpoints can help
administrators anticipate network traffic and make adjustments if necessary.
4. Detect security issues and fix them more quickly: NTA works in real time, alerting admins when
there is a traffic anomaly or possible breach.

QoS:

Quality of service (QoS) is the use of mechanisms or technologies that work on a network to
control traffic and ensure the performance of critical applications with limited network capacity.
It enables organizations to adjust their overall network traffic by prioritizing specific high-
performance applications.

QoS is typically applied to networks that carry traffic for resource-intensive systems. Common
services for which it is required include internet protocol television (IPTV), online gaming,
streaming media, videoconferencing, video on demand (VOD), and Voice over IP (VoIP).

Using QoS in networking, organizations have the ability to optimize the performance of multiple
applications on their network and gain visibility into the bit rate, delay, jitter, and packet rate of
their network. This ensures they can engineer the traffic on their network and change the way that
packets are routed to the internet or other networks to avoid transmission delay. This also ensures
that the organization achieves the expected service quality for applications and delivers expected
user experiences.

As per the QoS meaning, the key goal is to enable networks and organizations to prioritize traffic,
which includes offering dedicated bandwidth, controlled jitter, and lower latency. The
technologies used to ensure this are vital to enhancing the performance of business applications,
wide-area networks (WANs), and service provider networks.
How Does QoS Work?

QoS networking technology works by marking packets to identify service types, then configuring
routers to create separate virtual queues for each application, based on their priority. As a result,
bandwidth is reserved for critical applications or websites that have been assigned priority
access.

QoS technologies provide capacity and handling allocation to specific flows in network traffic.
This enables the network administrator to assign the order in which packets are handled and
provide the appropriate amount of bandwidth to each application or traffic flow.
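To make the idea of per-class virtual queues concrete, here is an illustrative sketch of strict priority queuing: packets are classified, placed into separate queues, and the highest-priority non-empty queue is always served first. The class names, priorities, and packets are invented examples; real routers and switches implement this in their forwarding path, not in application code.

```python
from collections import deque

queues = {
    "voice": deque(),        # highest priority
    "video": deque(),
    "best-effort": deque(),  # lowest priority
}
priority_order = ["voice", "video", "best-effort"]

def classify(packet):
    # A real device would look at DSCP marks, ports, or application signatures.
    return packet.get("class", "best-effort")

def enqueue(packet):
    queues[classify(packet)].append(packet)

def dequeue():
    for cls in priority_order:           # strict priority: serve voice before video, etc.
        if queues[cls]:
            return queues[cls].popleft()
    return None

for pkt in [{"class": "best-effort", "id": 1},
            {"class": "voice", "id": 2},
            {"class": "video", "id": 3}]:
    enqueue(pkt)

while (pkt := dequeue()) is not None:
    print("transmitting", pkt)           # voice first, then video, then best-effort
```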
Types of Network Traffic

Understanding how QoS network software works is reliant on defining the various types of traffic
that it measures. These are:
1. Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.
2. Delay: The time it takes for a packet to go from its source to its end destination. This can often
be affected by queuing delay, which occurs during times of congestion and a packet waits in a
queue before being transmitted. QoS enables organizations to avoid this by creating a priority
queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically occurs due to network
congestion. QoS enables organizations to decide which packets to drop in this event.
4. Jitter: The irregular speed of packets on a network as a result of congestion, which can result in
packets arriving late and out of sequence. This can cause distortion or gaps in audio and video
being delivered.
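As a small illustration of delay and jitter, the sketch below computes per-packet one-way delays from invented send and arrival timestamps and measures jitter as the variation between successive delays. This is a simplified view for intuition, not the exact estimator defined by any standard.

```python
# Invented timestamps (in milliseconds) for five packets: when each was sent
# and when it arrived. The numbers are made up purely to illustrate the idea.
send_times    = [0, 20, 40, 60, 80]
arrival_times = [35, 58, 95, 102, 140]

delays = [a - s for s, a in zip(send_times, arrival_times)]
print("per-packet delay (ms):", delays)            # -> [35, 38, 55, 42, 60]

# Jitter here = how much each delay differs from the previous packet's delay.
jitter = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
print("inter-packet jitter (ms):", jitter)         # -> [3, 17, 13, 18]
print("average jitter (ms):", sum(jitter) / len(jitter))
```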

Getting Started with QoS

Implementing QoS begins with an enterprise identifying the types of traffic that are important to
it, that use high volumes of bandwidth, and/or that are sensitive to latency or packet loss.

This helps the organization understand the needs and importance of each traffic type on its
network and design an overall approach. For example, some organizations may only need to
configure bandwidth limits for specific services, whereas others may need to fully configure
interface and security policy bandwidth limits for all their services, as well as prioritize queuing
critical services relative to traffic rate.

The organization can then deploy policies that classify traffic and ensure the availability and
consistency of its most important applications. Traffic can be classified by port or internet
protocol (IP), or through a more sophisticated approach such as by application or user.

Bandwidth management and queuing tools are then assigned roles to handle traffic flow
specifically based on the classification they received when they entered the network. This allows
for packets within traffic flows to be stored until the network is ready to process them. Priority
queuing can also be used to ensure the necessary availability and minimal latency of network
performance for important applications and traffic. This is so that the network’s most important
activities are not starved of bandwidth by those of lesser priority.

Furthermore, bandwidth management measures and controls traffic flow on the network
infrastructure to ensure it does not exceed capacity and prevent congestion. This includes using
traffic shaping, a rate-limiting technique that optimizes or guarantees performance and increases
usable bandwidth, and scheduling algorithms, which offer several methods for providing
bandwidth to specific traffic flows.
Why is QoS Important?

Traditional business networks operated as separate entities. Phone calls and teleconferences were
handled by one network, while laptops, desktops, servers and other devices connected to another.
They rarely crossed paths, unless a computer used a telephone line to access the internet.

When networks only carried data, speed was not overly critical. But now, interactive applications
carrying audio and video content need to be delivered at high speed, without packet loss or
variations in delivery speed.

QoS is particularly important to guarantee the high performance of critical applications that
require high bandwidth for real-time traffic. For example, it helps businesses to prioritize the
performance of “inelastic” applications that often have minimum bandwidth requirements,
maximum latency limits, and high sensitivity to jitter and latency, such as VoIP and
videoconferencing.

QoS helps businesses prevent the delay of these sensitive applications, ensuring they perform to
the level that users require. For example, lost packets could delay the stream, causing the sound
and video quality of a videoconference call to become choppy and indecipherable.

QoS is increasingly important as network performance requirements adapt to the growing number
of people using them. The latest online applications and services require vast amounts of
bandwidth and network performance, and users demand they offer high performance at all times.
Organizations, therefore, need to deploy techniques and technologies that guarantee the best
possible service.

QoS is also becoming increasingly important as the Internet of Things (IoT) continues to come to
maturity. For example, in the manufacturing sector, machines now leverage networks to provide
real-time status updates on any potential issues. Therefore, any delay in feedback could cause
highly costly mistakes in IoT networking. QoS enables the data stream to take priority in the
network and ensures that the information flows as quickly as possible.

Cities are now filled with smart sensors that are vital to running large-scale IoT projects such as
smart buildings. The data collected and analyzed, such as humidity and temperature data, is often
highly time-sensitive and needs to be identified, marked, and queued appropriately.
What Techniques and Best Practices Are Involved in QoS?

Techniques

There are several techniques that businesses can use to guarantee the high performance of their
most critical applications. These include:

 Prioritization of delay-sensitive VoIP traffic via routers and switches: Many enterprise
networks can become overly congested, which sees routers and switches start dropping packets
as they come in and out faster than they can be processed. As a result, streaming applications
suffer. Prioritization enables traffic to be classified and receive different priorities depending on
its type and destination. This is particularly useful in a situation of high congestion, as packets
with higher priority can be sent ahead of other traffic.
 Resource reservation: The Resource Reservation Protocol (RSVP) is a transport layer protocol
that reserves resources across a network and can be used to deliver specific levels of QoS for
application data streams. Resource reservation enables businesses to divide network resources
by traffic of different types and origins, define limits, and guarantee bandwidth.
 Queuing: Queuing is the process of creating policies that provide preferential treatment to
certain data streams over others. Queues are high-performance memory buffers in routers and
switches, in which packets passing through are held in dedicated memory areas. When a packet
is assigned higher priority, it is moved to a dedicated queue that pushes data at a faster rate,
which reduces the chances of it being dropped. For example, businesses can assign a policy to
give voice traffic priority over the majority of network bandwidth. The routing or switching
device will then move this traffic’s packets and frames to the front of the queue and
immediately transmit them.
 Traffic marking: When applications that require priority over other bandwidth on a network
have been identified, the traffic needs to be marked. This is possible through processes like
Class of Service (CoS), which marks a data stream in the Layer 2 frame header, and
Differentiated Services Code Point (DSCP), which marks a data stream in the Layer 3 packet
header.
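As a small illustration of traffic marking from an application, the sketch below sets the IP TOS byte on a socket so that outgoing datagrams carry a DSCP value. The DSCP value 46 ("Expedited Forwarding") is a common choice for voice, the destination address is an invented example, support for IP_TOS varies by operating system, and whether routers actually honour the mark depends entirely on the network's QoS policy; this is an illustrative sketch, not a complete QoS configuration.

```python
import socket

# DSCP occupies the upper six bits of the TOS byte, so DSCP 46 -> 46 << 2 = 184.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Datagrams sent from this socket now carry the EF marking in the IP header.
sock.sendto(b"marked voice payload", ("203.0.113.10", 5004))
sock.close()
```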
Best Practices

In addition to these techniques, there are also several best practices that organizations should keep
in mind when determining their QoS requirements.

1. Ensure that maximum bandwidth limits at the source interface and security policy are not set
too low to prevent excessive packet discard.
2. Consider the ratio at which packets are distributed between available queues and which queues
are used by which services. This can affect latency levels, queue distribution, and packet
assignment.
3. Only place bandwidth guarantees on specific services. This will avoid the possibility of all traffic
using the same queue in high-volume situations.
4. Configure prioritization for all traffic through either type of service-based priority or security
policy priority, not both. This will simplify analysis and troubleshooting.
5. Try to minimize the complexity of QoS configuration to ensure high performance.
6. To get accurate testing results, use the User Datagram Protocol (UDP), and do not oversubscribe
bandwidth throughput.
Advantages of QoS

The deployment of QoS is crucial for businesses that want to ensure the availability of their
business-critical applications. It is vital for delivering differentiated bandwidth and ensuring data
transmission takes place without interrupting traffic flow or causing packet losses. Major
advantages of deploying QoS include:

1. Unlimited application prioritization: QoS guarantees that businesses’ most mission-critical
applications will always have priority and the necessary resources to achieve high performance.
2. Better resource management: QoS enables administrators to better manage the organization’s
internet resources. This also reduces costs and the need for investments in link expansions.
3. Enhanced user experience: The end goal of QoS is to guarantee the high performance of critical
applications, which boils down to delivering optimal user experience. Employees enjoy high
performance on their high-bandwidth applications, which enables them to be more effective
and get their job done more quickly.
4. Point-to-point traffic management: Managing a network is vital however traffic is delivered, be
it end to end, node to node, or point to point. The latter enables organizations to deliver
customer packets in order from one point to the next over the internet without suffering any
packet loss.
5. Packet loss prevention: Packet loss can occur when packets of data are dropped in transit
between networks. This can often be caused by a failure or inefficiency, network congestion, a
faulty router, loose connection, or poor signal. QoS avoids the potential of packet loss by
prioritizing bandwidth of high-performance applications.
6. Latency reduction: Latency is the time it takes for a network request to go from the sender to
the receiver and for the receiver to process it. This is typically affected by routers taking longer
to analyze information and storage delays caused by intermediate switches and bridges. QoS
enables organizations to reduce latency, or speed up the process of a network request, by
prioritizing their critical application.

Difference Between Integrated Services and Differentiated Services
The main difference between integrated services and differentiated services is
that integrated services involve prior reservation of resources before the
required quality of service is achieved, while differentiated services mark
the packets with a priority and send them into the network without prior
reservation.
QoS refers to Quality of Service. It refers to a set of network technologies that allows
the network to deliver the required results. Additionally, QoS helps to increase the
performance of the network in terms of availability, error rate, latency and throughput.
Furthermore, QoS supports prioritizing network traffic and can also be applied to a
specific router, a server, etc. Therefore, network monitoring systems are typically
deployed as a part of QoS to ensure that the network is performing at the desired
level. Overall, QoS provides two types of services: integrated services and
differentiated services.
What is Integrated Services
Integrated services refer to an architecture that ensures the Quality of Service (QoS)
on a network. Moreover, these services allow the receiver to watch and listen to video
and sound without any interruption. Each router in the network implements integrated
services. Furthermore, every application that requires some kind of guarantee must
make an individual reservation.

Furthermore, the integrated services architecture is implemented through a signalling
protocol, an admission control routine, a classifier, and a packet scheduler. Moreover,
these services require an explicit signalling mechanism to convey information to the
routers so that they can provide the requested resources.

What is Differentiated Services


Differentiated services refer to a multiple service model that can satisfy many
requirements. In other words, it supports multiple mission-critical applications.
Moreover, these services help to minimize the burden of the network devices and also
support the scaling of the network. Some major differentiated services are as follows.

Traffic conditioning – Ensures that the traffic entering the DiffServ domain conforms to
the agreed traffic profile.

Packet classification – Categorizes the packet into a specific group using the traffic
descriptor.

Packet marking – Marks the packet according to its classification, using a specific
traffic descriptor.

Congestion management – Handles queuing and traffic scheduling.

Congestion avoidance – Monitors traffic loads to minimize congestion; it may involve
packet dropping.
