
Congestion Control
The main focus of congestion control and quality of
service is data traffic. In congestion control we try to
avoid traffic congestion. In quality of service, we try to
create an appropriate environment for the traffic. So,
before talking about congestion control and quality of
service, we discuss the data traffic itself.
Traffic descriptors
Three traffic profiles
CONGESTION

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.
Congestion Control Algorithms
• Congestion - the situation in which
too many packets are present in the
subnet.
Causes of Congestion
• Congestion occurs when a router receives data
faster than it can send it
– Insufficient bandwidth
– Slow hosts
– Data simultaneously arriving from multiple
lines destined for the same outgoing line.
• The system is not balanced
– Correcting the problem at one router will
probably just move the bottleneck to another
router.
Congestion Causes More Congestion
– Incoming messages must be placed in queues
• The queues have a finite size
– Overflowing queues will cause packets to be
dropped
– Long queue delays will cause packets to be resent
– Dropped packets will cause packets to be resent
• Senders that are trying to transmit to a congested
destination also become congested
– They must continually resend packets that have
been dropped or that have timed-out
– They must continue to hold
outgoing/unacknowledged
messages in memory.
Congestion Control versus Flow Control
• Flow control
– controls point-to-point traffic between
sender and receiver
– e.g., a fast host sending to a slow host
• Congestion Control
– controls the traffic throughout the network
CONGESTION CONTROL

Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).
Congestion Control
• When one part of the subnet (e.g., one or more routers in an area) becomes overloaded, congestion results.
• Because routers are receiving packets faster than they can forward them, one of two things must happen:
– The subnet must prevent additional packets from entering the congested region until those already present can be processed.
– The congested routers can discard queued packets to make room for those that are arriving.
Two Categories of Congestion Control

• Open loop solutions


– Attempt to prevent problems rather
than correct them
– Does not utilize runtime feedback from
the system
• Closed loop solutions
– Uses feedback (measurements of system
performance) to make corrections at
runtime.
General Principles of Congestion Control
• Analogy with Control Theory:
– Open-loop, and
– Closed-loop approach.

• Open-loop approach
– The problem is solved at design time.
– Once the system is running, midcourse corrections are NOT made.
– Tools for doing open-loop control:
• Deciding when to accept new traffic,
• Deciding when to discard packets, and which ones.
• Making scheduling decisions at various points in the network.
• Note that all these decisions are made without regard to the current state of the network.
General Principles of Congestion Control

• Closed-loop approach
– It is based on the principle of a feedback loop. The approach has three parts when applied to congestion control:
1. Monitor the system to detect when and where congestion occurs,
2. Pass this information to places where action can be taken,
3. Adjust system operation to correct the problem.
Congestion control categories
Warning Bit/ Backpressure
• A special bit in the packet header is set by
the router to warn the source when
congestion is detected.
• The bit is copied and piggy-backed on the
ACK and sent to the sender.
• The sender monitors the number of ACK
packets it receives with the warning bit
set and adjusts its transmission rate
accordingly.

Backpressure method for alleviating congestion
Choke Packets
• A more direct way of telling the source
to slow down.
• A choke packet is a control
packet generated at a
congested node and
transmitted to restrict traffic flow.
• The source, on receiving the choke
packet must reduce its transmission
rate by a certain percentage.
• An example of a choke packet is the
ICMP Source Quench Packet.
Choke packet
Open-Loop Control
• Network performance is guaranteed to all
traffic flows that have been admitted into
the network
• Initially for connection-oriented networks
• Key Mechanisms
– Admission Control
– Policing
– Traffic Shaping
– Traffic Scheduling
Admission Control
• Flows negotiate a traffic contract with the network
• Specify requirements:
– Peak, average, and minimum bit rate
– Maximum burst size
– Delay and loss requirements
• Network computes the resources needed ("effective" bandwidth)
• If the flow is accepted, the network allocates resources to ensure the QoS is delivered as long as the source conforms to the contract
(Figure: typical bit rate, in bits/second, demanded by a variable-bit-rate information source over time, with the peak rate and average rate marked)
Policing
• Network monitors traffic flows continuously to
ensure they meet their traffic contract
• When a packet violates the contract, network can
discard or tag the packet giving it lower priority
• If congestion occurs, tagged packets are discarded
first
• Leaky Bucket Algorithm is the most commonly
used policing mechanism
– Bucket has specified leak rate for average contracted rate
– Bucket has specified depth to accommodate variations in
arrival
rate
– Arriving packet is conforming if it does not result in overflow
Traffic Shaping
• Another method of congestion control is
to “shape” the traffic before it enters the
network.
• Traffic shaping controls the rate at
which packets are sent (not just how
many). Used in ATM and Integrated
Services networks.
• At connection set-up time, the sender
and carrier negotiate a traffic pattern
(shape).
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket
The Leaky Bucket Algorithm
• The Leaky Bucket Algorithm used to
control rate in a network. It is
implemented as a single-server queue
with constant service time. If the bucket
(buffer) overflows then packets are
discarded.
The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.
Leaky Bucket Algorithm, cont.
• The leaky bucket enforces a constant output
rate (average rate) regardless of the
burstiness of the input. Does nothing when
input is idle.
• The host injects one packet per clock tick onto
the network. This results in a uniform flow of
packets, smoothing out bursts and reducing
congestion.
• When packets are the same size (as in ATM
cells), the one packet per tick is okay. For
variable length packets though, it is better to
allow a fixed number of bytes per tick. E.g.
1024 bytes per tick will allow one 1024-byte
packet or two 512-byte packets or four 256-
byte packets on 1 tick.
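As a small sketch of the byte-counting rule just described (the class, the 1024-byte tick budget, and the example packet sizes are assumptions for illustration, not part of the text):

from collections import deque

class LeakyBucketShaper:
    # Byte-counting leaky bucket: at most max_bytes_per_tick may leave per clock tick.
    def __init__(self, max_bytes_per_tick=1024):
        self.max_bytes_per_tick = max_bytes_per_tick
        self.queue = deque()              # buffered packets waiting to be shaped

    def enqueue(self, packet_size):
        self.queue.append(packet_size)    # a real shaper would drop when the buffer is full

    def tick(self):
        # Called once per clock tick; returns the sizes of the packets sent this tick.
        budget, sent = self.max_bytes_per_tick, []
        while self.queue and self.queue[0] <= budget:
            size = self.queue.popleft()
            budget -= size
            sent.append(size)
        return sent

shaper = LeakyBucketShaper()
for size in (1024, 512, 512, 256):        # a burst arrives all at once
    shaper.enqueue(size)
print(shaper.tick())    # [1024]      one 1024-byte packet on this tick
print(shaper.tick())    # [512, 512]  two 512-byte packets on the next tick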
Leaky bucket
Leaky bucket implementation
Note
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by
averaging the data rate. It may drop the packets if the bucket is
full.
Note
The token bucket allows bursty traffic at a regulated maximum rate.
Leaky Bucket Traffic Shaper
(Figure: incoming traffic enters a buffer of size N; a server removes packets periodically, producing shaped traffic)
• Buffer incoming packets
• Play out periodically to conform to parameters
• Surges in arrivals are buffered & smoothed out
• Possible packet loss due to buffer overflow
• Too restrictive, since conforming traffic does not need to be completely smooth
Token Bucket Algorithm
• In contrast to the LB, the Token Bucket
Algorithm, allows the output rate to vary,
depending on the size of the burst.
• In the TB algorithm, the bucket holds
tokens.
To transmit a packet, the host must capture
and destroy one token.
• Tokens are generated by a clock at the rate of
one token every t sec.
• Idle hosts can capture and save up tokens
(up to the max. size of the bucket) in order
to send larger bursts later.
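A matching sketch of the token bucket, under assumed values for the token rate, bucket capacity, and packet sizes (one token per byte here), shows how saved-up tokens let an idle host send a burst later:

class TokenBucket:
    # Tokens accumulate at `rate` per clock tick, up to `capacity`; sending spends tokens.
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens generated per clock tick
        self.capacity = capacity      # maximum number of tokens the bucket can hold
        self.tokens = capacity        # assume an idle host has filled the bucket

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, packet_bytes):
        # Transmit only if enough tokens are available (here, one token per byte).
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                  # non-conforming: wait (shaping) or drop/tag (policing)

tb = TokenBucket(rate=200, capacity=1000)
print(tb.try_send(800))   # True  -- saved-up tokens let the burst go out immediately
print(tb.try_send(800))   # False -- only 200 tokens remain; the host must wait for more ticks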
The Token Bucket Algorithm
(a) Before. (b) After.
Token bucket
Token Bucket Traffic Shaper
(Figure: tokens arrive periodically into a token bucket of size K; incoming traffic is held in a buffer of size N, and the server admits a packet into the network, as shaped traffic, only when sufficient tokens are available)
• An incoming packet must have sufficient tokens before admission into the network
• Token rate regulates transfer of packets
• If sufficient tokens are available, packets enter the network without delay
Leaky Bucket vs Token Bucket
• LB discards packets; TB does
not. TB discards tokens.
• With TB, a packet can only be
transmitted if there are enough tokens
to cover its length in bytes.
• LB sends packets at an average rate.
TB allows for large bursts to be sent
faster by speeding up the output.
• TB allows saving up tokens
(permissions) to send large bursts. LB
does not allow saving.

Load Shedding
• When buffers become full, routers simply discard packets.
• Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer.
• For a file transfer, for example, we cannot discard older packets, since this would cause a gap in the received data.
• For real-time voice or video, it is probably better to throw away old data and keep new packets.
• Get the application to mark packets with a discard priority.
What is IP?

IP stands for Internet Protocol. An IP address is assigned to each device connected to a network, and each device uses its IP address for communication. The address also serves as an identifier, since it is used to identify the device on the network. IP defines the technical format of the packets. IP and TCP are usually combined, and together they are referred to as TCP/IP, which creates a virtual connection between the source and the destination.
We can also define an IP address as a numeric address assigned to each device on a network, so that each device on the network can be identified uniquely. To facilitate the routing of packets, the TCP/IP protocol suite uses a 32-bit logical address known as IPv4 (Internet Protocol version 4).

An IP address consists of two parts, i.e., the first one is a network


address, and the other one is a host address.
There are two types of IP addresses:
•IPv4
•IPv6
What is IPv4?

IPv4 is version 4 of IP. It is the current version and the most commonly used form of IP address. It is a 32-bit address written as four numbers separated by dots (periods). This address is unique for each device.

For example, 66.94.29.13


The above example represents the IP address in which each group of
numbers separated by periods is called an Octet. Each number in an octet
is in the range from 0-255. This address can produce 4,294,967,296
possible unique addresses.

Computers do not understand IP addresses in the standard numeric format, because they only understand numbers in binary form, where each digit is either 1 or 0.

An IPv4 address consists of four sets, and each set represents an octet. The bits in each octet represent a number, and each bit can be either 1 or 0. If a bit is 1, then the place value it represents is counted; if the bit is 0, that place value is not counted.
Representation of 8 Bit Octet

The above representation shows the structure of 8- bit octet.


Now, we will see how to obtain the binary representation of the above
IP address, i.e., 66.94.29.13
Step 1: First, we find the binary number
of 66.

To obtain 66, we put 1 under 64 and 2 as the sum of 64 and 2 is equal to


66 (64+2=66), and the remaining bits will be zero, as shown above.
Therefore, the binary bit version of 66 is 01000010.
Step 2: Now, we calculate the binary number of 94.

To obtain 94, we put 1 under 64, 16, 8, 4, and 2 as the sum of these
numbers is equal to 94, and the remaining bits will be zero. Therefore,
the binary bit version of 94 is 01011110.
Step 3: The next number is 29.

To obtain 29, we put 1 under 16, 8, 4, and 1 as the sum of these numbers is
equal to 29, and the remaining bits will be zero. Therefore, the binary bit
version of 29 is 00011101.
Step 4: The last number is 13.

To obtain 13, we put 1 under 8, 4, and 1 as the sum of these numbers


is equal to 13, and the remaining bits will be zero. Therefore, the binary
bit version of 13 is 00001101.
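The four steps above can be checked with a short Python snippet (an illustrative sketch; the helper name is ours):

def octets_to_binary(address):
    # Convert a dotted-decimal IPv4 address to its four 8-bit binary octets.
    return [format(int(octet), "08b") for octet in address.split(".")]

print(octets_to_binary("66.94.29.13"))
# ['01000010', '01011110', '00011101', '00001101']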
Drawback of IPv4
Currently, the population of the world is about 7.6 billion. Many users have more than one device connected to the internet, and private companies also rely on the internet. IPv4 provides roughly 4 billion addresses, which is not enough for every device connected to the internet on the planet. Various techniques were invented, such as variable-length subnet masks, network address translation, port address translation, address classes, and inter-domain routing, to conserve IP address space and slow down the depletion of IP addresses. In these techniques, private IP addresses are translated to a shared public IP, so that many devices can use the internet through one public address. But this was still not efficient enough, so it gave rise to the development of the next generation of IP addresses, i.e., IPv6.
What is IPv6?

IPv4 provides about 4 billion addresses, and its developers thought these would be enough, but they were wrong. IPv6 is the next generation of IP addresses. The main difference between IPv4 and IPv6 is the address size: IPv4 is a 32-bit address, whereas IPv6 is a 128-bit address written in hexadecimal. IPv6 provides a much larger address space and has a simpler header than IPv4.
There are transition strategies for moving from IPv4 to IPv6:
• Dual stacking: allows a device to run both versions, IPv4 and IPv6, at the same time.
• Tunneling: IPv6 packets are carried across an IPv4 network inside IPv4 packets in order to reach another IPv6 network.
• Network Address Translation: translation allows communication between hosts that use different versions of IP.
The hexadecimal address contains both digits and letters. Because of this, IPv6 can produce over 340 undecillion (3.4 x 10^38) addresses.

IPv6 is a 128-bit hexadecimal address made up of 8 sets of 16 bits


each, and these 8 sets are separated by a colon. In IPv6, each
hexadecimal character represents 4 bits. So, we need to convert 4 bits
to a hexadecimal number at a time
Address format
The address format of IPv4:

The address format of IPv6:

The above diagram shows the address format of IPv4 and IPv6. An IPv4 is a
32-bit decimal address. It contains 4 octets or fields separated by 'dot', and
each field is 8-bit in size. The number that each field contains should be in the
range of 0-255. Whereas an IPv6 is a 128-bit hexadecimal address. It
contains 8 fields separated by a colon, and each field is 16-bit in size.
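A quick way to see both formats side by side is Python's standard ipaddress module (an illustrative snippet; the example addresses are arbitrary):

import ipaddress

v4 = ipaddress.ip_address("66.94.29.13")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, int(v4).bit_length() <= 32)   # 4 True  -- fits in 32 bits
print(v6.version)                               # 6       -- a 128-bit address
print(v6.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001 -- 8 colon-separated 16-bit fields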
Differences between IPv4 and IPv6
The Network Layer

1
Review
ISO/OSI’s network model
How many layers have the OSI’s model divided the
network architecture into?

Seven layers
APPLICATION
PRESENTATION

What are they from the bottom SESSION

to the top? TRANSPORT


NETWORK
DATA LINK
PHYSICAL 2
Description of the network layer
a) The network layer is concerned with getting packets from the source all the way to the destination.
To achieve its goals, the network layer must know about the topology of the communication subnet and choose appropriate paths through it. It must also take care to choose routes that avoid overloading some of the communication lines and routers while leaving others idle.
The Network Layer

1. Network Layer Design Issues


2. Routing Algorithms
3. The Network Layer in the Internet

4
Network Layer Design Issues

a) Store-and-Forward Packet Switching


b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
e) Comparison of Virtual-Circuit and Datagram Subnets

5
Store-and-Forward Packet Switching

(Figure 5-1: the environment of the network layer protocols, showing the customer's equipment and the carrier's routers)

• This equipment is used as follows:
a) A host with a packet to send transmits it
to the nearest router, either on its own
LAN or over a point-to-point link to the
carrier. The packet is stored there until
it has fully arrived so the checksum can
be verified. Then it is forwarded to the
next router along the path until it
reaches the destination host, where it is
delivered. This mechanism is store-and-
forward packet switching.

7
Network Layer Design Issues

a) Store-and-Forward Packet Switching


b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
e) Comparison of Virtual-Circuit and Datagram Subnets

8
Services Provided to the Transport Layer
What kind of services the network layer provides to
the transport layer ?

• The network layer services have been designed with the


following goals:
1. The services should be independent of the router tech-
nology.
2. The transport layer should be shielded from the num-
ber, type, and topology of the routers present.
3. The network addresses made available to the transport
layer should use a uniform numbering plan, even across
LANs and WANs.
The routers' job is moving packets around and nothing else. In their view, the subnet is inherently unreliable, no matter how it is designed. Therefore, the hosts should accept the fact that the network is unreliable and do error control (i.e., error detection and correction) and flow control themselves. So the network service should be connectionless.

The Internet offers connectionless network-layer service.
The subnet should provide a reliable, connection-oriented service. In this view, quality of service is the dominant factor, and without connections in the subnet, quality of service is very difficult to achieve, especially for real-time traffic such as voice and video.

ATM networks offer connection-oriented network-layer service.
Network Layer Design Issues

a) Store-and-Forward Packet Switching


b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
e) Comparison of Virtual-Circuit and Datagram Subnets

12
a) Two different organizations are possible, depending
on the type of service offered.
b) If connectionless service is offered, packets are
injected into the subnet individually and routed
independently of each other. No advance setup is
needed. In this context, the packets are frequently
called datagrams and the subnet is called a
datagram subnet.
c) If connection-oriented service is used, a path from
the source router to the destination router must be
established before any data packets can be sent.
This connection is called a VC (virtual circuit) and
the subnet is called a virtual-circuit subnet.

13
Implementation of Connectionless Service

Routing within a datagram subnet.
Network Layer Design Issues

a) Store-and-Forward Packet Switching


b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
e) Comparison of Virtual-Circuit and Datagram Subnets

15
a) For connection-oriented service, we need a virtual-circuit subnet.
b) The idea behind virtual circuits is to avoid having to choose a new route for every packet sent. Instead, when a connection is established, a route from the source machine to the destination machine is chosen as part of the connection setup and stored in tables inside the routers. That route is used for all traffic flowing over the connection, exactly the same way that the telephone system works. When the connection is released, the virtual circuit is also terminated.
c) With connection-oriented service, each packet carries an identifier telling which virtual circuit it belongs to.
Implementation of Connection-Oriented Service

Routing within a virtual-circuit subnet.


17
Network Layer Design Issues

a) Store-and-Forward Packet Switching


b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
e) Comparison of Virtual-Circuit and Datagram Subnets

18
Comparison of Virtual-Circuit and Datagram Subnets
(Figure 5-4: comparison of datagram and virtual-circuit subnets)
a) Inside the subnet, several trade-offs exist between virtual circuits and
datagrams.

b) One trade-off is between router memory space and bandwidth.


Virtual circuits allow packets to contain circuit numbers instead of
full destination addresses. If the packets tend to be fairly short, a full
destination address in every packet may represent a significant
amount of overhead and hence, wasted bandwidth. The price paid
for using virtual circuits internally is the table space within the
routers. Depending upon the relative cost of communication circuits
versus router memory, one or the other may be cheaper.

20
c) Another trade-off is setup time versus
address parsing time

Using virtual circuits requires a setup phase,


which takes time and consumes resources.
However, figuring out what to do with a data
packet in a virtual-circuit subnet is easy: the
router just uses the circuit number to index into
a table to find out where the packet goes. In a
datagram subnet, a more complicated lookup
procedure is required to locate the entry for the
destination.

21
a) Virtual circuits have some advantages in
guaranteeing quality of service and
avoiding congestion within the subnet.
b) The loss of a communication line is fatal
to virtual circuits using it but can be
easily compensated for if datagrams are
used. Datagrams also allow the routers
to balance the traffic throughout the
subnet, since routes can be changed
partway through a long sequence of
packet transmissions.
22
The Network Layer
Routing Algorithms

23
Description of Routing Algorithms
1. Definition: The routing algorithm is that part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on.

2. Properties of a routing algorithm: correctness, simplicity, robustness, stability, fairness, and optimality.
Description of Routing Algorithms
1)Robustness:Once a major network comes on the
air, it may be expected to run continuously for years
without system- wide failures. During that period
there will be hardware and software failures of all
kinds. Hosts, routers, and lines will fail repeatedly,
and the topology will change many times. The
routing algorithm should be able to cope with
changes in the topology and traffic without requiring
all jobs in all hosts to be aborted and the network to
be rebooted every time some router crashes.

25
Description of Routing Algorithms
2) Stability: It is also an important goal for the routing algorithm. There exist routing algorithms that never converge to equilibrium, no matter how long they run. A stable algorithm reaches equilibrium and stays there.
3) Fairness and optimality may sound obvious, but
as it turns out, they are often contradictory goals.

There is enough traffic between A and A', between B and


B', and between C and C' to saturate the horizontal links.
To maximize the total flow, the X to X' traffic should be
shut off altogether. Unfortunately, X and X' may not see it
that way. Evidently, some compromise between global
efficiency and fairness to individual connections is
needed.
Description of Routing Algorithms
Two Categories of algorithm: nonadaptive and adaptive.
1) Nonadaptive algorithms do not base their routing
decisions on measurements or estimates of the current
traffic and topology. Instead, the choice of the route to
use to get from I to J is computed in advance, off-line,
and downloaded to the routers when the network is booted.
(Figure: an example static routing table, listing the outgoing line to use for each destination)
This procedure is sometimes called static routing.
Description of Routing Algorithms
2) Adaptive algorithms, in contrast, change their
routing decisions to reflect changes in the topology,
and usually the traffic as well.
(Figure: example routing tables before and after an adaptive algorithm reacts to a change in topology or traffic)
This procedure is sometimes called dynamic routing.
Routing Algorithms

• The Optimality Principle


• Shortest Path Routing
• Flooding
• Distance Vector Routing
• Link State Routing
• Hierarchical Routing
• Broadcast Routing

30
The Optimality Principle
The Optimality Principle: if router J is on the optimal
path from router I to router K, then the optimal path from
J to K also falls along the same route.

I J
K

31
The Optimality Principle
The set of optimal routes from all sources to a given
destination form a tree rooted at the destination. Such a
tree is called a sink tree.
Figure: (a) A subnet. (b) A sink tree for router B.

32
The Optimality Principle
Note: A sink tree is not necessarily unique; other trees
with the same path lengths may exist.
The goal of all routing algorithms is to discover and use
the sink trees for all routers.

33
Shortest Path Routing
a) A technique to study routing algorithms: The idea is to build a graph
of the subnet, with each node of the graph representing a router and
each arc of the graph representing a communication line (often called
a link).
b) To choose a route between a given pair of routers, the algorithm just
finds the shortest path between them on the graph.
c) One way of measuring path length is the number of hops. Another
metric is the geographic distance in kilometers . Many other metrics
are also possible. For example, each arc could be labeled with the
mean queuing and transmission delay for some standard test packet
as determined by hourly test runs.
d) In the general case, the labels on the arcs could be computed as a
function of the distance, bandwidth, average traffic, communication
cost, mean queue length, measured delay, and other factors. By
changing the weighting function, the algorithm would then compute
the ''shortest'' path measured according to any one of a number of
criteria or to a combination of criteria.
34
Shortest Path Routing

Dijkstra's algorithm: Each node is labeled (in parentheses) with its distance from the source node along the best known path. Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and paths are found, the labels may change, reflecting better paths. A label may be either tentative or permanent.
Initially, all labels are tentative. When it is discovered that a label represents the shortest possible path from the source to that node, it is made permanent and never changed thereafter.
Figure: the first five steps used in computing the shortest path from A to D. The arrows indicate the working node.
The shortest path from A to D is A-B-E-F-H-D.
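A compact sketch of Dijkstra's algorithm with tentative and permanent labels follows. The example graph and its weights are invented for illustration (they are not the subnet from the figure), chosen so that the best A-to-D path runs A-B-E-F-H-D:

import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: cost}}. Returns the permanent (shortest) distance labels.
    dist = {node: float("inf") for node in graph}   # all labels start tentative at infinity
    dist[source] = 0
    permanent = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)                            # this label can never improve; make it permanent
        for v, cost in graph[u].items():
            if d + cost < dist[v]:                  # a better tentative label has been found
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

example = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "E": 2},
    "E": {"B": 2, "F": 2},
    "F": {"E": 2, "H": 2},
    "H": {"F": 2, "D": 2},
    "G": {"A": 6, "D": 9},
    "D": {"H": 2, "G": 9},
}
print(dijkstra(example, "A")["D"])   # 10, reached via A-B-E-F-H-D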
Flooding
a) Flooding algorithm:Every incoming packet is sent out on
every outgoing line except the one it arrived on.
b) Flooding obviously generates vast numbers of duplicate packets, in
fact, an infinite number unless some measures are taken to damp the
process.
c) One such measure is to have a hop counter contained in the header
of each packet, which is decremented at each hop, with the packet
being discarded when the counter reaches zero.
d) An alternative technique for damming the flood is to keep track of
which packets have been flooded, to avoid sending them out a
second time.

Flooding
a) A variation of flooding that is slightly more practical is
selective flooding.In this algorithm the routers do not
send every incoming packet out on every line, only on
those lines that are going approximately in the right
direction.
b) Applications of flooding algorithm:
1. military applications
2. distributed database applications
3. wireless networks
4. as a metric against which other routing algorithms
can be compared
38
Distance Vector Routing
It is a dynamic routing algorithm.
Distance vector routing algorithms
operate by having each router maintain a
table (i.e, a vector) giving the best known
distance to each destination and which
line to use to get there. These tables are
updated by exchanging information with
the neighbors. (also named the distributed
Bellman-Ford routing algorithm and the
Ford- Fulkerson algorithm )
Distance Vector Routing
a) Table content: In distance vector routing, each router maintains
a routing table indexed by, and containing one entry for, each
router in the subnet. This entry contains two parts: the preferred
outgoing line to use for that destination and an estimate of the
time or distance to that destination.
b) Table updating method: Assume that the router knows the delay to each of its neighbors. Once every T msec, each router sends each neighbor a list of its estimated delays to each destination, and receives a similar list from each neighbor. Imagine that one of these tables has just come in from neighbor X, with Xi being X's estimate of how long it takes to get to router i. If the router knows that the delay to X is m msec, it also knows that it can reach router i via X in Xi + m msec. By performing this calculation for each neighbor, a router can find out which estimate seems the best and use that estimate and the corresponding line in its new routing table. Note that the old routing table is not used in the calculation.
• Part (a) shows a subnet. The first four columns of part (b) show the delay vectors received from the neighbors of router J. Suppose that J has measured or estimated its delay to its neighbors A, I, H, and K as 8, 10, 12, and 6 msec, respectively.
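The Xi + m calculation can be sketched directly from the description above. The per-destination delay vectors below are invented example values (the figure's full table is not reproduced); only the update rule is the point:

def distance_vector_update(neighbor_delays, neighbor_vectors, destinations):
    # neighbor_delays:  {neighbor X: delay m to that neighbor}
    # neighbor_vectors: {neighbor X: {destination i: X's estimated delay Xi}}
    # Returns {destination: (best_delay, via_neighbor)} -- the new routing table.
    table = {}
    for dest in destinations:
        table[dest] = min(
            (neighbor_vectors[x][dest] + m, x)      # cost via X is Xi + m
            for x, m in neighbor_delays.items()
        )
    return table

# Router J's measured delays to its neighbors A, I, H, and K (from the example).
delays = {"A": 8, "I": 10, "H": 12, "K": 6}
# Invented delay vectors received from those neighbors, for a single destination G.
vectors = {"A": {"G": 18}, "I": {"G": 31}, "H": {"G": 6}, "K": {"G": 31}}
print(distance_vector_update(delays, vectors, ["G"]))   # {'G': (18, 'H')} -- 6 + 12 via H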
Link State Routing
It is a dynamic routing algorithm. The idea behind link state routing can be stated as five parts. Each router must do the following:

1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
In effect, the complete topology and all delays are experimentally measured and distributed to every router. Then Dijkstra's algorithm can be run to find the shortest path to every other router.
Link State Routing
1. Learning about the Neighbors
• It accomplishes this goal by sending a special HELLO packet on
each point-to-point line. The router on the other end is expected to
send back a reply telling who it is. These names must be globally
unique because when a distant router later hears that three routers
are all connected to F, it is essential that it can determine whether
all three mean the same F.
2. Measuring Line Cost
• The most direct way to determine this delay is to send over the line
a special ECHO packet that the other side is required to send back
immediately. By measuring the round-trip time and dividing it by
two, the sending router can get a reasonable estimate of the delay.
For even better results, the test can be conducted several times, and
the average used. Of course, this method implicitly assumes the
delays are symmetric, which may not always be the case.
43
Link State Routing
3. Building Link State Packets
• The packet starts with the identity of the sender,
followed by a sequence number and age (to be
described later), and a list of neighbors. For each
neighbor, the delay to that neighbor is given.
(a) A subnet. (b) The link state packets for this
subnet

44
• Building the link state packets is easy; the hard part is determining when to build them. One possibility is to build them periodically, that is, at regular intervals. Another possibility is to build them when some significant event occurs, such as a line or neighbor going down or coming back up again or changing its properties appreciably.
4. Distributing the Link State Packets
c) The basic distribution algorithm:The fundamental idea is to use flooding
to distribute the link state packets. To keep the flood in check, each
packet contains a sequence number that is incremented for each new
packet sent. Routers keep track of all the (source router, sequence) pairs
they see. When a new link state packet comes in, it is checked against
the list of packets already seen. If it is new, it is forwarded on all lines
except the one it arrived on. If it is a duplicate, it is discarded. If a
packet with a sequence number lower than the highest one seen so far
ever arrives, it is rejected as being obsolete since the router has more
recent data.
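A minimal sketch of that (source router, sequence) bookkeeping, with assumed field names and an assumed line.send() call:

def handle_link_state_packet(packet, arrival_line, lines, highest_seq_seen):
    # Flood a link state packet, keeping the flood in check with (source, sequence) bookkeeping.
    src, seq = packet["source"], packet["seq"]
    if seq <= highest_seq_seen.get(src, -1):
        return                          # duplicate, or obsolete (lower than the highest seen): discard
    highest_seq_seen[src] = seq         # record the newest sequence number from this source
    for line in lines:
        if line is not arrival_line:
            line.send(packet)           # forward on all lines except the one it arrived on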
Link State Routing
a) First problem with this algorithm:if the sequence numbers wrap around,
confusion will reign. The solution here is to use a 32-bit sequence number.
With one link state packet per second, it would take 137 years to wrap
around, so this possibility can be ignored.
b) Second problem :if a router ever crashes, it will lose track of its sequence
number. If it starts again at 0, the next packet will be rejected as a duplicate.
c) Third problem :if a sequence number is ever corrupted and 65,540 is received
instead of 4 (a 1-bit error), packets 5 through 65,540 will be rejected as
obsolete, since the current sequence number is thought to be 65,540.

46
Link State Routing
a) The solution to all these problems is to include the age of each packet
after the sequence number and decrement it once per second. When the
age hits zero, the information from that router is discarded.

5. Computing the New Routes


• Once a router has accumulated a full set of link state packets, it can
construct the entire subnet graph because every link is represented.
• Now Dijkstra's algorithm can be run locally to construct the shortest
path to all possible destinations.

47
Hierarchical Routing
• The routers are divided into what we will call
regions, with each router knowing all the details
about how to route packets to destinations
within its own region, but knowing nothing
about the internal structure of other regions.
• For huge networks, a two-level hierarchy may be
insufficient; it may be necessary to group the regions
into clusters, the clusters into zones, the zones into
groups, and so on, until we run out of names for
aggregations.

48
• The full routing table for
router 1A has 17 entries, as
shown in (b). When routing
is done hierarchically, as in
(c), there are entries for all
the local routers as before,
but all other regions have
been condensed into a
single router, so all traffic
for region 2 goes via the 1B
-2A line, but the rest of the
remote traffic goes via the
1C -3B line. Hierarchical
routing has reduced the
table from 17 to 7 entries.
49
• Unfortunately, these gains in space are not free. There
is a penalty to be paid, and this penalty is in the form
of increased path length. For example, the best route
from 1A to 5C is via region 2, but with hierarchical
routing all traffic to region 5 goes via region 3,
because that is better for most destinations in region 5.

50
Broadcast Routing
1)Sending a packet to all destinations simultaneously is
called broadcasting.
2)The source simply sends a distinct packet to each
destination. Not only is the method wasteful of
bandwidth, but it also requires the source to have a
complete list of all destinations.
3) Flooding.
The problem with flooding as a broadcast technique is
that it generates too many packets and consumes too
much bandwidth.

51
Broadcast Routing
It is multi-destination routing.
If this method is used, each packet contains either a list of
destinations or a bit map indicating the desired destinations.
When a packet arrives at a router, the router checks all the
destinations to determine the set of output lines that will be
needed. (An output line is needed if it is the best route to at least
one of the destinations.) The router generates a new copy of the
packet for each output line to be used and includes in each packet
only those destinations that are to use the line. In effect, the
destination set is partitioned among the output lines. After a
sufficient number of hops, each packet will carry only one
destination and can be treated as a normal packet.

52
Broadcast Routing
A broadcast algorithm makes explicit use of the sink
tree for the router initiating the broadcast—or any
other convenient spanning tree for that matter.
A spanning tree is a subset of the subnet that includes
all the routers but contains no loops.
If each router knows which of its lines belong to the
spanning tree, it can copy an incoming broadcast packet
onto all the spanning tree lines except the one it arrived
on.

53
Broadcast Routing
It is reverse path forwarding.
When a broadcast packet arrives at a router, the router
checks to see if the packet arrived on the line that is
normally used for sending packets to the source of the
broadcast. If so, there is an excellent chance that the
broadcast packet itself followed the best route from the
router and is therefore the first copy to arrive at the
router. This being the case, the router forwards copies
of it onto all lines except the one it arrived on. If,
however, the broadcast packet arrived on a line other
than the preferred one for reaching the source, the
packet is discarded as a likely duplicate.
54
Broadcast Routing
How does the reverse path algorithm work?

Reverse path forwarding. (a) A subnet. (b) A sink tree. (c) The tree built by reverse path forwarding.
The Network Layer

1. Network Layer Design Issues


2. Routing Algorithms
3. The Network Layer in the Internet

56
The Network Layer in the Internet

• The IP Protocol
• IP Addresses
• Internet Control Protocols

57
The IP Protocol
An IP datagram consists of a header part and a text
part. The header has a 20-byte fixed part and a
variable length optional part.

58
IP Addresses
• Every host and router on the Internet has an IP address, which encodes
its network number and host number.
• All IP addresses are 32 bits long. It is important to note that an IP
address does not actually refer to a host. It really refers to a network
interface, so if a host is on two networks, it must have two IP
addresses.
• IP addresses were originally divided into five categories (classes). The default network masks are 255.0.0.0 for class A, 255.255.0.0 for class B, and 255.255.255.0 for class C.
IP Addresses
• The values 0 and -1 (all 1s) have special meanings. The value
0 means this network or this host. The value of -1 is used as a
broadcast address to mean all hosts on the indicated network.

60
IP Addresses
• All the hosts in a network must have the same network
number. This property of IP addressing can cause problems as
networks grow.
• The problem is the rule that a single class A, B, or C address
refers to one network, not to a collection of LANs.
• The solution is to allow a network to be split into several
parts for internal use but still act like a single network to
the outside world.

61
IP Addresses
• To implement subnetting, the main router needs a subnet
mask that indicates the split between network + subnet
number and host.
• For example, if the university has a B address(130.50.0.0) and
35 departments, it could use a 6-bit subnet number and a 10-bit
host number, allowing for up to 64 Ethernets, each with a
maximum of 1022 hosts.

• The subnet mask can be written as 255.255.252.0. An


alternative notation is /22 to indicate that the subnet mask is 22
bits long.
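Python's ipaddress module can confirm the arithmetic in this example (an illustrative check; 130.50.4.0 is an arbitrary subnet within the class B block):

import ipaddress

subnet = ipaddress.ip_network("130.50.4.0/22")
print(subnet.netmask)              # 255.255.252.0
print(subnet.prefixlen)            # 22  -> 16 network bits + 6 subnet bits
print(subnet.num_addresses - 2)    # 1022 usable hosts (all-0s and all-1s host numbers excluded)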
Internet Control Protocols
1. The Internet Control Message Protocol
• The operation of the Internet is monitored closely by the routers. When something unexpected occurs, the event is reported by ICMP (Internet Control Message Protocol), which is also used to test the Internet.
• Each ICMP message type is encapsulated in an IP packet.
Internet Control Protocols
2. ARP (The Address Resolution Protocol)
• Most hosts at companies and universities are attached to a LAN by an interface board that only understands LAN addresses.
• The question: how do IP addresses get mapped onto data link layer addresses, such as Ethernet?
• Let us start out by seeing how a user on host 1 sends a packet to a user on host 2.
Internet Control Protocols
1) The upper layer software on host 1 builds a packet with 192.31.65.5 in the Destination address field and gives it to the IP software to transmit.
2) The IP software can look at the address and see that the destination is on its own network, but it needs some way to find the destination's Ethernet address.
• Host 1 outputs a broadcast packet onto the Ethernet asking: who owns IP address 192.31.65.5? The broadcast will arrive at every machine on Ethernet 192.31.65.0, and each one will check its IP address.
• Host 2 alone will respond with its Ethernet address (E2). In this way, host 1 learns that IP address 192.31.65.5 is on the host with Ethernet address E2.
• The protocol used for asking this question and getting the reply is called ARP (Address Resolution Protocol).
3) The IP software on host 1 builds an Ethernet frame addressed to E2, puts the IP packet for 192.31.65.5 in the payload field, and dumps it onto the Ethernet.

4) The Ethernet board of host 2 detects this frame, recognizes it as a frame for itself, scoops it up, and causes an interrupt. The Ethernet driver extracts the IP packet from the payload and passes it to the IP software, which sees that it is correctly addressed and processes it.
Internet Control Protocols
3. RARP, BOOTP, and DHCP
b) Given an Ethernet address, what is the corresponding IP address? In particular, this problem occurs when a diskless workstation is booted.
c) The first solution devised was to use RARP (Reverse Address Resolution Protocol). This protocol allows a newly-booted workstation to broadcast its Ethernet address and say: my 48-bit Ethernet address is 14.04.05.18.01.25. Does anyone out there know my address? The RARP server sees this request, looks up the Ethernet address in its configuration files, and sends back the corresponding IP address.
Internet Control Protocols
• A disadvantage of RARP is that it uses a destination address of all 1s (limited broadcasting) to reach the RARP server. However, such broadcasts are not forwarded by routers, so a RARP server is needed on each network.
• Unlike RARP, BOOTP uses UDP messages, which are forwarded over routers. It also provides a diskless workstation with additional information, including the IP address of the file server holding the memory image, the IP address of the default router, and the subnet mask to use.
• A serious problem with BOOTP is that it requires manual configuration of tables mapping IP addresses to Ethernet addresses.
• DHCP allows both manual and automatic address assignment.
• Like RARP and BOOTP, DHCP is based on the idea of a special server that assigns IP addresses to hosts asking for one; this server need not be on the same LAN as the requesting host.
To find its IP address, a newly-booted machine broadcasts a DHCP DISCOVER packet. The DHCP relay agent on its LAN intercepts all DHCP broadcasts and sends the DISCOVER packet to the DHCP server, possibly on a distant network, as a unicast. The only piece of information the relay agent needs is the IP address of the server.
Process-to-Process Delivery
UDP, TCP
PROCESS-TO-PROCESS DELIVERY

The transport layer is responsible for process-to-


process delivery—the delivery of a packet, part of a
message, from one process to another. Two
processes communicate in a client/server
relationship.
Note

The transport layer is responsible for


process-to-process delivery.
Types of data deliveries
Port numbers
IP addresses versus port numbers
Socket address
Multiplexing and demultiplexing
Reliable Vs Unreliable
Error control
Position of UDP, TCP, and SCTP in TCP/IP suite
USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is called a


connectionless, unreliable transport protocol. It does
not add anything to the services of IP except to provide
process-to-process communication instead of host-to-
host communication.
User datagram format
Note

UDP length
= IP length – IP header’s length
Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol; it creates a


virtual connection between two TCPs to send data. In
addition, TCP uses flow and error control mechanisms
at the transport level.
TCP Services

1. Process to Process Communication


2. Stream Delivery Service
3. Sending and Receiving Buffers
4. Full Duplex Communication
5. Connection -Oriented Service
6. Reliable Service
Stream delivery
Sending and receiving buffers
TCP segments
TCP Features

1. Numbering System
2. Flow Control
3. Error Control
4. Congestion Control
TCP Features

1. Numbering System

Byte Number
(TCP generates a random number between 0 and 2^32 - 1)
Sequence Number
Note

The bytes of data being transferred in


each connection are numbered by TCP.
The numbering starts with a randomly
generated number.
Example 23.3

Suppose a TCP connection is transferring a file of 5000 bytes. The first byte is numbered 10,001. What are the sequence numbers for each segment if data are sent in five segments, each carrying 1000 bytes?

The following shows the sequence number for each segment:
Segment 1 -> sequence number: 10,001 (range: 10,001 to 11,000)
Segment 2 -> sequence number: 11,001 (range: 11,001 to 12,000)
Segment 3 -> sequence number: 12,001 (range: 12,001 to 13,000)
Segment 4 -> sequence number: 13,001 (range: 13,001 to 14,000)
Segment 5 -> sequence number: 14,001 (range: 14,001 to 15,000)
Note

The value in the sequence number field


of a segment defines the
number of the first data byte
contained in that segment.
Note

The value of the acknowledgment field


in a segment defines
the number of the next byte a party
expects to receive.
The acknowledgment number is
cumulative.
TCP segment format
Figure 23.17 Control field
Table 23.3 Description of flags in the control field
Connection establishment using three-way handshaking
Note

A SYN segment cannot carry data, but it


consumes one sequence number.
Note

A SYN + ACK segment cannot


carry data, but does consume one
sequence number.
Note

An ACK segment, if carrying no data,


consumes no sequence number.
Figure 23.19 Data transfer
Figure 23.20 Connection termination using three-way handshaking
Note

The FIN segment consumes one


sequence number if it does
not carry data.
The ultimate goal of the transport layer is to provide efficient, reliable, and
cost- effective service to its users, normally processes in the application
layer. To achieve this goal, the transport layer makes use of the services
provided by the network layer. The hardware and/or software within the
transport layer that does the work is called the transport entity.
Position of transport layer
(Figure: the transport layer sits above the network layer and provides process-to-process delivery)
Services Provided to the Upper Layers
The bottom four layers can be seen as the transport service provider,
whereas the upper layer(s) are the transport service user.

To allow users to access the transport service, the transport layer must
provide some operations to application programs, that is, a transport
service interface. Each transport service has its own interface.

The transport service is similar to the network service, but there are also
some important differences. The main difference is that the network
service is intended to model the service offered by real networks. Real
networks can lose packets, so the network service is generally unreliable.
The (connection-oriented) transport service, in contrast, is reliable.

A second difference between the network service and transport service


is whom the services are intended for. The network service is used only
by the transport entities. Few users write their own transport entities, and
thus few users or programs ever see the bare network service. In
contrast, many programs see the transport primitives. Consequently, the
transport service must be convenient and easy to use.
Transport Service
Primitives

The primitives for a simple transport service.

The nesting of TPDUs, packets, and frames.


• A machine provides a variety of services, and to differentiate between these services, each service is assigned a unique port number.
• Port numbers less than 1024 are considered well-known ports and are reserved for standard services.
• In the transport layer, two processes communicate with each other via sockets. A socket acts as an end point of the communication path between the processes.
• The IP address and port number put together define the socket address.
Berkeley Sockets
• Sockets were first released as part of the Berkeley UNIX 4.2BSD software distribution in 1983.
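As a concrete illustration of socket addresses (IP address + port), here is a minimal UDP exchange using Python's standard socket API; the port 50007 is an arbitrary choice for the example:

import socket

# Server: bind a socket to an (IP address, port) pair -- the socket address.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 50007))

# Client: send a datagram to the server's socket address.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 50007))

data, addr = server.recvfrom(1024)
print(data, addr)    # b'hello' ('127.0.0.1', <client's ephemeral port>)

client.close()
server.close()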
Elements of Transport Protocols
The transport service is implemented by a transport protocol used
between the two transport entities.

Though the transport protocols resemble the Data Link Protocols,


significant differences are present due to the major dissimilarities
between the environments in which the two protocols operate.

A physical channel exists in DLL, where as it is replaced by the entire


subnet for Transport Layer

No explicit addressing of destinations is required in DLL, where it is


required for Transport layer

A final difference between the data link and transport layers is one of
amount rather than of kind. Buffering and flow control are needed in both
layers, but the presence of a large and dynamically varying number of
connections in the transport layer may require a different approach than
we used in the data link layer
Addressing
• When an application process wishes to set up a connection to a
remote application process, it must specify which one to connect to.
The method normally used is to define transport addresses to which
processes can listen for connection requests. In the Internet, these
end points are called ports.

Application processes, both


clients and servers, can attach
themselves to a TSAP to
establish a connection to a
remote TSAP. These connections
run through NSAPs on each host
Initial connection protocol

Instead of every conceivable server listening at a well-known TSAP, each machine that wishes to offer services to remote users has a special process server that acts as a proxy for less heavily used servers. It listens to a set of ports at the same time, waiting for a connection request. This server is called inetd on UNIX systems.
Connection Establishment
The problem with a simple communication request &
communication accepted exchange is that the network can
lose, store, and duplicate packets. This behavior causes
serious complications.

The Crux of the problem is the existence of delayed duplicates.


Various ways to attack the problem are:

 Using a throw-away transport address


 Give each connection a connection identifier

 A mechanism to kill off aged packets

 Restricted subnet design

 Hop Counter

 Time Stamping
Connection Establishment using Three-Way Handshake

Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST.
(a) Normal operation.
(b) Old CONNECTION REQUEST appearing out of nowhere.
(c) Duplicate CONNECTION REQUEST and duplicate ACK.
Connection Release

Abrupt disconnection with loss of data.


Connection Release

Four protocol scenarios for releasing a connection.
(a) Normal case of a three-way handshake. (b) Final ACK lost.
(c) Response lost. (d) Response lost and subsequent DRs lost.


Flow Control and Buffering

(a) Chained fixed-size buffers. (b) Chained variable-sized buffers.


(c) One large circular buffer per connection.
• The optimum trade-off between source buffering and
destination buffering depends on the type of traffic carried
by the connection. For low-bandwidth bursty traffic, such
as that produced by an interactive terminal the sender
must retain a copy of the TPDU until it is acknowledged.

• On the other hand, for file transfer and other high-


bandwidth traffic, it is better if the receiver does dedicate
a full window of buffers, to allow the data to flow at
maximum speed.

• Thus, for low-bandwidth bursty traffic, it is better to buffer


at the sender, and for high bandwidth smooth traffic, it is
better to buffer at the receiver.
Multiplexing

If only one network address is available on a host, all transport connections


on that machine have to use it. When a TPDU comes in, some way is
needed to tell which process to give it to. This situation is called upward
multiplexing.

Multiplexing can also be useful in the transport layer for another reason.
Suppose, for example, that a subnet uses virtual circuits internally and
imposes a maximum data rate on each one. If a user needs more
bandwidth than one virtual circuit can provide, a way out is to open multiple
network connections and distribute the traffic among them on a round-
robin basis, called downward multiplexing.
• The User Datagram Protocol (UDP) is a transport
layer protocol defined for use with the IP network
layer protocol. It is defined by RFC 768 written by
John Postel. It provides a best-effort datagram
service to an End System (IP host).
• The service provided by UDP is an unreliable service that provides no guarantee of delivery and no protection from duplication.
• UDP provides a minimal, unreliable, best-effort,
message-passing transport to applications and
upper- layer protocols
Introduction to UDP
The Internet protocol suite supports a connectionless transport protocol,
UDP (User Datagram Protocol).
The UDP header.

UDP provides a way for applications to send encapsulated IP datagrams and send them
without having to establish a connection. UDP is described in RFC 768.

UDP transmits segments consisting of an 8-byte header followed by the payload. The two
ports serve to identify the end points within the source and destination machines. When a
UDP packet arrives, its payload is handed to the process attached to the destination port.

The source port is primarily needed when a reply must be sent back to the source.
The UDP length field includes the 8-byte header and the data.
UDP Checksum: a checksum to verify that the end-to-end data has not been corrupted by routers or bridges in the network or by the processing in an end system. The algorithm used to compute the checksum is the standard Internet checksum algorithm. The checksum allows the receiver to verify that it was the intended destination of the packet, because it covers the IP addresses, port numbers, and protocol number, and it verifies that the packet is not truncated or padded, because it covers the length field. Therefore, this protects an application against receiving corrupted payload data in place of, or in addition to, the data that was sent. In cases where this check is not required, the value 0x0000 is placed in this field, in which case the data is not checked by the receiver.
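The standard Internet checksum (ones'-complement sum of 16-bit words) can be sketched as follows. This is a generic illustration, not the exact UDP pseudo-header handling; the sample bytes are an IPv4 header with its checksum field zeroed, and the expected result is shown in the comment:

def internet_checksum(data: bytes) -> int:
    # Ones'-complement sum of 16-bit words, as used by IP, UDP, and TCP.
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in (end-around carry)
    return ~total & 0xFFFF                         # ones' complement of the final sum

print(hex(internet_checksum(b"\x45\x00\x00\x3c\x1c\x46\x40\x00\x40\x06"
                            b"\x00\x00\xac\x10\x0a\x63\xac\x10\x0a\x0c")))  # 0xb1e6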
Remote Procedure Call

The idea behind RPC is to make a remote procedure call look as much as
possible like a local one. In the simplest form, to call a remote procedure,
the client program must be bound with a small library procedure, called
the client stub, that represents the server procedure in the client's
address space. Similarly, the server is bound with a procedure called the
server stub. These procedures hide the fact that the procedure call from
the client to the server is not local.
The Real-Time Transport Protocol

 The basic function of RTP is to multiplex several real-time data streams onto a
single stream of UDP packets. RTP format contains several features to help
receivers work with multimedia information.
 Each RTP packet in the stream is given a number one higher than its
predecessor. This allows the destination to determine if any packets were
missing.
 RTP has no acknowledgements and no mechanism to request retransmissions.
 Each RTP payload may contain multiple samples, and they may be coded
any way that the application wants.
The RTP header

• version (V): 2 bits: This field identifies the version of RTP. The version defined as of RFC 1889 is 2.
• padding (P): 1 bit: If the padding bit is set, the packet contains one or more additional padding octets at the end which are not part of the payload.
• extension (X): 1 bit: If the extension bit is set, the fixed header is followed by exactly one header extension.
• CSRC count (CC): 4 bits: The CSRC count contains the number of CSRC
identifiers that follow the fixed header.
• marker (M): 1 bit: Marker bit is used by specific applications to serve a purpose
of its own. Used to mark the start of a video frame, start of a word in audio
channel.
• payload type (PT): 7 bits: This field identifies the format (e.g. encoding) of the
RTP payload and determines its interpretation by the application.
• sequence number: 16 bits: The sequence number increments by one for each
RTP data packet sent, and may be used by the receiver to detect packet loss
and to restore packet sequence.
• timestamp: 32 bits: The Timestamp is produced by the stream’s source to note
when the first sample in the packet was made. This value can help reduce timing
variability called jitter at the receiver by decoupling the playback from the packet
arrival time.
• SSRC: 32 bits: The SSRC field identifies the synchronization source. It is the
method used to multiplex and demultiplex data streams onto a single stream of
UDP packets
• CSRC list: 0 to 15 items, 32 bits each: The CSRC list identifies the contributing
sources for the payload contained in this packet.
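A minimal sketch of packing the 12-byte fixed RTP header described above, using Python's struct module (payload type 0 is assumed to mean PCMU audio here; no CSRC list or header extension is included):

import struct

def pack_rtp_header(seq, timestamp, ssrc, payload_type, marker=0,
                    version=2, padding=0, extension=0, csrc_count=0):
    # Byte 0: V (2 bits), P (1 bit), X (1 bit), CC (4 bits); byte 1: M (1 bit), PT (7 bits).
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | payload_type
    # Then the 16-bit sequence number, 32-bit timestamp, and 32-bit SSRC.
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

header = pack_rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD, payload_type=0)
assert len(header) == 12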
Real-time Transport Control Protocol
• RTP has a little sister protocol called RTCP (Realtime Transport
Control Protocol).
• It handles feedback, synchronization, and the user interface but
does not transport any data. The first function can be used to
provide feedback on delay, jitter, bandwidth, congestion, and other
network properties to the sources
• RTCP also handles interstream synchronization. The problem is
that different streams may use different clocks, with different
granularities and different drift rates. RTCP can be used to keep
them in sync.
• RTCP provides a way for naming the various sources (e.g., in
ASCII text). This information can be displayed on the receiver's
screen to indicate who is talking at the moment.
• TCP (Transmission Control Protocol) was specifically
designed to provide a reliable end-to-end byte stream
over an unreliable internetwork.

• TCP was formally defined in RFC 793.

• Datagrams may arrive in the wrong order. It is also up to
TCP to reassemble them into messages in the proper
sequence. In short, TCP must furnish the reliability that
most users want and that IP does not provide.
The TCP Service Model
• TCP service is obtained by both the sender and receiver creating end
points, called sockets. Each socket has a socket number (address)
consisting of the IP address of the host and a 16-bit number local to
that host, called a port.

• A socket may be used for multiple connections at the same time. In
other words, two or more connections may terminate at the same
socket. Connections are identified by the socket identifiers at both ends,
that is, (socket1, socket2). No virtual circuit numbers or other identifiers
are used.

• Port numbers below 1024 are called well-known ports and are
reserved for standard services.
TCP Protocol
Figure: the sending application writes data through its socket into the TCP send buffer;
TCP carries data segments to the receive buffer at the other end and returns ACK
segments, from which the receiving application reads.

• Provides a reliable, in-order, byte stream abstraction:
– Recover lost packets and detect/drop duplicates
– Detect and drop corrupted packets
– Preserve order in byte stream, no “message boundaries”
– Full-duplex: bi-directional data flow in same connection
• Flow and congestion control:
– Flow control: sender will not overwhelm receiver
– Congestion control: sender will not overwhelm the network
– Sliding window flow control
– Send and receive buffers
– Congestion control done via adaptive flow control window
size
The TCP Service Model
• All TCP connections are full duplex and point-to-point. Full
duplex means that traffic can go in both directions at the
same time. Point-to-point means that each connection has
exactly two end points. TCP does not support multicasting
or broadcasting.

• A TCP connection is a byte stream, not a message stream.
Message boundaries are not preserved end to end.

(a) Four 512-byte segments sent as separate IP datagrams. (b) The 2048 bytes of data
delivered to the application in a single READ call.
The TCP Protocol
• Every byte on a TCP connection has its own 32-bit
sequence number.

• The sending and receiving TCP entities exchange data in
the form of segments. A TCP segment consists of a
fixed 20-byte header (plus an optional part) followed by
zero or more data bytes. The TCP software decides how
big segments should be.

• Two limits restrict the segment size. First, each segment,
including the TCP header, must fit in the 65,515-byte IP
payload. Second, each network has a maximum transfer
unit, or MTU, and each segment must fit in the MTU.
The TCP Segment Header
• The Source port and Destination port fields identify the
local end points of the connection. A port plus its host's IP
address forms a 48-bit unique end point. The source and
destination end points together identify the connection.
• This connection identifier is called a 5 tuple because it
consists of five pieces of information: the protocol (TCP),
source IP and source port, and destination IP and
destination port.
• The Sequence number and Acknowledgement number
fields perform their usual functions. The latter specifies the
next byte expected, not the last byte correctly received.
Both are 32 bits long because every byte of data is
numbered in a TCP stream.
• The TCP header length tells how many 32-bit words are
contained in the TCP header.
• CWR and ECE are used to signal congestion when ECN
(Explicit Congestion Notification) is used. ECE is set to
signal an ECN-Echo to a TCP sender to tell it to slow
down when the TCP receiver gets a congestion
indication from the network. CWR is set to signal
Congestion Window Reduced from the TCP sender to
the TCP receiver so that it knows the sender has slowed
down and can stop sending the ECN-Echo.

• URG is set to 1 if the Urgent pointer is in use. The Urgent
pointer is used to indicate a byte offset from the current
sequence number at which urgent data are to be found.

• The ACK bit is set to 1 to indicate that the
Acknowledgement number is valid.

• The PSH bit indicates PUSHed data. The receiver is
hereby kindly requested to deliver the data to the
application upon arrival and not buffer it until a full buffer
has been received.

• The RST bit is used to abruptly reset a connection that
has become confused due to a host crash or some other
reason.

• The SYN bit is used to establish connections. The
connection request has SYN = 1 and ACK = 0. The
connection reply does bear an acknowledgement,
however, so it has SYN = 1 and ACK = 1.

• The FIN bit is used to release a connection. It specifies
that the sender has no more data to transmit.
• Flow control in TCP is handled using a variable-sized
sliding window. The Window size field tells how many
bytes may be sent starting at the byte acknowledged.
• A Checksum is also provided for extra reliability. It
checksums the header, the data, and a conceptual
pseudoheader in exactly the same way as UDP.
• The Options field provides a way to add extra facilities not
covered by the regular header. Many options have been
defined and several are commonly used.
– A widely used option is the one that allows each host to specify the
MSS(Maximum Segment Size) it is willing to accept.
– The timestamp option carries a timestamp sent by the sender
and echoed by the receiver.
– The SACK (Selective ACKnowledgement) option lets a
receiver tell a sender the ranges of sequence numbers that it has
received
The TCP Segment Header

The pseudoheader included in the TCP checksum.
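The fixed header layout described above can be made concrete with a small parser; the sketch below (Python, assuming a raw 20-byte header with options ignored) extracts the ports, sequence and acknowledgement numbers, header length, flag bits, window size, checksum, and urgent pointer:

import struct

def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4      # TCP header length is given in 32-bit words
    flags = offset_flags & 0x00FF              # CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack, "header_len": header_len,
        "CWR": bool(flags & 0x80), "ECE": bool(flags & 0x40),
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }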
TCP Connection Establishment
• Connections are established in TCP by means of the three-way
handshake. To establish a connection, one side, say, the server,
passively waits for an incoming connection by executing the LISTEN
and ACCEPT primitives.
• The other side, say, the client, executes a CONNECT primitive,
specifying the IP address and port to which it wants to connect, the
maximum TCP segment size it is willing to accept, and optionally some
user data (e.g., a password). The CONNECT primitive sends a TCP
segment with the SYN bit on and ACK bit off and waits for a response.
• When this segment arrives at the destination, the TCP entity there
checks to see if there is a process that has done a LISTEN on the port
given in the Destination port field. If not, it sends a reply with the RST
bit on to reject the connection.
(a) TCP connection establishment in the normal case. (b)
Call collision
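The LISTEN, ACCEPT, and CONNECT primitives map directly onto the standard socket calls; in the Python sketch below (the host name and port are placeholders), the kernel carries out the SYN / SYN+ACK / ACK exchange when connect() and accept() are called:

import socket

# Server side: passive open (LISTEN / ACCEPT).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))          # port 9000 chosen for the example
server.listen(1)                        # wait passively for incoming SYNs
conn, addr = server.accept()            # returns once the three-way handshake completes

# Client side (normally another host or process): active open (CONNECT).
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("server.example.com", 9000))   # kernel sends SYN, expects SYN+ACK, replies with ACK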
TCP Transmission Policy
A window probe is a 1-byte segment that the
sender may transmit to force the receiver to
reannounce the next byte expected and the
window size.

Delayed acknowledgements are an
optimization in which acknowledgements and
window updates are delayed for up to 500
msec in the hope of acquiring some data on
which to hitch a free ride.

Window management in TCP.
TCP Transmission Policy
Nagle's algorithm is a way to reduce the
bandwidth wasted by a sender that sends
multiple short packets (e.g., 41-byte packets
containing 1 byte of data). When data come
into the sender in small pieces, just send the
first piece and buffer all the rest until the first
piece is acknowledged. Then send all the
buffered data in one TCP segment and start
buffering again until the next segment is
acknowledged.

Silly window syndrome is a problem that
occurs when data are passed to the sending
TCP entity in large blocks, but an interactive
application on the receiving side reads data
only 1 byte at a time.
• Clark's solution is to prevent the receiver from
sending a window update for 1 byte. Instead, it is
forced to wait until it has a decent amount of
space available and advertise that instead.

• Nagle's algorithm and Clark's solution to the silly
window syndrome are complementary. Nagle was
trying to solve the problem caused by the sending
application delivering data to TCP a byte at a time.
Clark was trying to solve the problem of the
receiving application sucking the data up from
TCP a byte at a time.

• Both solutions are valid and can work together. The
goal is for the sender not to send small segments
and the receiver not to ask for them.
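Nagle's algorithm is implemented inside the TCP stack and is normally on; an interactive application that must push every small write immediately can turn it off per socket. A minimal Python sketch (the peer address is a placeholder):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("server.example.com", 9000))
# Disable Nagle's algorithm so small writes are sent without waiting for outstanding ACKs.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.sendall(b"x")    # sent immediately instead of being buffered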
TCP Congestion Control

(a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a
high-capacity receiver.
TCP Congestion Control
• To deal with the two problems of receivers capacity and network
capacity, each sender maintains two windows: the window the receiver
has granted and a second window, the congestion window.
• Each reflects the number of bytes the sender may transmit. The number
of bytes that may be sent is the minimum of the two windows.
• When a connection is established, the sender initializes the congestion
window to the size of the maximum segment in use on the connection.
It then sends one maximum segment. Each burst acknowledged
doubles the congestion window.
• The congestion window keeps growing exponentially until either a
timeout occurs or the receiver's window is reached. This algorithm is
called slow start.
• Internet congestion control algorithm uses a third parameter, the
threshold, initially 64 KB, in addition to the receiver and congestion
windows. When a timeout occurs, the threshold is set to half of the
current congestion window, and the congestion window is reset to one
maximum segment.
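The growth rule can be seen in a toy simulation; the sketch below (plain Python, with made-up numbers) doubles the congestion window each round-trip time until a timeout or the threshold, then grows linearly, never exceeding the receiver's window:

MSS = 1024                     # maximum segment size, in bytes
RECEIVER_WINDOW = 64 * 1024    # window granted by the receiver

def simulate(rounds, timeout_at=None):
    cwnd, threshold = MSS, 64 * 1024
    for rtt in range(rounds):
        print(f"RTT {rtt}: cwnd = {cwnd} bytes, threshold = {threshold}")
        if rtt == timeout_at:
            threshold = cwnd // 2      # timeout: threshold set to half the current window,
            cwnd = MSS                 # congestion window reset to one maximum segment
            continue
        if cwnd < threshold:
            cwnd *= 2                  # slow start: exponential growth
        else:
            cwnd += MSS                # beyond the threshold: linear growth
        cwnd = min(cwnd, RECEIVER_WINDOW)

simulate(rounds=12, timeout_at=6)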
TCP Congestion Control

An example of the Internet congestion algorithm.
TCP Timer Management
• Retransmission timer: When a segment is sent, a
retransmission timer is started. If the segment is acknowledged
before the timer expires, the timer is stopped. If, on the other
hand, the timer goes off before the acknowledgement comes in,
the segment is retransmitted (and the timer started again).

• Persistence timer: This timer is designed to prevent a deadlock
situation where the sender keeps waiting for a window update from
the receiver, which is lost. When the persistence timer goes off, the
sender transmits a probe to the receiver. The response to the
probe gives the window size.

• Keepalive timer: When a connection has been idle for a long
time, the keepalive timer may go off to cause one side to check
whether the other side is still there. If it fails to respond, the
connection is terminated.
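As a toy model only (not how a real TCP stack is written), the retransmission timer can be pictured as a one-shot timer that is started when a segment is sent and cancelled when its acknowledgement arrives; the resend callback here is a placeholder supplied by the caller:

import threading

class RetransmissionTimer:
    def __init__(self, rto_seconds, resend):
        self.rto = rto_seconds
        self.resend = resend          # called with the segment if the timer expires
        self._timer = None

    def segment_sent(self, segment):
        self._timer = threading.Timer(self.rto, self.resend, args=(segment,))
        self._timer.start()           # started when the segment is sent

    def ack_received(self):
        if self._timer:
            self._timer.cancel()      # acknowledgement arrived before expiry: stop the timer

# The keepalive timer has a direct socket-level counterpart:
#   sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)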
Wireless TCP and UDP
Splitting a TCP connection into two connections.

The advantage of this scheme, called indirect TCP, is that both connections are
now homogeneous. Timeouts on the first connection can slow the sender
down, whereas timeouts on the second one can speed it up.
Transport Layer
• The transport layer is the fourth layer from the top.
•The main role of the transport layer is to provide the communication
services directly to the application processes running on different
hosts.
•The transport layer provides a logical communication between
application processes running on different hosts. Although the
application processes on different hosts are not physically connected,
application processes use the logical communication provided by the
transport layer to send the messages to each other.
•The transport layer protocols are implemented in the end systems but
not in the network routers.
• A computer network provides more than one protocol to the network
applications. For example, TCP and UDP are two transport layer
protocols that provide a different set of services to the application layer.
• All transport layer protocols provide a multiplexing/demultiplexing service.
They may also provide other services such as reliable data transfer,
bandwidth guarantees, and delay guarantees.
•Each of the applications in the application layer has the ability to send a
message by using TCP or UDP. The application communicates by using
either of these two protocols. Both TCP and UDP will then communicate
with the internet protocol in the internet layer. The applications can read
and write to the transport layer. Therefore, we can say that
communication is a two-way process.
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the
data link layer. The data link layer provides the services within a single
network while the transport layer provides the services across an
internetwork made up of many networks. The data link layer controls
the physical layer while the transport layer controls all the lower
layers.
The services provided by the transport layer protocols can be
divided into five categories:
•End-to-end delivery
•Addressing
•Reliable delivery
•Flow control
•Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination.
Therefore, it ensures the end-to-end delivery of an entire message from
a source to the destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost
and damaged packets.
The reliable delivery has four aspects:
•Error control
•Sequence control
•Loss control
•Duplication control
Error Control

The primary role of reliability is error control. In reality, no transmission can
guarantee 100 percent error-free delivery. Therefore, transport layer protocols are
designed to provide error-free transmission.
The data link layer also provides an error handling mechanism, but it ensures
only node-to-node error-free delivery. However, node-to-node reliability does
not ensure end-to-end reliability.
The data link layer checks for errors only on each individual link. If an error is
introduced inside one of the routers, it will not be caught by the data link layer,
which only detects errors introduced between the beginning and end of a link.
Therefore, the transport layer performs end-to-end error checking to ensure that
the packet has arrived correctly.
Sequence Control
•The second aspect of the reliability is sequence control which is
implemented at the transport layer.
•On the sending end, the transport layer is responsible for ensuring that
the packets received from the upper layers can be used by the lower
layers. On the receiving end, it ensures that the various segments of a
transmission can be correctly reassembled.
Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures
that all the fragments of a transmission arrive at the destination, not
some of them. On the sending end, all the fragments of transmission are
given sequence numbers by a transport layer. These sequence
numbers allow the receiver’s transport layer to identify the missing
segment.
Duplication Control
Duplication Control is the fourth aspect of reliability. The transport
layer guarantees that no duplicate data arrive at the destination.
Sequence numbers are used to identify the lost packets; similarly,
it allows the receiver to identify and discard duplicate segments.
Flow Control
Flow control is used to prevent the sender from overwhelming the
receiver. If the receiver is overloaded with too much data, it discards
packets and asks for their retransmission. This increases network
congestion and thus reduces system performance. The transport layer is
responsible for flow control. It uses the sliding window protocol, which
makes data transmission more efficient and controls the flow of data so
that the receiver does not become overwhelmed. The sliding window
protocol is byte oriented rather than frame oriented.
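A minimal byte-oriented sketch of the sliding window idea (the send and wait_for_ack callbacks are placeholders for whatever transmission and acknowledgement machinery sits underneath):

def sliding_window_send(data: bytes, window: int, mss: int, send, wait_for_ack):
    base = 0            # oldest byte not yet acknowledged
    next_to_send = 0    # next byte to transmit
    while base < len(data):
        # Keep transmitting while fewer than `window` bytes are in flight.
        while next_to_send < len(data) and next_to_send - base < window:
            segment = data[next_to_send:next_to_send + mss]
            send(next_to_send, segment)
            next_to_send += len(segment)
        # Slide the window forward to the highest cumulative acknowledgement.
        base = wait_for_ack()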
Multiplexing
The transport layer uses the multiplexing to improve transmission
efficiency.
Multiplexing can occur in two ways:
• Upward multiplexing: Upward multiplexing means multiple transport layer
connections use the same network connection. To make transmission more
cost-effective, the transport layer sends several transmissions bound for the
same destination along the same path; this is achieved through upward
multiplexing.
•Downward multiplexing: Downward multiplexing means one transport
layer connection uses the multiple network connections. Downward
multiplexing allows the transport layer to split a connection among
several paths to improve the throughput. This type of multiplexing is used
when networks have a low or slow capacity.
Addressing
•According to the layered model, the transport layer interacts with the
functions of the session layer. Many protocols combine session,
presentation, and application layer protocols into a single layer known
as the application layer. In these cases, delivery to the session layer
means the delivery to the application layer. Data generated by an
application on one machine must be transmitted to the correct
application on another machine. In this case, addressing is provided by
the transport layer.
•The transport layer provides the user address which is specified as a
station or port. The port variable represents a particular TS user of a
specified station known as a Transport Service access point (TSAP).
Each station has only one transport entity.
•The transport layer protocols need to know which upper-layer protocols
are communicating.
Computer Network Security
Computer network security consists of measures taken by business or some organizations to monitor and
prevent unauthorized access from the outside attackers.
Different approaches to computer network security management have different requirements depending on
the size of the computer network. For example, a home office requires basic network security while large
businesses require high maintenance to prevent the network from malicious attacks.
Network Administrator controls access to the data and software on the network. A network administrator
assigns the user ID and password to the authorized person.
Aspects of Network Security:

Following are the desirable properties to achieve secure communication:
• Privacy: Privacy means that both the sender and the receiver expect confidentiality. The transmitted
message should reach only the intended receiver and remain opaque to all other users. Only the sender
and receiver should be able to understand the transmitted message, since eavesdroppers can intercept it.
Therefore, there is a requirement to encrypt the message so that it cannot be read if intercepted. This
aspect of confidentiality is commonly required to achieve secure communication.
• Message Integrity: Data integrity means that the data must arrive at the receiver exactly as it was sent.
There must be no changes in the data content during transmission, whether malicious or accidental, while
in transit. As more and more monetary exchanges take place over the internet, data integrity becomes
ever more crucial. Data integrity must be preserved for secure communication.
• End-point authentication: Authentication means that the receiver is sure of the sender's identity, i.e.,
that no imposter has sent the message.

• Non-Repudiation: Non-repudiation means that the receiver must be able to prove that the received
message has come from a specific sender. The sender must not be able to deny sending a message
that he or she sent. The burden of proving the identity falls on the receiver. For example, if a customer
sends a request to transfer money from one account to another, then the bank must have proof that the
customer requested the transaction.
Privacy

The basic approach to achieving privacy has not changed for thousands of years: the message must be
encrypted, i.e., rendered opaque to all unauthorized parties. A good encryption/decryption technique is
used to achieve privacy to some extent. This technique ensures that an eavesdropper cannot understand
the contents of the message.
Encryption/Decryption
Encryption: Encryption means that the sender converts the original information into another form and sends
the unintelligible message over the network.
Decryption: Decryption reverses the Encryption process in order to transform the message back to the
original form.
The data which is to be encrypted at the sender site is known as plaintext, and the encrypted data is known
as ciphertext. The data is decrypted at the receiver site.
There are two types of Encryption/Decryption techniques:
• Privacy with secret key Encryption/Decryption
• Privacy with public key Encryption/Decryption
Secret Key Encryption/Decryption technique
• In the Secret Key Encryption/Decryption technique, the same key is used by both parties, i.e., the sender
and receiver.
• The sender uses the secret key and an encryption algorithm to encrypt the data; the receiver uses the same
key and a decryption algorithm to decrypt the data.
• In the Secret Key Encryption/Decryption technique, the algorithm used for encryption is the inverse of the
algorithm used for decryption. It means that if the encryption algorithm uses a combination of addition and
multiplication, then the decryption algorithm uses a combination of subtraction and division.
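As a toy illustration of the shared-key idea only (a plain XOR cipher, nothing like DES and not secure), the same key applied with the same algorithm both encrypts and decrypts:

def xor_crypt(message: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the repeating key; applying it twice restores the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

shared_key = b"s3cr3t"                                     # held by both sender and receiver
ciphertext = xor_crypt(b"transfer 100 to account 42", shared_key)
plaintext = xor_crypt(ciphertext, shared_key)
assert plaintext == b"transfer 100 to account 42"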
Data Encryption Standard (DES)
•The Data Encryption Standard (DES) was designed by IBM and adopted by the U.S. government as the
standard encryption method for nonmilitary and nonclassified use.
•The Data Encryption Standard is a standard used for encryption, and it is a form of Secret Key
Cryptography.
Advantage

Efficient: Secret key algorithms are more efficient, as they take less time to encrypt a message than
public key encryption algorithms do. The reason is that the key size is small. For this reason, secret key
algorithms are mainly used for encryption and decryption.
Disadvantages of Secret Key Encryption

The Secret Key Encryption/Decryption technique has the following disadvantages:

• Each pair of users must have its own secret key. If N people in the world want to use this method, then
N(N-1)/2 secret keys are needed. For example, for one million people, about 500 billion secret keys are
required.
•The distribution of keys among different parties can be very difficult. This problem can be resolved by
combining the Secret Key Encryption/Decryption with the Public Key Encryption/Decryption algorithm.
Public Key Encryption/Decryption technique
•There are two keys in public key encryption: a private key and a public key.
•The private key is given to the receiver while the public key is provided to the public.
In the above figure, we see that A is sending a message to user B. A uses the public key to encrypt the
data, while B uses the private key to decrypt the data.

• In public key Encryption/Decryption, the public key used by the sender is different from the private key
used by the receiver.
• The public key is available to the public, while the private key is kept by each individual.
• The most commonly used public key algorithm is known as RSA.
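A toy RSA key pair with deliberately tiny primes shows the two-key idea (real keys use primes hundreds of digits long; the numbers here are purely illustrative):

p, q = 61, 53
n = p * q                      # 3233, part of both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, chosen coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753)

public_key, private_key = (e, n), (d, n)

m = 65                         # a message encoded as a number smaller than n
c = pow(m, e, n)               # anyone can encrypt with the public key
assert pow(c, d, n) == m       # only the private-key holder can decrypt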
Advantages of Public Key Encryption
•The main restriction of private key encryption is the sharing of a secret key. A third party cannot use this
key. In public key encryption, each entity creates a pair of keys, and they keep the private one and
distribute the public key.
• The number of keys in public key encryption is reduced tremendously. For example, for one million users
to communicate, only two million keys are required, not about 500 billion keys as in the case of secret key
encryption.
Disadvantages of Public Key Encryption
• Speed: One of the major disadvantages of public-key encryption is that it is slower than secret-key
encryption. In secret key encryption, a single shared key is used to encrypt and decrypt the message,
which speeds up the process, while in public key encryption two different keys are used, related to each
other by a complex mathematical process. Therefore, encryption and decryption take more time in public
key encryption.
• Authentication: Public key encryption does not have built-in authentication. Without authentication,
the message can be intercepted or altered without the user's knowledge.
• Inefficient: The main disadvantage of public key encryption is its complexity. If we want the method to be
effective, large numbers are needed, and converting the plaintext into ciphertext using long keys takes a
lot of time. Therefore, public key encryption algorithms are efficient for short messages but not for long
messages.
Differences b/w Secret Key Encryption & Public Key Encryption
Application Layer
The application layer in the OSI model is the closest layer to the end user which means that the
application layer and end user can interact directly with the software application. The application layer
programs are based on client and servers.
The Application layer includes the following functions:
•Identifying communication partners: The application layer identifies the availability of
communication partners for an application with data to transmit.
•Determining resource availability: The application layer determines whether sufficient network
resources are available for the requested communication.
•Synchronizing communication: All the communications occur between the applications requires
cooperation which is managed by an application layer.
Services of Application Layers
•Network Virtual terminal: An application layer allows a user to log on to a remote host. To do so,
the application creates a software emulation of a terminal at the remote host. The user's computer
talks to the software terminal, which in turn, talks to the host. The remote host thinks that it is
communicating with one of its own terminals, so it allows the user to log on.
•File Transfer, Access, and Management (FTAM): An application allows a user to access files in a
remote computer, to retrieve files from a computer and to manage files in a remote computer. FTAM
defines a hierarchical virtual file in terms of file structure, file attributes and the kind of operations
performed on the files and their attributes.
• Addressing: To obtain communication between client and server, there is a need for addressing.
When a client makes a request to the server, the request contains the server address and its own
address. When the server responds to the client's request, the response contains the destination
address, i.e., the client's address. To achieve this kind of addressing, DNS is used.
•Mail Services: An application layer provides Email forwarding and storage.
•Directory Services: An application contains a distributed database that provides access for global
information about various objects and services.
• Authentication: It authenticates the sender or the receiver of the message, or both.
DNS
An application layer protocol defines how application processes running on different systems pass
messages to each other.
•DNS stands for Domain Name System.
•DNS is a directory service that provides a mapping between the name of a host on the network and its
numerical address.
•DNS is required for the functioning of the internet.
•Each node in a tree has a domain name, and a full domain name is a sequence of symbols specified by
dots.
•DNS is a service that translates the domain name into IP addresses. This allows the users of networks
to utilize user-friendly names when looking for other hosts instead of remembering the IP addresses.
• For example, suppose the FTP site at EduSoft had the IP address 132.147.165.50; most people
would reach this site by specifying ftp.EduSoft.com. The domain name is therefore easier to remember
and more stable than the IP address.
DNS is a TCP/IP protocol used on different platforms. The
domain name space is divided into three different
sections: generic domains, country domains, and inverse
domain.
Generic Domains
•It defines the registered hosts according to their generic behavior.
•Each node in a tree defines the domain name, which is an index to the DNS database.
•It uses three-character labels, and these labels describe the organization type.
Label - Description
aero - Airlines and aerospace companies
biz - Businesses or firms
com - Commercial organizations
coop - Cooperative business organizations
edu - Educational institutions
gov - Government institutions
info - Information service providers
int - International organizations
mil - Military groups
museum - Museums and other nonprofit organizations
name - Personal names
net - Network support centers
org - Nonprofit organizations
pro - Professional individual organizations
Country Domain
The format of the country domain is the same as that of a generic domain, but it uses two-character
country abbreviations (e.g., us for the United States) in place of the three-character organizational
abbreviations.
Inverse Domain
The inverse domain is used for mapping an address to a name. For example, when a server has
received a request from a client and the server holds files for authorized clients only, it can determine
whether the client is on the authorized list by sending a query to the DNS server asking it to map the
client's address to a name.
Working of DNS
• DNS is a client/server network communication protocol. DNS clients send requests to the server,
while DNS servers send responses to the clients.
• Client requests that contain a name to be converted into an IP address are known as forward DNS
lookups, while requests containing an IP address to be converted into a name are known as reverse
DNS lookups.
• DNS implements a distributed database to store the names of all the hosts available on the internet.
• If a client such as a web browser sends a request containing a hostname, then a piece of software
such as a DNS resolver sends a request to the DNS server to obtain the IP address of the hostname.
If the DNS server does not hold the IP address associated with the hostname, it forwards the request
to another DNS server. Once the IP address arrives at the resolver, the resolver returns it and the
request is completed over the internet protocol.
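Both directions of lookup are available through the standard socket library; a short Python sketch (the host name is only an example):

import socket

ip = socket.gethostbyname("www.example.com")       # forward lookup: name -> IP address
print(ip)

try:
    name, aliases, addresses = socket.gethostbyaddr(ip)   # reverse lookup: IP address -> name
    print(name)
except socket.herror:
    print("no reverse (PTR) record for this address")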
FTP
•FTP stands for File transfer protocol.
•FTP is a standard internet protocol provided by TCP/IP used for transmitting the files from one
host to another.
•It is mainly used for transferring the web page files from their creator to the computer that acts as
a server for other computers on the internet.
•It is also used for downloading the files to computer from other servers.
Objectives of FTP
•It provides the sharing of files.
•It is used to encourage the use of remote computers.
• It transfers the data more reliably and efficiently.

Why FTP?
Although transferring files from one system to another seems simple and
straightforward, it can sometimes cause problems. For example, two
systems may have different file conventions. Two systems may have different
ways to represent text and data. Two systems may have different directory
structures. The FTP protocol overcomes these problems by establishing two
connections between the hosts: one connection is used for data transfer, and
the other is used for control information.
Mechanism of FTP

The above figure shows the basic model of FTP. The FTP client has three components: the user
interface, the control process, and the data transfer process. The server has two components: the
server control process and the server data transfer process.
There are two types of connections in FTP:
• Control Connection: The control connection uses very simple rules for communication.
Through the control connection, we can transfer a line of command or a line of response at a time.
The control connection is made between the control processes. The control connection remains
connected during the entire interactive FTP session.
•Data Connection: The Data Connection uses very complex rules as data types may vary. The
data connection is made between data transfer processes. The data connection opens when a
command comes for transferring the files and closes when the file is transferred.
FTP Clients
•FTP client is a program that implements a file transfer protocol which allows you to transfer
files between two hosts on the internet.
•It allows a user to connect to a remote host and upload or download the files.
• It has a set of commands that we can use to connect to a host, transfer files between the two
hosts, and close the connection.
• The FTP program is also available as a built-in component in a web browser. This GUI-based
FTP client makes file transfer very easy and does not require you to remember the FTP
commands.
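Python's standard ftplib module behaves like such a client; the sketch below (server name, credentials, and file name are placeholders) opens the control connection, then lets the library open data connections as needed for each transfer:

from ftplib import FTP

ftp = FTP("ftp.example.com")             # control connection to port 21
ftp.login("user", "password")            # or ftp.login() for anonymous access
ftp.retrlines("LIST")                    # directory listing travels over a data connection
with open("report.pdf", "wb") as f:
    ftp.retrbinary("RETR report.pdf", f.write)   # download a file over another data connection
ftp.quit()                               # close the control connection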
Advantages of FTP:
• Speed: One of the biggest advantages of FTP is speed. FTP is one of the fastest ways to
transfer files from one computer to another.
• Efficient: It is more efficient because we do not need to complete all the operations to get the
entire file.
• Security: To access an FTP server, we need to log in with a username and password.
Therefore, we can say that FTP is more secure.
•Back & forth movement: FTP allows us to transfer the files back and forth. Suppose you are a
manager of the company, you send some information to all the employees, and they all send
information back on the same server.
Disadvantages of FTP:
• The industry standard requires that all FTP transmissions be encrypted. However, not all FTP
providers are equal, and not all of them offer encryption. So, we have to look out for FTP providers
that provide encryption.
• FTP serves two operations, i.e., sending and receiving large files on a network. However, the
maximum file size that can be sent is 2 GB. It also doesn't allow you to run simultaneous transfers to
multiple receivers.
• Passwords and file contents are sent in clear text, which allows unwanted eavesdropping. So, it is
quite possible that attackers can carry out a brute force attack by trying to guess the FTP
password.
• It is not compatible with every system.
Telnet
• The main task of the internet is to provide services to users. For example, users want to run
different application programs at a remote site and transfer the results to the local site. This
would require a separate client-server program, such as FTP or SMTP, for each application, but it
is not practical to create a specific program for every demand.
• The better solution is to provide a general client-server program that lets the user access any
application program on a remote computer; in other words, a program that allows a user to log on
to a remote computer. A popular client-server program, Telnet, is used to meet such demands.
Telnet is an abbreviation for Terminal Network.
•Telnet provides a connection to the remote computer in such a way that a local terminal appears
to be at the remote side.
There are two types of login:
Local Login
•When a user logs into a local computer, then it is known as local login.
• When the workstation is running a terminal emulator, the keystrokes entered by the user are
accepted by the terminal driver. The terminal driver then passes these characters to the
operating system, which in turn invokes the desired application program.
• However, the operating system gives special meaning to special characters. For example, in
UNIX some combinations of characters have special meanings, such as the control character
with "z", which means suspend. Such situations do not create any problem, as the terminal
driver knows the meaning of these characters. But they can cause problems in remote login.
Remote login

• When the user wants to access an application program on a remote computer, the user must
perform a remote login.
How remote login occurs
At the local site
The user sends the keystrokes to the terminal driver, and the characters are then sent to the TELNET
client. The TELNET client, in turn, transforms the characters into a universal character set known
as Network Virtual Terminal (NVT) characters and delivers them to the local TCP/IP stack.
At the remote site
The commands in NVT form are transmitted to the TCP/IP stack at the remote machine. There, the
characters are delivered to the operating system and then passed to the TELNET server. The
TELNET server transforms the characters into a form that the remote computer can understand.
However, the characters cannot be passed directly to the operating system, because the remote
operating system is not designed to receive characters from a TELNET server; it requires a
piece of software that can accept the characters from the TELNET server. The operating system
then passes these characters to the appropriate application program.
Network Virtual Terminal (NVT)
•The network virtual terminal is an interface that defines how data and commands are sent across
the network.
• In today's world, systems are heterogeneous. For example, the end-of-file token on a DOS
operating system is ctrl+z, while on a UNIX operating system it is ctrl+d.
• TELNET solves this issue by defining a universal interface known as the Network Virtual
Terminal (NVT).
•The TELNET client translates the characters that come from the local terminal into NVT form and
then delivers them to the network. The Telnet server then translates the data from NVT form into a
form which can be understandable by a remote computer.
SMTP
• SMTP stands for Simple Mail Transfer Protocol.
• SMTP is a set of communication guidelines that allow software to transmit electronic mail over
the internet.
• It is a program used for sending messages to other computer users based on e-mail addresses.
•It provides a mail exchange between users on the same or different computers, and it also supports:
• It can send a single message to one or more recipients.
• Sending message can include text, voice, video or graphics.
• It can also send the messages on networks outside the internet.
•The main purpose of SMTP is used to set up communication rules between servers. The servers
have a way of identifying themselves and announcing what kind of communication they are trying to
perform. They also have a way of handling the errors such as incorrect email address. For example,
if the recipient address is wrong, then the receiving server replies with an error message of some kind.

Components of SMTP
•First, we will break the SMTP client and SMTP server into two components such as user agent
(UA) and mail transfer agent (MTA). The user agent (UA) prepares the message, creates the
envelope and then puts the message in the envelope. The mail transfer agent (MTA) transfers
this mail across the internet.
•SMTP allows a more complex system by adding a relaying system. Instead of just having one
MTA at sending side and one at receiving side, more MTAs can be added, acting either as a
client or server to relay the email.
•The relaying system without TCP/IP protocol can also be used to send the emails to users, and
this is achieved by the use of the mail gateway. The mail gateway is a relay MTA that can be used
to receive an email.
Working of SMTP
1. Composition of Mail: A user sends an e-mail by composing an electronic mail message
using a Mail User Agent (MUA). A Mail User Agent is a program used to send and receive
mail. The message contains two parts: body and header. The body is the main part of the
message, while the header includes information such as the sender and recipient addresses.
The header also includes descriptive information such as the subject of the message. In this
case, the message body is like a letter and the header is like an envelope that contains the
recipient's address.
2. Submission of Mail: After composing an email, the mail client submits the completed
e-mail to the SMTP server by using SMTP on TCP port 25.
3. Delivery of Mail: E-mail addresses contain two parts: the username of the recipient and the
domain name. For example, in vivek@gmail.com, "vivek" is the username of the recipient and
"gmail.com" is the domain name.
If the domain name of the recipient's email address is different from the sender's domain
name, then the MSA will send the mail to the Mail Transfer Agent (MTA). To relay the email, the
MTA finds the target domain. It checks the MX record in the Domain Name System to obtain
the target domain. The MX record contains the domain name and IP address of the recipient's
domain. Once the record is located, the MTA connects to the exchange server to relay the
message.
4. Receipt and Processing of Mail: Once the incoming message is received, the exchange server
delivers it to the incoming server (Mail Delivery Agent), which stores the e-mail where it waits for the
user to retrieve it.
5. Access and Retrieval of Mail: The stored email in the MDA can be retrieved by using a MUA (Mail
User Agent). The MUA can be accessed by using a login and password.
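The submission step can be seen with Python's standard smtplib and email modules; the server name and addresses below are placeholders, and many real servers require TLS and authentication in addition to this minimal sketch:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"        # header: envelope-style information
msg["To"] = "vivek@gmail.com"
msg["Subject"] = "Meeting"
msg.set_content("See you at 10.")        # body of the message

with smtplib.SMTP("smtp.example.com", 25) as server:   # submission over TCP port 25
    server.send_message(msg)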
SNMP
•SNMP stands for Simple Network Management Protocol.
•SNMP is a framework used for managing devices on the internet.
• It provides a set of operations for monitoring and managing the internet.

SNMP Concept
• SNMP has two components: a manager and an agent.
• The manager is a host that controls and monitors a set of agents, such as routers.
•It is an application layer protocol in which a few manager stations can handle a set of agents.
•The protocol designed at the application level can monitor the devices made by different
manufacturers and installed on different physical networks.
•It is used in a heterogeneous network made of different LANs and WANs connected by routers or
gateways.
Managers & Agents
•A manager is a host that runs the SNMP client program while the agent is a router that runs
the SNMP server program.
•Management of the internet is achieved through simple interaction between a manager and
agent.
•The agent is used to keep the information in a database while the manager is used to access
the values in the database. For example, a router can store the appropriate variables such as a
number of packets received and forwarded while the manager can compare these variables to
determine whether the router is congested or not.
• Agents can also contribute to the management process. A server program on the agent checks
the environment; if something goes wrong, the agent sends a warning message to the
manager.
Management with SNMP has three basic ideas:
•A manager checks the agent by requesting the information that reflects the behavior of the
agent.
•A manager also forces the agent to perform a certain function by resetting values in the agent
database.
•An agent also contributes to the management process by warning the manager regarding an
unusual condition.
Management Components
• Management is not achieved only through the SNMP protocol but also through the use
of other protocols that cooperate with SNMP. Management is achieved through the
use of two other protocols: SMI (Structure of Management Information) and MIB
(Management Information Base).
• Management is a combination of SMI, MIB, and SNMP. All three make use of
Abstract Syntax Notation 1 (ASN.1) and the Basic Encoding Rules (BER).
SMI
The SMI (Structure of management information) is a component used in network management.
Its main function is to define the type of data that can be stored in an object and to show how to
encode the data for the transmission over a network.
MIB
•The MIB (Management information base) is a second component for the network management.
•Each agent has its own MIB, which is a collection of all the objects that the manager can
manage. MIB is categorized into eight groups: system, interface, address translation, ip, icmp,
tcp, udp, and egp. These groups are under the mib object.
SNMP
SNMP defines five types of messages: GetRequest, GetNextRequest, SetRequest, GetResponse,
and Trap.
GetRequest: The GetRequest message is sent from a manager (client) to the agent (server) to
retrieve the value of a variable.
GetNextRequest: The GetNextRequest message is sent from the manager to the agent to retrieve the
value of a variable. This type of message is used to retrieve the values of the entries in a table. If
the manager does not know the indexes of the entries, it cannot retrieve the values directly; in such
situations, the GetNextRequest message is used to retrieve the value of the entry following a given object.
GetResponse: The GetResponse message is sent from an agent to the manager in response to
the GetRequest and GetNextRequest message. This message contains the value of a variable
requested by the manager.
SetRequest: The SetRequest message is sent from a manager to the agent to set (store) a value in a
variable.

Trap: The Trap message is sent from an agent to the manager to report an event. For
example, if the agent is rebooted, it informs the manager and sends the time of the
rebooting.
HTTP
•HTTP stands for HyperText Transfer Protocol.
•It is a protocol used to access the data on the World Wide Web (www).
•The HTTP protocol can be used to transfer the data in the form of plain text, hypertext, audio,
video, and so on.
• This protocol is known as HyperText Transfer Protocol because of its efficiency in a hypertext
environment, where there are rapid jumps from one document to another.
•HTTP is similar to the FTP as it also transfers the files from one host to another host. But, HTTP is
simpler than FTP as HTTP uses only one connection, i.e., no control connection to transfer the files.
•HTTP is used to carry the data in the form of MIME-like format.
•HTTP is similar to SMTP as the data is transferred between client and server. The HTTP differs
from the SMTP in the way the messages are sent from the client to the server and from server to
the client. SMTP messages are stored and forwarded while HTTP messages are delivered
immediately.
Features of HTTP:
• Connectionless protocol: HTTP is a connectionless protocol. The HTTP client initiates a request
and waits for a response from the server. When the server receives the request, it processes
the request and sends back a response, after which the client disconnects the connection. The
connection between client and server exists only for the duration of the current request and
response.
• Media independent: The HTTP protocol is media independent, as data can be sent as long as both
the client and the server know how to handle the data content. Both the client and the server are
required to specify the content type in the MIME-type header.
•Stateless: HTTP is a stateless protocol as both the client and server know each other only during
the current request. Due to this nature of the protocol, both the client and server do not retain the
information between various requests of the web pages.
HTTP Transactions

The above figure shows the HTTP transaction between a client and a server. The client initiates a
transaction by sending a request message to the server. The server replies to the request
message by sending a response message.
Messages
HTTP messages are of two types: request and response. Both the message types follow the
same message format.
Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
Response Message: The response message is sent by the server to the client that consists of a
status line, headers, and sometimes a body.
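A single request/response exchange can be driven from Python's standard http.client module; the host and path below are placeholders:

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/index.html", headers={"Accept": "text/html"})  # request line + headers
response = conn.getresponse()             # status line, headers, and body
print(response.status, response.reason)   # e.g. 200 OK
body = response.read()
conn.close()                              # nothing is remembered between exchanges (stateless)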
Uniform Resource Locator (URL)
• A client that wants to access a document on the internet needs an address. To
facilitate access to documents, HTTP uses the concept of the Uniform Resource
Locator (URL).
•The Uniform Resource Locator (URL) is a standard way of specifying any kind of
information on the internet.
•The URL defines four parts: method, host computer, port, and path.
•Method: The method is the protocol used to retrieve the document from a server. For example,
HTTP.
•Host: The host is the computer where the information is stored, and the computer is given an alias
name. Web pages are mainly stored in the computers and the computers are given an alias name
that begins with the characters "www". This field is not mandatory.
•Port: The URL can also contain the port number of the server, but it's an optional field. If the port
number is included, then it must come between the host and path and it should be separated from
the host by a colon.
• Path: Path is the pathname of the file where the information is stored. The path itself contains
slashes that separate the directories from the subdirectories and files.
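The four parts can be pulled out of a URL with Python's standard urllib.parse module; the URL below is a made-up example:

from urllib.parse import urlsplit

url = urlsplit("http://www.example.com:8080/docs/networks/http.html")
print(url.scheme)     # 'http'            -> the method (protocol)
print(url.hostname)   # 'www.example.com' -> the host
print(url.port)       # 8080              -> the optional port, separated from the host by a colon
print(url.path)       # '/docs/networks/http.html' -> the path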