FACULTY OF ENGINEERING
TERM PAPER
ON
PACKET SWITCHING
BY
NOVEMBER, 2018.
ABSTRACT
Packet switching is the basis for the Internet Protocol (IP). In packet switching, information
flows are broken into variable-sized packets. These packets are sent, one by one, to the nearest
router, which looks up the destination address and then forwards them to the corresponding
next hop. This process is repeated until the packet reaches its destination.
The routing of the information is thus done locally, hop by hop. Routing decisions are
independent of decisions made in the past and in other routers; however, they are based on
network state and topology information that is exchanged among routers using BGP, IS-IS or
OSPF. The network does not need to keep any state to operate, other than the routing tables. The
forwarding mechanism is called store-and-forward because IP packets are completely received,
stored in the router while being processed, and then transmitted. Additionally, packets may need
to wait in buffers while the outgoing resources are busy. If the system runs out of buffers, packets
are dropped. With most scheduling policies, such as FCFS and WFQ, packet-switching resources
experience contention when they have more arrivals/requests than they can process. Two
examples are the outgoing links and the router interconnects.
CONTENTS
ABSTRACT
1. INTRODUCTION
3. PACKET FORMATS
7.1 FLOODING
8.1. FIFO
11. CONCLUSION
REFERENCES
1. INTRODUCTION
Traditional telephone network operate on the basis of circuit switching. A call setup process
reserves resources (time slot) along a path so that the stream of voice sample can be transmitted
with very low delay across the network. The resources allocated to the user can’t be used by
other users for the duration of the call. This approach is inefficient when the amount of
information transferred is small or if the information is produced in burst, as is the case in many
computer applications. In this paper we examine networks that transfer block of information
called packets. Packet switching network are better matched to computer application and can
Packet switching is similar to message switching using short messages. Any message exceeding
a network-defined maximum length is broken up into shorter units, known as packets, for
transmission; the packets, each with an associated header, are then transmitted individually
through the network. The performance of Packet Switching is called Best Effort performance. If
you transmit from sender to receiver, the entire network will do its best to get the packet to the
other end as fast as possible, but there are no guarantees on how fast that packet will arrive.
Datagram Packet Switching: This approach is connectionless and does not involve prior
connection setup; each packet is routed based on the destination address until the packet of
information arrives at its destination. Routing decisions are made dynamically, so each packet
may follow a different route; thus the packets may arrive out of order, and a message may be sent
back to request a resend. Each packet is transmitted independently through the network and has
an attached header that provides all the information required to route the packet to its
destination. When a packet arrives at a packet switch, the destination address in the header is
examined to determine the next hop in the path to the destination. The packet is then placed in a
queue to wait until the given transmission line becomes available. By sharing the transmission
line among multiple packets, packet switching can achieve high utilization at the expense of
packet queuing delays. We note that routers in the Internet are packet switches that operate in
this connectionless, datagram mode. Because each packet is routed independently, packets from
the same source to the same destination may traverse different paths through the network, as
shown in fig 1. For example, the route may change in response to a network fault. Thus packets
may arrive out of order.
Fig 1. Packet switching: datagram approach.
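To make this hop-by-hop lookup concrete, the Python sketch below forwards a datagram one switch at a time using per-switch routing tables keyed on the destination address. It is a minimal illustration only; the node names, addresses and table entries are assumptions, not taken from this paper.

```python
# Minimal sketch of datagram (hop-by-hop) forwarding.
# The topology, addresses and next-hop entries are illustrative only.

ROUTING_TABLES = {
    "R1": {"hostB": "R3", "hostC": "R4"},   # at R1, packets for hostB go to R3
    "R3": {"hostB": "R6", "hostC": "R5"},
    "R6": {"hostB": "hostB"},               # directly attached destination
}

def forward(packet, node):
    """Return the next hop chosen by 'node' for this packet's destination."""
    table = ROUTING_TABLES[node]
    return table[packet["dst"]]

# Each packet carries its full destination address in the header and is
# routed independently: R1 -> R3 -> R6 -> hostB.
packet = {"src": "hostA", "dst": "hostB", "payload": b"hello"}
node = "R1"
while node != packet["dst"]:
    node = forward(packet, node)
    print("packet forwarded to", node)
```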
Virtual-Circuit Packet Switching: This approach involves setting up a connection across the
network before the information can be transferred. The setup procedures typically involve the
exchange of signaling messages and the allocation of resources along the path from the input to
the output for the duration of the connection. A route is set up prior to packets being sent, and
the packets all follow this route. This makes routing through the network very easy, and the
packets are received in the correct order. Both approaches involve the use of switches or routers
to direct packets across the network. As in circuit switching, the call setup procedure usually
takes place before any packet can flow through the network, as shown in fig 2.
The connection setup procedure establishes a path through the network and then sets
parameters in the switches along the path, as shown in fig 3. The controller/processor in
every switch is involved in the exchange of signaling messages to set up the path. As in the
datagram approach, the transmission links are shared by packets in many flows. All packets
for the connection then follow the same path.
Fig 3. Delay in virtual-circuit packet switching.

In datagram packet switching each packet must contain the full address of the source and
destination. In large networks, this address may require a large number of bits, resulting in
significant packet overhead and hence wasted transmission bandwidth. One advantage of the
virtual-circuit approach is that the packet header need only carry a short virtual-circuit
identifier (VCI); the routing information is held in tables located in the various switches along
the path. At every switch, at each input port, the VCI in the header is used to access the table.
The table lookup provides the output port to which the packet is to be forwarded and the VCI
that is to be used at the input port of the next switch. Thus the call setup procedure sets up a
chain of pointers across the network that directs the flow of packets in a connection
(Huitema C., 1995). The table entry for a VCI can also specify the type of priority that is to be
given to the packet by the scheduler that controls the transmissions on the output port
(Stevens, January 1997).
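The chain-of-pointers idea can be illustrated with a small sketch of a single switch's forwarding table; the port numbers and VCI values below are invented for the example, not taken from the paper.

```python
# Sketch of virtual-circuit forwarding: at each switch the (input port, VCI)
# pair indexes a table that gives the output port and the outgoing VCI.
# Ports and VCI values are made up for illustration.

# (input_port, incoming_vci) -> (output_port, outgoing_vci) at one switch
VC_TABLE = {
    (1, 44): (3, 17),
    (2, 17): (3, 44),
}

def switch_packet(input_port, vci):
    out_port, out_vci = VC_TABLE[(input_port, vci)]
    return out_port, out_vci

# A packet of the connection arrives on port 1 with VCI 44; the lookup says
# send it out on port 3, relabelled with VCI 17 for the next switch.
print(switch_packet(1, 44))   # -> (3, 17)
```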
3. PACKET FORMATS
A packet contains three major fields: the header, the message, and redundancy check bits. The
most popular technique uses cyclic redundancy checks (CRCs). A CRC is essentially a set of
parity bits that cover overlapping fields of message bits; a CRC can detect a small number of errors.
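As a concrete illustration of the check-bit idea, here is a minimal CRC-8 routine; the generator polynomial and the example message are arbitrary choices for the sketch, not values specified in this paper.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over 'data' using the given generator polynomial
    (x^8 + x^2 + x + 1 by default). Returns the 8 check bits."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

message = b"example packet payload"
check = crc8(message)
# The sender appends 'check' to the packet; the receiver recomputes the CRC
# over the received message and compares it with the received check bits.
print(hex(check))
```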
A header typically contains numerous subfields in addition to the necessary address field.
Each data packet is transmitted individually and can even follow different routes to its
destination. After all the packets forming the message arrive at the destination, they are
recompiled into the original message. In TCP/IP, for example, a 2 MB file would be broken up
into chunks of 512 bytes in size. Before transmission each packet is given a header that contains
the network IP address that it needs to arrive at and also the IP address from which it was sent.
Moreover, the header allots a sequence number to each packet and records how many packets the
message was split into. After leaving the computer, the packets head off in different directions,
each taking whatever route is available. Each router figures out which is the next fastest
connection and sends each packet on its way. This technique works extremely well, because if
one branch gets busy the packets can be sent along a less congested route. Because the packets
were numbered on the transmitting side, when they arrive at their destination they are put back
together again in the right order before the message is finally delivered.
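The segmentation and in-order reassembly described above can be sketched as follows; the 512-byte chunk size comes from the example in the text, while the addresses and the dictionary-based "header" are illustrative assumptions.

```python
# Illustrative sketch of segmentation and reassembly: a byte stream is split
# into fixed-size chunks, each tagged with source, destination and a sequence
# number, then put back in order at the receiver.

CHUNK = 512

def packetize(data: bytes, src: str, dst: str):
    total = (len(data) + CHUNK - 1) // CHUNK
    return [
        {"src": src, "dst": dst, "seq": i, "total": total,
         "payload": data[i * CHUNK:(i + 1) * CHUNK]}
        for i in range(total)
    ]

def reassemble(packets):
    # Packets may arrive in any order; the sequence numbers restore it.
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

data = b"x" * (2 * 1024 * 1024)                # a 2 MB "file"
packets = packetize(data, "10.0.0.1", "10.0.0.2")
assert reassemble(reversed(packets)) == data   # out-of-order arrival handled
print(len(packets), "packets of up to", CHUNK, "bytes")
```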
A packet-switching network consists of switching nodes connected by transmission links in an
arbitrary mesh-like fashion, as shown in fig 3. As suggested by the figure, a packet could
take several possible paths from host A to host B. For example, three possible paths are 1-3-6,
1-4-5-6 and 1-2-5-6. But which path is the best one? Here the meaning of "best" depends on
the objective function that the network operator tries to optimize. If the objective is to minimize
the number of hops, then path 1-3-6 is the best. If each link incurs a certain delay and the
objective function is to minimize the end-to-end delay, then the best path is the one that gives the
minimum end-to-end delay. Yet a third objective function is selecting the path with the greatest
available bandwidth. The purpose of the routing algorithm is to identify the set of paths that are
best in a sense defined by the network operator. Note that a routing algorithm must have global
knowledge about the network state in order to perform its task (Halabi, 1997) (Jacobson, August
1988).
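The dependence of the "best" path on the objective function can be shown with a small sketch over the three example paths above; the per-link delay and bandwidth figures are invented purely for illustration.

```python
# Sketch of how the "best" path depends on the objective function, using the
# three candidate paths from the example (1-3-6, 1-4-5-6, 1-2-5-6). The
# per-link delays and bandwidths are assumed values, not from the paper.

paths = [[1, 3, 6], [1, 4, 5, 6], [1, 2, 5, 6]]

# assumed link metrics: (delay in ms, available bandwidth in Mb/s)
links = {
    (1, 3): (5, 10), (3, 6): (8, 10),
    (1, 4): (2, 50), (4, 5): (2, 50), (5, 6): (3, 40),
    (1, 2): (4, 100), (2, 5): (4, 100),
}

def hops(path):
    return len(path) - 1

def delay(path):
    return sum(links[(a, b)][0] for a, b in zip(path, path[1:]))

def bottleneck_bw(path):
    return min(links[(a, b)][1] for a, b in zip(path, path[1:]))

print("fewest hops     :", min(paths, key=hops))           # 1-3-6
print("lowest delay    :", min(paths, key=delay))
print("widest bandwidth:", max(paths, key=bottleneck_bw))
```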
The main ingredients of a good routing algorithm depend on the objective function that one is
trying to optimize. However, in general a routing algorithm should seek one or more of the
following goals:
1. Rapid and accurate delivery of packets: A routing algorithm must operate correctly; that
is, it must be able to find a route to the destination if one exists. In addition, the algorithm
should not take an unreasonably long time to find the route to the destination.
2. Adaptability to changes in network topology resulting from node or link failures: In real
networks, equipment and transmission lines are subject to failures. Thus a routing
algorithm must be able to adapt to this situation and reconfigure the routes automatically.
3. Adaptability to varying source-destination traffic loads: Traffic loads are quantities that
change dynamically. In a period of 24 hours, traffic loads may go through cycles of
heavy and light periods. An adaptive routing algorithm would be able to adjust the routes
accordingly.
4. Ability to route packets away from temporarily congested links: A routing algorithm
should try to avoid heavily congested links. Often it is desirable to balance the load on
each link.
5. Ability to determine the connectivity of the network: To find optimal routes, the routing
system needs to know the connectivity of the network, which it typically learns by
exchanging control messages with other routing systems. These messages represent an
overhead on the network.
7.1 FLOODING
The principle of flooding calls for a packet switch to forward an incoming packet to all
ports except the one on which the packet was received. If each switch performs this flooding
process, the packet will eventually reach the destination. Flooding is a very effective
routing approach when the information in the routing tables is not available, such as during
system start-up or after a failure.
However, flooding may easily swamp the network, as one packet creates multiple packets
that in turn create multiples of multiple packets, generating an exponential growth rate as
illustrated in fig 4. Initially, one packet arriving at node 1 triggers three packets, to nodes 2, 3
and 4. In the second phase, nodes 2, 3 and 4 send two, two and three packets respectively.
These packets arrive at nodes 2 through 6. In the third phase 15 more packets are
generated, giving a total of 25 packets after three phases. Clearly, flooding needs to be
controlled so that packets are not generated excessively. To limit such behavior, one can
use one of the following three methods.
One simple method is to use a time-to-live field in each packet (a small simulation of
this method is sketched after the three methods below). When the source sends a packet,
the time-to-live field is initially set to some small number (say, 10 or smaller). Each switch
decrements the field by one before flooding the packet. If the value reaches zero, the
switch discards the packet. To avoid unnecessary waste of bandwidth, the time-to-live
should ideally be set to the minimum hop number between the two furthest nodes (called
the diameter of the network). In fig 4 the diameter of the network is two, so to have a
packet reach any destination it is sufficient to set the initial time-to-live value to 2.
In the second method, each switch adds its identifier to the header of the packet
before it floods the packet. When a switch encounters a packet that contains the
identifier of that switch, it discards the packet. This method effectively prevents a
packet from circulating in the network indefinitely.
The third method is similar to the second method in that both try to discard
old packets; the only difference lies in the implementation. Here each packet
from a given source is identified with a unique sequence number. When a switch
receives a packet, the switch records the source address and the sequence number
of the packet. If the switch determines, based on the stored source addresses and
sequence numbers, that it has already seen the packet, it discards it.
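To make the first (time-to-live) method concrete, here is a small flooding simulation; the topology and the diameter-based TTL value are illustrative assumptions rather than the exact network of fig 4.

```python
# Minimal simulation of controlled flooding with a time-to-live field, on a
# small made-up topology. Each switch decrements the TTL and discards the
# packet when it reaches zero, which bounds the exponential packet growth.

NEIGHBOURS = {
    1: [2, 3, 4],
    2: [1, 3, 5],
    3: [1, 2, 4, 6],
    4: [1, 3, 6],
    5: [2, 6],
    6: [3, 4, 5],
}

def flood(node, came_from, ttl, copies):
    """Forward the packet to every port except the one it arrived on."""
    copies.append(node)
    if ttl == 0:
        return                        # TTL exhausted: discard instead of flooding
    for nxt in NEIGHBOURS[node]:
        if nxt != came_from:
            flood(nxt, node, ttl - 1, copies)

copies = []
flood(1, came_from=None, ttl=2, copies=copies)   # TTL set to the assumed diameter
print("packet copies processed:", len(copies))
```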
This approach, known as deflection routing, requires the network to provide multiple paths
for each source-destination pair. Each switch first tries to forward a packet to the preferred
port. If the preferred port is busy, the packet is deflected to another port.
Source routing is a routing approach that does not require an intermediate node to maintain a
routing table, but rather puts more of the burden on the source hosts. Source routing works in
either datagram or virtual-circuit packet switching. Before a source host can send a packet, the
host has to know the complete route to the destination host in order to include the route
information in the header of the packet. The route information contains the sequence of
nodes to traverse and should give each intermediate node sufficient information to
forward the packet to the next node until the packet reaches the destination. Figure 5
shows how source routing works. (Jacobson, August 1988) (Huitema C., 1995)
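A minimal sketch of source routing follows, assuming a simple list-of-nodes header; the node names and route are made up for the example.

```python
# Sketch of source routing: the source host places the full sequence of nodes
# in the packet header, and each intermediate node simply forwards the packet
# to the next entry in that list. Node names are illustrative.

def send_source_routed(payload, route):
    return {"route": list(route), "hop_index": 0, "payload": payload}

def forward_source_routed(packet):
    """Called at each node: advance the pointer and return the next node,
    or None if the packet has reached the final node on the route."""
    packet["hop_index"] += 1
    if packet["hop_index"] >= len(packet["route"]):
        return None
    return packet["route"][packet["hop_index"]]

pkt = send_source_routed(b"data", route=["A", "1", "4", "5", "6", "B"])
node = pkt["route"][0]
while node is not None:
    print("at node", node)
    node = forward_source_routed(pkt)
```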
Traffic management is concerned with the delivery of QoS to specific packet flows. Traffic
management entails mechanisms for managing the flows in a network to control the load that is
applied to the various links and switches. Traffic management also involves the setting of
priorities and scheduling for packets and cells belonging to different classes, flows or
connections. It may also involve the policing and shaping of traffic flows as they enter the
network. (Jacobson, August 1988)
8.1. FIFO
The simplest approach to managing a multiplexer involves first-in, first-out (FIFO) queueing,
where all arriving packets are placed in a common queue and transmitted in order of arrival, as
shown in fig 8a. Packets are discarded when they arrive at a full buffer. The delay and loss
experienced by packets in a FIFO system depend on the inter-arrival times and on the packet
lengths. As inter-arrivals become burstier or packet lengths more variable, performance
deteriorates. With FIFO queueing it is not possible to provide different information flows with
different qualities of service. FIFO systems are also subject to hogging, which occurs when a user
sends packets at a high rate and fills the buffer in the system, thus depriving other users of access
to the multiplexer (Stevens, January 1997).
Fig 8(a): FIFO queueing. (b) FIFO queueing with discard priority.
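A minimal sketch of FIFO queueing with a finite buffer follows; the buffer size and packet names are illustrative assumptions.

```python
# Sketch of FIFO queueing with a finite buffer: arrivals join the tail of a
# single shared queue, departures leave from the head, and an arrival that
# finds the buffer full is dropped.

from collections import deque

class FifoMultiplexer:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.queue = deque()
        self.dropped = 0

    def arrive(self, packet):
        if len(self.queue) >= self.buffer_size:
            self.dropped += 1          # buffer full: packet is discarded
        else:
            self.queue.append(packet)

    def transmit(self):
        # Packets leave strictly in order of arrival.
        return self.queue.popleft() if self.queue else None

mux = FifoMultiplexer(buffer_size=4)
for i in range(6):                     # a burst of 6 arrivals
    mux.arrive(f"pkt{i}")
print("dropped:", mux.dropped)         # 2 packets lost to the full buffer
print("served :", [mux.transmit() for _ in range(4)])
```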
The second approach involves defining a number of priority classes. As shown in fig 8b, each
time the transmission line becomes available, the next packet for transmission is selected from
the head of the line of the highest-priority queue that is not empty. For example, packets
requiring low delay may be assigned a high priority, whereas packets that are not urgent may be
given a lower priority. The size of the buffer for the different priority classes can be selected to
meet different loss requirements. Although this approach provides different levels of service to
the different classes, it still has shortcomings. For example, it does not allow for providing some
degree of guaranteed access to transmission bandwidth to the lower-priority classes. Fairness
problems may also arise when a certain user hogs the bandwidth by sending packets at a high
rate.
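A minimal sketch of head-of-line priority scheduling with one queue per class follows; the class names, number of classes and arrival order are made up for the example.

```python
# Sketch of head-of-line (HOL) priority queueing: one FIFO queue per priority
# class, and whenever the line is free the next packet is taken from the
# highest-priority non-empty queue.

from collections import deque

class PriorityScheduler:
    def __init__(self, num_classes):
        # class 0 is the highest priority
        self.queues = [deque() for _ in range(num_classes)]

    def arrive(self, packet, priority_class):
        self.queues[priority_class].append(packet)

    def next_packet(self):
        for q in self.queues:          # scan from highest to lowest priority
            if q:
                return q.popleft()
        return None

sched = PriorityScheduler(num_classes=2)
sched.arrive("bulk-1", 1)
sched.arrive("voice-1", 0)
sched.arrive("bulk-2", 1)
# The delay-sensitive packet is served first even though it arrived later.
print([sched.next_packet() for _ in range(3)])   # ['voice-1', 'bulk-1', 'bulk-2']
```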
Packet switching offers a number of advantages:
Line efficiency: There is better use of communication lines, since a single node-to-node link
can be shared dynamically by many packets over time.
Data rate conversion: Each station connects to its local node at its own speed.
Packets are accepted even when the network is busy: Delivery may slow down, but packets are
not refused.
It also has drawbacks:
There are quality-of-service issues because the links are shared by many packets. This can
cause congestion, and a packet may be dropped once its time-to-live (TTL) elapses.
It is complex because each packet must carry address information and be switched and routed
until it reaches its destination.
11. CONCLUSION
Packet switching arose as an alternative to sending data over the analogue circuit-switched
network. Circuit switching is not very efficient for small messages, due to poor bandwidth
utilization, and the analogue circuits make the data subject to noise and errors. The biggest
packet-switched network is the Internet. The Internet uses the datagram packet-switching
method, whereas X.25 is an example of a network based on virtual circuits.
REFERENCES
1. Halabi, B. (1997). Internet Routing Architectures. New Riders Publishing.
2. Huitema, C. (1995). Routing in the Internet. Prentice Hall.
3. Jacobson, V. (August 1988). Congestion Avoidance and Control. ACM SIGCOMM Computer Communication Review.
4. Stevens, W. (January 1997). TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms. RFC 2001.
5. http://www.telecomabc.com/p/packet-switching.html