NETWORK LAYER
Switching
• Switching is the process of transferring data packets from one device
to another in a network, or from one network to another, using
specific devices called switches.
• A computer user experiences switching all the time.
• For example, when accessing the Internet from your computer, every time you request a webpage, the request is delivered through the switching of data packets.
Network Switching
• A switch is a dedicated piece of computer hardware that facilitates the process of switching, i.e., receiving incoming data packets and transferring them to their destination.
• A switch works at the Data Link layer of the OSI Model. A switch primarily
handles the incoming data packets from a source computer or network and
decides the appropriate port through which the data packets will reach
their target computer or network.
• A switch decides the port through which a data packet shall pass with the help of its destination MAC (Media Access Control) address. A switch does this efficiently by maintaining a switching table (also known as a forwarding table).
Process of Switching
• Frame Reception: The switch receives a data frame or packet from a computer connected to one of its ports.
• MAC Address Extraction: The switch reads the header of the data frame and extracts the destination MAC address from it.
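The forwarding behaviour just described can be illustrated with a minimal learning-switch sketch in Python: the switch records which port each source MAC address was seen on and uses the destination MAC address to choose the outgoing port, flooding to all other ports when the destination is unknown. The MAC addresses and port numbers below are made-up examples.

```python
# Minimal sketch of a switch's forwarding (switching) table; addresses and ports are hypothetical.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3, 4]
        self.mac_table = {}         # destination MAC address -> outgoing port

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source MAC address was seen on.
        self.mac_table[src_mac] = in_port
        # Forward: use the table if the destination is known, otherwise flood.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

switch = LearningSwitch(ports=[1, 2, 3, 4])
print(switch.handle_frame("AA:00:00:00:00:01", "AA:00:00:00:00:02", in_port=1))  # unknown -> [2, 3, 4]
print(switch.handle_frame("AA:00:00:00:00:02", "AA:00:00:00:00:01", in_port=2))  # known   -> [1]
```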
Circuit Switching
• Circuit switching is a communication method where a dedicated
communication path, or circuit, is established between two devices
before data transmission begins.
• The circuit remains dedicated to the communication for the duration
of the session, and no other devices can use it while the session is in
progress.
• Circuit switching is commonly used in voice communication and
some types of data communication.
Circuit Switching
Advantages of Circuit Switching:
• Guaranteed bandwidth: Circuit switching provides a dedicated path for
communication, ensuring that bandwidth is guaranteed for the duration of
the call.
• Low latency: Circuit switching provides low latency because the path is
predetermined, and there is no need to establish a connection for each
packet.
• Predictable performance: Circuit switching provides predictable
performance because the bandwidth is reserved, and there is no
competition for resources.
• Suitable for real-time communication: Circuit switching is suitable for real-
time communication, such as voice and video, because it provides low
latency and predictable performance.
Circuit Switching
Disadvantages of Circuit Switching:
• Inefficient use of bandwidth: Circuit switching is inefficient because
the bandwidth is reserved for the entire duration of the call, even
when no data is being transmitted.
• Limited scalability: Circuit switching is limited in its scalability
because the number of circuits that can be established is finite, which
can limit the number of simultaneous calls that can be made.
• High cost: Circuit switching is expensive because it requires dedicated
resources, such as hardware and bandwidth, for the duration of the
call.
Packet Switching
• Packet switching is a communication method where data is divided
into smaller units called packets and transmitted over the network.
• Each packet contains the source and destination addresses, as well as
other information needed for routing.
• The packets may take different paths to reach their destination, and
they may be transmitted out of order or delayed due to network
congestion.
Packet Switching
Advantages of Packet Switching:
• Efficient use of bandwidth: Packet switching is efficient because
bandwidth is shared among multiple users, and resources are
allocated only when data needs to be transmitted.
• Flexible: Packet switching is flexible and can handle a wide range of
data rates and packet sizes.
• Scalable: Packet switching is highly scalable and can handle large
amounts of traffic on a network.
• Lower cost: Packet switching is less expensive than circuit switching
because resources are shared among multiple users.
Packet Switching
Disadvantages of Packet Switching:
• Higher latency: Packet switching has higher latency than circuit
switching because packets must be routed through multiple nodes,
which can cause delay.
• Packet loss: Packet switching can result in packet loss due to
congestion on the network or errors in transmission.
• Unsuitable for real-time communication: Packet switching is not
suitable for real-time communication, such as voice and video,
because of the potential for latency and packet loss.
Message Switching
• Message switching was developed as an alternative to circuit switching, before packet switching was introduced; it uses no dedicated path.
• In message switching, end-users communicate by sending and receiving messages that include the entire data to be shared. The message is treated as the smallest individual unit.
• Also, the sender and receiver are not directly connected. There are a
number of intermediate nodes that transfer data and ensure that the
message reaches its destination.
• Message switched data networks are hence called hop-by-hop systems.
Message Switching
Applications
• The store-and-forward method was implemented in telegraph
message switching centres.
• Although many major networks and systems are packet-switched or circuit-switched networks, their delivery processes can be based on message switching.
• For example, in most electronic mail systems the delivery process is
based on message switching, while the network is in fact either
circuit-switched or packet-switched.
Routing Algorithms
Routing
• It is the process of establishing the routes that data packets must
follow to reach the destination.
• In this process, a routing table is created which contains information
regarding routes that data packets follow.
• Various routing algorithms are used for the purpose of deciding which
route an incoming data packet needs to be transmitted on to reach
the destination efficiently.
Routing Algorithms
• A router is a networking device that forwards the packet based on the
information available in the packet header and forwarding table.
• The routing protocols use metrics such as hop count, bandwidth, delay, and current load on the path to determine the best path for packet delivery (see the sketch below).
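As a rough illustration of metric-based path selection, the sketch below keeps a small forwarding table of candidate routes per destination network and picks the next hop with the lowest hop count; the networks, next hops, and metrics are made up for the example.

```python
# Hypothetical routing table: destination network -> list of (next hop, hop count).
routes = {
    "10.0.1.0/24": [("192.168.0.2", 3), ("192.168.0.5", 2)],
    "10.0.2.0/24": [("192.168.0.7", 1)],
}

def best_next_hop(destination_network):
    """Choose the next hop with the smallest metric (here, hop count)."""
    candidates = routes.get(destination_network)
    if not candidates:
        return None                      # no known route to this destination
    return min(candidates, key=lambda route: route[1])[0]

print(best_next_hop("10.0.1.0/24"))      # 192.168.0.5 (2 hops beats 3 hops)
```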
Optimality Principle
• The optimality principle is a routing algorithm concept that helps
ensure data is transmitted along the most efficient path.
• It states that if router J is on the optimal path from router I to router
K, then the optimal path from J to K also falls along the same route.
• i.e.,
If a better route could be found between router J and router K, the
path from router I to router K via J would be updated via this route.
Thus, the optimal path from J to K will again lie on the optimal path
from I to K.
Optimality Principle
• The optimal path from one router to another may be the least cost
path, the least distance path, the least time path, the least hops path,
or a combination of any of the above.
Optimality Principle-Example
• Consider a network of routers, {G, H, I, J, K, L, M, N} as shown in the
figure. Let the optimal route from I to K be as shown via the green
path, i.e. via the route I-G-J-L-K.
• According to the optimality principle, the optimal path from J to K will be along the same route, i.e. J-L-K.
Optimality Principle-Example
• Now, suppose a better route from J to K is found, say along J-M-N-K. Consequently, we will also need to update the optimal route from I to K to I-G-J-M-N-K, since the previous route ceases to be optimal in this situation.
• This new optimal path is shown with orange lines in the following figure.
Optimality principle - Benefits
• It helps to minimize network congestion by ensuring that data is
transmitted over the most efficient path.
• This is particularly important in large-scale networks where
congestion can significantly impact the performance of the network.
Dijkstra’s Algorithm
Algorithm:
Step 1: Mark the source node current distance as 0 and all others as infinity.
Step 2: Set the node with the smallest current distance among the non-
visited nodes as the current node.
Step 3: For each neighbor, N, of the current node:
• Calculate the potential new distance by adding the current distance of the
current node with the weight of the edge connecting the current node to
N.
• If the potential new distance is smaller than the current distance of node N, update N’s current distance with the new distance.
Step 4: Mark the current node as visited and repeat Steps 2 and 3 until all nodes have been visited.
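The steps above can be written compactly in Python using a priority queue; the example graph and edge weights are arbitrary.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps node -> {neighbour: edge weight}."""
    # Step 1: source distance 0, all other distances infinity.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    pq = [(0, source)]                    # (current distance, node)
    visited = set()
    while pq:
        # Step 2: take the unvisited node with the smallest current distance.
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        # Step 3: relax every edge leaving the current node.
        for neighbour, weight in graph[node].items():
            new_dist = d + weight
            if new_dist < dist[neighbour]:
                dist[neighbour] = new_dist
                heapq.heappush(pq, (new_dist, neighbour))
    return dist

graph = {"A": {"B": 4, "C": 1}, "B": {"D": 1}, "C": {"B": 2, "D": 5}, "D": {}}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```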
Flooding
• Flooding is a simple routing technique in which every incoming packet is sent out on every outgoing line except the one it arrived on.
Advantages of Flooding
• It is very simple to set up and implement, since a router needs to know only its neighbours.
• All nodes which are directly or indirectly connected are visited, so there is no chance of any node being left out. This is a main criterion in the case of broadcast messages.
• Flooding always finds the shortest path, since every possible path is tried.
Quality of Service (QoS)
• It is basically the ability to provide different priorities to different applications, users, or data flows, in order to guarantee a certain level of performance to the flow of data.
• QoS is basically the overall performance of the computer network, mainly as it is seen by the users of the network.
Quality of Service (QoS)
Reliability
• It is one of the main characteristics that the flow needs. A lack of reliability means that a packet or an acknowledgement may be lost, which requires retransmission.
• Reliability becomes more important for electronic mail, file transfer, and
for internet access.
Delay
• Another characteristic of the flow is the delay in transmission between the
source and destination. During audio conferencing, telephony, video
conferencing, and remote conferencing there should be a minimum delay.
Quality of Service (QoS)
Jitter
• Jitter is the variation in time delay between when a signal is transmitted
and when it is received over a network connection.
• It's measured in milliseconds (ms) and is calculated by taking the average
difference between the expected arrival time of each packet and its actual
arrival time.
• A higher jitter value means a larger variation in delay, while a low jitter value means the variation is small.
• Jitter occurs as a result of network congestion, timing drift, and route changes. Too much jitter can degrade the quality of audio communication.
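To make the idea concrete, the sketch below estimates jitter from a few made-up arrival timestamps by comparing each inter-arrival gap with the expected sending interval (real protocols such as RTP use their own smoothing formula).

```python
# Hypothetical arrival timestamps (ms) for packets that were sent every 20 ms.
arrival_times_ms = [0.0, 21.0, 39.5, 61.0, 80.0]
expected_interval_ms = 20.0

# Inter-arrival gaps between consecutive packets.
gaps = [later - earlier for earlier, later in zip(arrival_times_ms, arrival_times_ms[1:])]

# Jitter here: average absolute deviation of each gap from the expected interval.
jitter_ms = sum(abs(gap - expected_interval_ms) for gap in gaps) / len(gaps)

print(gaps)       # [21.0, 18.5, 21.5, 19.0]
print(jitter_ms)  # (1.0 + 1.5 + 1.5 + 1.0) / 4 = 1.25 ms
```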
Quality of Service (QoS)
Bandwidth
• It is the maximum rate of data transfer across a network path.
• It's also known as network bandwidth, data bandwidth, or digital
bandwidth.
• QoS optimizes a network by managing its bandwidth and setting the
priorities for those applications which require more resources as
compared to other applications.
• It's commonly measured in bits per second (bps), but organizations
and internet service providers (ISPs) often measure it in megabits per
second (Mbps) or gigabits per second (Gbps).
Congestion
• Congestion is an important issue that can arise in a packet-switched network.
• Congestion is a situation in communication networks in which too many packets are present in a part of the subnet and performance degrades.
• Congestion in a network may occur when the load on the network
(i.e. the number of packets sent to the network) is greater than the
capacity of the network (i.e. the number of packets a network can
handle.). Network congestion occurs in case of traffic overloading.
• In other words, when too much traffic is offered, congestion sets in and performance degrades sharply.
Congestion Control
• Congestion Control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has occurred.
Open Loop Congestion Control
Retransmission Policy
• The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted.
• However, retransmission in general may increase congestion in the network, so a good retransmission policy is needed to prevent this.
• The retransmission policy and the retransmission timers need to be designed to optimize efficiency and, at the same time, prevent congestion.
• If the sender feels that a sent packet is lost or corrupted, the packet is retransmitted according to these timers.
Open Loop Congestion Control
Window Policy
• To implement window policy, selective reject window method is used for
congestion control.
• The Selective Reject method is preferred over the Go-Back-N window because in the Go-Back-N method, when the timer for a packet times out, several packets are resent, even though some may have arrived safely at the receiver. This duplication may make the congestion worse.
• The Selective Reject method sends only the specific lost or damaged packets.
• In a window of packets sent, only the specific packet that may have been lost is retransmitted, instead of the entire window.
Open Loop Congestion Control
Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect
congestion.
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Acknowledgments also add to the traffic load on the network. Thus, by
sending fewer acknowledgements we can reduce load on the network.
To implement it, several approaches can be used:
• A receiver may send an acknowledgement only if it has a packet to be
sent.
• A receiver may send an acknowledgement when a timer expires.
• A receiver may also decide to acknowledge only N packets at a time.
Open Loop Congestion Control
Discarding Policy
• A router may discard less sensitive packets when congestion is likely
to happen.
• Such a discarding policy may prevent congestion and at the same
time may not harm the integrity of the transmission.
• In case of audio file transmission, routers can discard less sensitive
packets to prevent congestion and also maintain the quality of the
audio file.
Open Loop Congestion Control
Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks.
• Switches in a flow first check the resource requirement of a flow before admitting it to the network.
• A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future congestion.
• In short, this QoS policy checks the resource requirement of a network flow, and the router should deny establishing a virtual circuit connection if there is a chance of congestion.
Closed Loop Congestion Control
• Closed loop congestion control mechanisms try to remove the
congestion after it happens.
The various methods used for closed loop congestion control are:
Closed Loop Congestion Control
Backpressure
• Backpressure is a technique in which a congested node stops receiving packets from its upstream node.
• It is a node-to-node congestion control technique that propagates in the opposite direction of the data flow.
• The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.
Closed Loop Congestion Control
• In the diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested due to the slowing down of the output data flow. Similarly, the 1st node may get congested and inform the source to slow down.
Closed Loop Congestion Control
Choke Packet
• In this method of congestion control, the congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion.
• Here, the congested node does not inform its upstream node about the congestion, as is done in the backpressure method.
• In the choke packet method, the congested node sends a warning directly to the source station, i.e. the intermediate nodes through which the packet has traveled are not warned.
Closed Loop Congestion Control
Implicit Signaling
• In implicit signaling, there is no communication between the congested nodes and the source. The source guesses that there is congestion in the network.
• For example, when a sender sends several packets and there is no acknowledgment for a while, one assumption is that the network is congested.
Closed Loop Congestion Control
Explicit signaling
• In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion.
• The difference between choke packets and explicit signaling is that the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
• Explicit signaling can occur in either forward or backward direction.
Congestion Control Principles
• Before the network can make Quality of service guarantees, it must
know what traffic is being guaranteed.
• One of the main causes of congestion is that traffic is often bursty.
• Traffic Shaping is a mechanism to control the amount and the rate of
traffic sent to the network.
• There are 2 types of traffic shaping algorithms:
• Leaky Bucket
• Token Bucket
Leaky Bucket
• The input rate can vary, but the output rate remains constant, enforced by a FIFO queue.
• Similarly, in networking, a technique called leaky bucket can smooth
out bursty traffic. Bursty chunks are stored in the bucket and sent out
at an average rate.
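A minimal simulation of the leaky-bucket idea, assuming made-up tick counts, bucket capacity, and output rate: bursty arrivals are queued in a finite bucket and drained at a constant rate each tick.

```python
from collections import deque

def leaky_bucket(arrivals, bucket_capacity, output_rate):
    """arrivals[i] = packets arriving at tick i; returns the packets sent per tick."""
    bucket = deque()
    sent_per_tick = []
    for arriving in arrivals:
        # Queue the arrivals; packets beyond the bucket capacity are dropped.
        for _ in range(arriving):
            if len(bucket) < bucket_capacity:
                bucket.append(1)
        # Leak: send at most output_rate packets per tick, whatever the burst was.
        sent = min(output_rate, len(bucket))
        for _ in range(sent):
            bucket.popleft()
        sent_per_tick.append(sent)
    return sent_per_tick

# A burst of 10 packets is smoothed to [3, 3, 2, 0, 0]; 2 packets were dropped at the full bucket.
print(leaky_bucket([10, 0, 0, 0, 0], bucket_capacity=8, output_rate=3))
```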
Token Bucket
• Assume the capacity of the bucket is c tokens and that tokens enter the bucket at the rate of r tokens per second. The system removes one token for every packet of data sent.
• The maximum number of packets that can enter the network during any time interval of length t is shown below.
Maximum number of packets = rt + c
• The maximum average rate for the token bucket is shown below.
Maximum average rate = (rt + c)/t packets per second
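The two formulas can be tried out directly; the token rate r and capacity c below are arbitrary, and the printout shows that the maximum average rate (rt + c)/t approaches r as the interval t grows.

```python
def max_packets(r, c, t):
    """Upper bound on packets that can enter the network in any interval of length t."""
    return r * t + c

def max_average_rate(r, c, t):
    """Maximum average rate over an interval of length t, in packets per second."""
    return (r * t + c) / t

r, c = 2, 10                     # hypothetical: 2 tokens/second, bucket holds 10 tokens
for t in (1, 5, 60):
    print(t, max_packets(r, c, t), round(max_average_rate(r, c, t), 2))
# 1 12 12.0   /   5 20 4.0   /   60 130 2.17  -> the average rate tends towards r
```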
Network Layer Protocols
• A network protocol is an accepted set of rules that govern data
communication between different devices in the network.
Types of Protocols
The protocols can be broadly classified into three major categories-
1.Communication
2.Management
3.Security
Network Layer Protocols
Communication
• Communication protocols are really important for the functioning of
a network. They are so crucial that it is not possible to have computer
networks without them. These protocols formally set out the rules
and formats through which data is transferred. These protocols
handle syntax, semantics, error detection, synchronization, and
authentication.
Example: HTTP, TCP, UDP, BGP, ARP, IP, DHCP
Network Layer Protocols
Management
These protocols assist in describing the procedures and policies that
are used in monitoring, maintaining, and managing the computer
network. These protocols also help in communicating these
requirements across the network to ensure stable communication.
Network management protocols can also be used for troubleshooting
connections between a host and a client.
Example: ICMP, IGMP, FTP, Telnet
Network Layer Protocols
Security
• These protocols secure the data in passage over a network. These
protocols also determine how the network secures data from any
unauthorized attempts to extract or review data.
• These protocols make sure that no unauthorized devices, users, or
services can access the network data. Primarily, these protocols
depend on encryption to secure data.
Example: HTTPS
Network Layer Protocols
IP(Internet Protocol):
• It is a communication network protocol
• In IP data is sent from one host to another over the internet.
• It is used for addressing and routing data packets so that they can
reach their destination.
Network Layer Protocols
ARP(Address Resolution Protocol):
• ARP is a protocol that helps in mapping logical (IP) addresses to the physical (MAC) addresses used in a local network.
• It is a communication network protocol.
• For mapping and maintaining a correlation between these logical and physical addresses, a table known as the ARP cache is used.
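A toy sketch of the ARP-cache idea: a table mapping logical (IP) addresses to physical (MAC) addresses that is consulted before sending a frame (real caches also time entries out, which is omitted here). The addresses are made up.

```python
# Hypothetical ARP cache: IP address -> MAC address.
arp_cache = {"192.168.1.20": "aa:bb:cc:dd:ee:20"}

def resolve(ip_address):
    """Return the cached MAC address, or indicate that an ARP request would be broadcast."""
    mac = arp_cache.get(ip_address)
    if mac is not None:
        return mac                                   # cache hit, no broadcast needed
    print(f"ARP request: who has {ip_address}?")     # would be broadcast on the LAN
    return None

print(resolve("192.168.1.20"))   # aa:bb:cc:dd:ee:20
print(resolve("192.168.1.99"))   # unknown -> ARP request would be sent
```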
Network Layer Protocols
ICMP (Internet Control Message Protocol):
• It is a layer 3 protocol that is used by network devices to forward
operational information and error messages.
• It is used for reporting congestion, network errors, diagnostic purposes, and timeouts.
• It is a management protocol.
Network Layer Protocols
IGMP(The Internet Group Management Protocol)
• It sets up one-to-many network connections.
• IGMP helps set up multicasting, meaning multiple computers can
receive data packets directed at one IP address.
• It is a management protocol.
Network Layer Protocols
RARP(Reverse Address Resolution Protocol)
• A protocol used to map a physical (MAC) address to an IP address.
• RARP is used to convert the Ethernet address to an IP address.
• It is available for the LAN technologies like FDDI, token ring LANs, etc.
• Although RARP was widely used in the past, it has largely been replaced by newer protocols such as DHCP (Dynamic Host Configuration Protocol).
• DHCP is a networking protocol that automatically assigns IP
addresses, subnet masks, Domain Name System (DNS) addresses, and
other network parameters to devices that connect to a network.
IP header
• The IP header is meta-information at the beginning of an IP packet.
• It carries information such as the IP version, the packet’s length, the source, and the destination.
• The IPv4 header is 20 to 60 bytes in length.
• It contains the information needed for routing and delivery.
• It consists of 13 fields, such as Version, Header Length, Total Length, Identification, Flags, Checksum, Source IP Address, and Destination IP Address. It provides the essential data needed to transmit the packet.
IP header
• Version: The first IP header field is a 4-bit version indicator. In IPv4, the value of its four bits is set to 0100, which is 4 in binary. If the router does not support the specified version, the packet is dropped.
• Internet Header Length: The Internet Header Length, shortly known as IHL, is 4 bits in size. It is also called HLEN (Header Length). This field gives the number of 32-bit words present in the header.
• Type of Service: The Type of Service field is also called Differentiated Services Code Point (DSCP). This field provides features related to quality of service, e.g. for data streaming or VoIP calls. The first 3 bits are the priority bits. It is used to specify how the datagram should be handled.
IP header
• Total Length: The total length is measured in bytes. The minimum size of an IP datagram is 20 bytes, and the maximum is 65,535 bytes. HLEN and Total Length can be used to calculate the size of the payload. All hosts are required to be able to accept 576-byte datagrams. If a datagram is too large for the hosts or links along the path, fragmentation is used.
• Identification: Identification is a 16-bit field that is used to uniquely identify the fragments of an IP datagram. Some have recommended using this field for other things, like adding information for packet tracing.
IP header
• IP Flags: Flags is a three-bit field that helps you control and identify fragments. The possible bits are:
Bit 0: reserved and has to be set to zero
Bit 1: Do Not Fragment
Bit 2: More Fragments
• Fragment Offset: The Fragment Offset represents the number of data bytes ahead of the particular fragment in the original datagram. It is specified in units of 8 bytes, which gives a maximum offset of 65,528 bytes.
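A small worked sketch of the 8-byte unit: splitting a hypothetical 4000-byte payload over a 1500-byte MTU gives fragments whose offset fields are 0, 185 and 370, i.e. byte positions 0, 1480 and 2960.

```python
def fragment_offsets(payload_len, mtu, header_len=20):
    """Return (offset field value, data bytes) for each fragment of a hypothetical datagram."""
    max_data = ((mtu - header_len) // 8) * 8        # fragment data must be a multiple of 8 bytes
    fragments = []
    byte_offset = 0
    while byte_offset < payload_len:
        data = min(max_data, payload_len - byte_offset)
        fragments.append((byte_offset // 8, data))  # the offset field counts 8-byte blocks
        byte_offset += data
    return fragments

print(fragment_offsets(4000, 1500))   # [(0, 1480), (185, 1480), (370, 1040)]
```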
IP header
• Time to Live: It is an 8-bit field that indicates the maximum time the datagram may live in the internet system. The lifetime is nominally measured in seconds; in practice, every router that processes the datagram decreases the TTL value by one, and when the value reaches zero the datagram is discarded. TTL is used so that undeliverable datagrams do not circulate forever. The value of TTL can be 0 to 255.
• Protocol: This IPv4 header field denotes which protocol is carried in the data portion of the datagram. For example, the value 6 indicates TCP, and 17 denotes the UDP protocol.
IP header
• Header Checksum: The next component is a 16-bit Header Checksum field, which is used to check the header for errors. Each router compares the checksum it computes over the header with the value of this field; when they do not match, the packet is discarded.
• Source Address: The source address is the 32-bit address of the source of the IPv4 packet.
• Destination Address: The destination address is also 32 bits in size and stores the address of the receiver.
IP header
• IP Options: This is an optional field of the IPv4 header, used when the value of IHL (Internet Header Length) is greater than 5. It contains values and settings related to security, record route, time stamp, etc. The list of options usually ends with an End of Options (EOL) marker.
• Data: This field carries the data from the upper protocol layer, which has handed the data over to the IP layer.
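To make the field layout concrete, here is a hedged sketch that packs a minimal 20-byte IPv4 header with Python's struct module, fills in the Internet checksum (the ones' complement of the ones' complement sum of the header's 16-bit words), and parses the fields back; the addresses and values are arbitrary test data.

```python
import struct
import socket

def internet_checksum(data: bytes) -> int:
    """Ones' complement of the 16-bit ones' complement sum of the data."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)       # fold carries back in
    return ~total & 0xFFFF

# Build a minimal header: version 4, IHL 5 (20 bytes), TTL 64, protocol 6 (TCP), checksum 0.
version_ihl = (4 << 4) | 5
src, dst = socket.inet_aton("192.0.2.1"), socket.inet_aton("192.0.2.2")
header = struct.pack("!BBHHHBBH4s4s", version_ihl, 0, 20, 0x1234, 0, 64, 6, 0, src, dst)

# Insert the checksum (bytes 10-11 of the header).
header = header[:10] + struct.pack("!H", internet_checksum(header)) + header[12:]

# Parse the fields back out.
fields = struct.unpack("!BBHHHBBH4s4s", header)
print("version:", fields[0] >> 4, " IHL:", fields[0] & 0xF, " total length:", fields[2])
print("TTL:", fields[5], " protocol:", fields[6])
print("source:", socket.inet_ntoa(fields[8]), " destination:", socket.inet_ntoa(fields[9]))
print("checksum verifies:", internet_checksum(header) == 0)   # sums to zero when intact
```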
IP Address
• An IP (Internet Protocol) address is a numerical label assigned to each device connected to a computer network that uses IP for communication.
• It also helps you to establish a virtual connection between a source and a destination.
• The IP address is also called an IP number or internet address.
• An IPv4 address consists of four numbers, each containing one to three digits, with a single dot (.) separating each number or set of digits.
IP Address
• An Internet Protocol (IP) address follows a set of rules designed to allow a device to access the internet and to serve as a unique means of identification for that device.
Parts of IP address
• An IP address has two parts: the network portion, which identifies the network that the device belongs to, and the host portion, which identifies the specific device within that network.