Network Layer


UNIT 3

NETWORK LAYER

• The Network Layer is the third layer of the OSI model.


• It handles the service requests from the transport layer and further
forwards the service request to the data link layer.
• The main role of the network layer is to move the packets from
sending host to the receiving host.
• The primary function of the network layer is to enable different
networks to be interconnected. It does this by forwarding packets to
network routers, which rely on algorithms to determine the best
paths for the data to travel.
NETWORK LAYER

Features of Network Layer


• The main responsibility of the Network layer is to carry the data packets
from the source to the destination without changing or using them.
• If the packets are too large for delivery, they are fragmented i.e., broken
down into smaller packets.
• It decides the route to be taken by the packets to travel from the source to
the destination among the multiple routes available in a network (also
called routing).
• The source and destination addresses are added to the data packets inside
the network layer.
NETWORK LAYER

Switching
• Switching is the process of transferring data packets from one device
to another in a network, or from one network to another, using
specific devices called switches.
• A computer user experiences switching all the time.
• For example, when accessing the Internet from your computer, every webpage you request is delivered to you through the switching of data packets.
NETWORK LAYER

Network Switching
• A switch is a dedicated piece of computer hardware that facilitates the process of switching, i.e., accepting incoming data packets and transferring them to their destination.
• A switch works at the Data Link layer of the OSI Model. A switch primarily
handles the incoming data packets from a source computer or network and
decides the appropriate port through which the data packets will reach
their target computer or network.
• A switch decides the port through which a data packet shall pass with the help of its destination MAC (Media Access Control) address. A switch does this efficiently by maintaining a switching table (also known as a forwarding table).
NETWORK LAYER

Process of Switching
• Frame Reception: The switch receives a data frame or packet from a computer connected to one of its ports.
• MAC Address Extraction: The switch reads the header of the data frame and collects the destination MAC address from it.
NETWORK LAYER

• Forwarding Decision and Switching Table Update:


• If the switch finds the destination MAC address of the frame in its switching table, it forwards the data frame to the corresponding port.
• If the destination MAC address does not exist in its forwarding table, it follows the flooding process: it sends the data frame to all its ports except the one it came from. The switch learns new MAC addresses from the source-address field of the frames it receives (for example, the destination's reply) and updates its forwarding table accordingly (a small sketch of this forwarding logic follows the list below).
• Frame Transition: Once the destination port is found, the switch sends the
data frame to that port and forwards it to its target computer/network.
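A minimal Python sketch of the learning-and-forwarding behaviour described above. The port numbers and MAC address strings are made-up examples, not part of the original slides.

# Minimal sketch of a learning switch's forwarding decision (illustrative only).
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports                # e.g. [1, 2, 3, 4]
        self.forwarding_table = {}        # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source MAC address was seen on.
        self.forwarding_table[src_mac] = in_port
        if dst_mac in self.forwarding_table:
            # Known destination: forward out of the recorded port only.
            return [self.forwarding_table[dst_mac]]
        # Unknown destination: flood out of every port except the ingress port.
        return [p for p in self.ports if p != in_port]

switch = LearningSwitch(ports=[1, 2, 3, 4])
print(switch.handle_frame("AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB", in_port=1))  # floods [2, 3, 4]
print(switch.handle_frame("BB:BB:BB:BB:BB:BB", "AA:AA:AA:AA:AA:AA", in_port=2))  # forwards [1]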
Network Switching-Types

Circuit Switching
• Circuit switching is a communication method where a dedicated
communication path, or circuit, is established between two devices
before data transmission begins.
• The circuit remains dedicated to the communication for the duration
of the session, and no other devices can use it while the session is in
progress.
• Circuit switching is commonly used in voice communication and
some types of data communication.
Circuit Switching
Advantages of Circuit Switching:
• Guaranteed bandwidth: Circuit switching provides a dedicated path for
communication, ensuring that bandwidth is guaranteed for the duration of
the call.
• Low latency: Circuit switching provides low latency because the path is
predetermined, and there is no need to establish a connection for each
packet.
• Predictable performance: Circuit switching provides predictable
performance because the bandwidth is reserved, and there is no
competition for resources.
• Suitable for real-time communication: Circuit switching is suitable for real-
time communication, such as voice and video, because it provides low
latency and predictable performance.
Circuit Switching
Disadvantages of Circuit Switching:
• Inefficient use of bandwidth: Circuit switching is inefficient because
the bandwidth is reserved for the entire duration of the call, even
when no data is being transmitted.
• Limited scalability: Circuit switching is limited in its scalability
because the number of circuits that can be established is finite, which
can limit the number of simultaneous calls that can be made.
• High cost: Circuit switching is expensive because it requires dedicated
resources, such as hardware and bandwidth, for the duration of the
call.
Packet Switching
• Packet switching is a communication method where data is divided
into smaller units called packets and transmitted over the network.
• Each packet contains the source and destination addresses, as well as
other information needed for routing.
• The packets may take different paths to reach their destination, and
they may be transmitted out of order or delayed due to network
congestion.
Packet Switching
Advantages of Packet Switching:
• Efficient use of bandwidth: Packet switching is efficient because
bandwidth is shared among multiple users, and resources are
allocated only when data needs to be transmitted.
• Flexible: Packet switching is flexible and can handle a wide range of
data rates and packet sizes.
• Scalable: Packet switching is highly scalable and can handle large
amounts of traffic on a network.
• Lower cost: Packet switching is less expensive than circuit switching
because resources are shared among multiple users.
Packet switching
Disadvantages of Packet Switching:
• Higher latency: Packet switching has higher latency than circuit
switching because packets must be routed through multiple nodes,
which can cause delay.
• Packet loss: Packet switching can result in packet loss due to
congestion on the network or errors in transmission.
• Unsuitable for real-time communication: Packet switching is not
suitable for real-time communication, such as voice and video,
because of the potential for latency and packet loss.
Message Switching
• Message switching was a technique developed as an alternative to circuit switching, before packet switching was introduced; it uses no dedicated path.
• In message switching, end-users communicate by sending and receiving messages that include the entire data to be shared. Messages are the smallest individual unit.
• Also, the sender and receiver are not directly connected. There are a
number of intermediate nodes that transfer data and ensure that the
message reaches its destination.
• Message switched data networks are hence called hop-by-hop systems.
Message Switching

They provide 2 distinct characteristics:


Store and forward – The intermediate nodes have the responsibility of transferring
the entire message to the next node.
• Each node must have storage capacity. A message will only be delivered if the
next hop and the link connecting it are both available, otherwise, it’ll be stored
indefinitely.
• A store-and-forward switch forwards a message only if sufficient resources are
available and the next hop is accepting data. This is called the store-and-forward
property.
Message delivery – This implies wrapping the entire information in a single
message and transferring it from the source to the destination node.
• Each message must have a header that contains the message routing information,
including the source and destination.
Message Switching

Advantages of Message Switching


• Since message switching can store a message when a communication channel is not available, it helps reduce traffic congestion in the network.
• In message switching, the data channels are shared by the network devices.
• It makes traffic management efficient by assigning priorities to the messages.
• Because the messages are delivered via a store and forward method, it is possible
to include priority in them.
• It allows for infinite message lengths.
• Unlike circuit switching, it does not necessitate the actual connection of source
and destination devices.
Message Switching

Disadvantages of Message Switching


• Message switching cannot be used for real-time applications as
storing messages causes delay.
• In message switching, the message has to be stored for which every
intermediate device in the network requires a large storing capacity.
• Users are frequently unaware of whether or not a message has been correctly delivered, which can cause problems for the communicating parties.
• Message switching does not create a dedicated path between the devices. It is not dependable communication because there is no direct relationship between sender and receiver.
Message Switching

Applications
• The store-and-forward method was implemented in telegraph
message switching centres.
• Although many major networks and systems are packet-switched or circuit-switched, their delivery processes can be based on message switching.
• For example, in most electronic mail systems the delivery process is
based on message switching, while the network is in fact either
circuit-switched or packet-switched.
Routing Algorithms
Routing
• It is the process of establishing the routes that data packets must
follow to reach the destination.
• In this process, a routing table is created which contains information
regarding routes that data packets follow.
• Various routing algorithms are used for the purpose of deciding which
route an incoming data packet needs to be transmitted on to reach
the destination efficiently.
Routing Algorithms
• A router is a networking device that forwards the packet based on the
information available in the packet header and forwarding table.
• The routing protocols use metrics such as hop count, bandwidth, delay, and current load on the path to determine the best path for packet delivery.
Optimality Principle
• The optimality principle is a routing algorithm concept that helps
ensure data is transmitted along the most efficient path.
• It states that if router J is on the optimal path from router I to router
K, then the optimal path from J to K also falls along the same route.
• i.e.,
If a better route could be found between router J and router K, the
path from router I to router K via J would be updated via this route.
Thus, the optimal path from J to K will again lie on the optimal path
from I to K.
Optimality Principle
• The optimal path from one router to another may be the least cost
path, the least distance path, the least time path, the least hops path,
or a combination of any of the above.
Optimality Principle-Example
• Consider a network of routers, {G, H, I, J, K, L, M, N} as shown in the
figure. Let the optimal route from I to K be as shown via the green
path, i.e. via the route I-G-J-L-K.
• According to the optimality principle, the optimal path from J to K will be along the same route, i.e. J-L-K.
Optimality Principle-Example
• Now, suppose a better route from J to K is found, say along J-M-N-K. Consequently, we will also need to update the optimal route from I to K as I-G-J-M-N-K, since the previous route ceases to be optimal in this situation.
• This new optimal path is shown with orange lines in the following figure.
Optimality principle - Benefits
• It helps to minimize network congestion by ensuring that data is
transmitted over the most efficient path.
• This is particularly important in large-scale networks where
congestion can significantly impact the performance of the network.

Note: The optimality principle is not always practical or feasible in all situations. For example, in some cases, the optimal route may be unavailable due to network failures or other issues. In such cases, routing algorithms may use alternative paths that are not optimal but still meet the minimum requirements for data transmission.
Shortest Path Routing Algorithms
• It refers to the algorithms that help to find the shortest path between
a sender and receiver for routing the data packets through the
network in terms of shortest distance, minimum cost, and minimum
time.
• It mainly works by building a graph of the subnet, with routers as nodes and edges as the communication lines connecting the nodes.
• Hop count is one of the parameters used to measure the distance.
• Hop count: the number that indicates how many routers are traversed. If the hop count is 6, the path crosses 6 routers/nodes and the edges connecting them.
Shortest Path Routing Algorithms
Dijkstra’s Algorithm
• Dijkstra's algorithm is used to find the minimum distance between a node and all other nodes in a given graph.
• Consider each node as a router and the graph as a network.
• It uses the weight of each edge, i.e., the distance between the nodes, to find a minimum-distance route.
Dijkstra’s Algorithm

Algorithm:
Step 1: Mark the source node current distance as 0 and all others as infinity.
Step 2: Set the node with the smallest current distance among the non-
visited nodes as the current node.
Step 3: For each neighbor, N, of the current node:
• Calculate the potential new distance by adding the current distance of the
current node with the weight of the edge connecting the current node to
N.
• If the potential new distance is smaller than the current distance of node
N, update N’s current distance with the new distance.
Dijkstra’s Algorithm

Step 4: Mark the current node as visited.
Step 5: If any unvisited node remains, go to Step 2 to find the next node with the smallest current distance and continue this process (a runnable sketch follows below).
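The following Python sketch implements the steps above with a priority queue; the example network and its edge weights are hypothetical.

import heapq

def dijkstra(graph, source):
    dist = {node: float("inf") for node in graph}   # Step 1: all distances = infinity
    dist[source] = 0                                # ... except the source, which is 0
    heap = [(0, source)]                            # (current distance, node)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)                  # Step 2: smallest-distance unvisited node
        if u in visited:
            continue
        visited.add(u)                              # Step 4: mark as visited
        for v, weight in graph[u].items():          # Step 3: relax each neighbour
            if d + weight < dist[v]:
                dist[v] = d + weight
                heapq.heappush(heap, (dist[v], v))
    return dist

# Example network of routers (hypothetical link weights).
network = {
    "A": {"B": 4, "C": 1},
    "B": {"A": 4, "C": 2, "D": 5},
    "C": {"A": 1, "B": 2, "D": 8},
    "D": {"B": 5, "C": 8},
}
print(dijkstra(network, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 8}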
Dijkstra’s Algorithm
It's used in many real-world applications
• Navigation: GPS navigation, guiding robots, and route planning for
transportation networks
• Computer networks: Routing data and managing traffic flow
• Artificial intelligence: Game playing and search engines
• Social networking: Identifying connections
• Telecommunications: Establishing connections
• Mapping: Pinpointing locations
• Scheduling: Resource allocation
Bellman Ford Algorithm
Step 1: First we initialize all vertices v in a distance array dist[] as INFINITY.
Step 2: Then we pick the source vertex (say vertex 0) and assign dist[0] = 0.
Step 3: Then we iteratively update the minimum distance to each node (dist[v]) by comparing it with the sum of the distance from the source node (dist[u]) and the edge weight (weight), repeating the pass over all edges N-1 times.
Step 4: To identify the presence of a negative edge cycle, do one more round of edge relaxation and check the following cases:
• A negative cycle exists if, for any edge u-v, the sum of the distance from the source node (dist[u]) and the edge weight (weight) is still less than the current distance to the destination node (dist[v]).
• If none of the edges satisfies this condition, there is no negative edge cycle.
Bellman Ford Algorithm
Calculate the shortest distances iteratively (a runnable sketch follows below):
Repeat |V| - 1 times (once for each node except s):
    Repeat for each edge connecting vertices u and v:
        If (dist[u] + weight of edge u-v) < dist[v], then
            update dist[v] = dist[u] + weight of edge u-v
• The array dist[] then contains the shortest distance from s to every other node.
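A Python sketch of this relaxation loop, with the extra pass from Step 4 to detect a negative cycle; the edge list and weights are hypothetical.

def bellman_ford(num_vertices, edges, source):
    dist = [float("inf")] * num_vertices    # Step 1: all distances = infinity
    dist[source] = 0                         # Step 2: source distance = 0
    # Step 3: relax every edge |V| - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, weight in edges:
            if dist[u] + weight < dist[v]:
                dist[v] = dist[u] + weight
    # Step 4: one more round; any further improvement means a negative cycle.
    for u, v, weight in edges:
        if dist[u] + weight < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# Directed edges as (u, v, weight) tuples with hypothetical weights.
edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)]
print(bellman_ford(4, edges, source=0))      # [0, -1, 1, 2]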
Bellman Ford Algorithm
Applications
✓ Network Routing: To find the shortest paths in routing tables, helping data
packets navigate efficiently across networks.
✓ GPS Navigation: GPS devices use Bellman-Ford to calculate the shortest or fastest routes between locations, aiding navigation apps, devices, and autonomous vehicles.
✓ Transportation and Logistics: To determine the optimal paths for vehicles in
transportation and logistics, minimizing fuel consumption and travel time.
Disadvantage
➢ The Bellman-Ford algorithm does not produce a correct answer if the graph contains a cycle whose edge weights sum to a negative value (a negative cycle) reachable from the source.
Routing Algorithms - Flooding
Flooding
• Network flooding is a static (non-adaptive) routing algorithm that sends an incoming packet out on every outgoing link except the one it arrived on.
• This technique is used to quickly distribute routing protocol updates
to every node in a large network.
Flooding

• Flooding tends to create an unbounded number of duplicate data packets, unless some measures are adopted to damp packet generation.
• It is wasteful if only a single destination needs the packet, since it delivers the data packet to all nodes irrespective of the destination.
Types of Flooding
• Uncontrolled flooding − Here, each router unconditionally transmits
the incoming data packets to all its neighbours.
• Controlled flooding − These use some method to control the transmission of packets to the neighbouring nodes. The two popular algorithms for controlled flooding are Sequence Number Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF); a small SNCF sketch follows this list.
• Selective flooding − Here, the routers transmit the incoming packets only along those paths that are heading approximately in the right direction, instead of along every available path.
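A Python sketch of Sequence Number Controlled Flooding, under the assumption that each node remembers the (source, sequence number) pairs it has already forwarded; the node names and topology are made up for illustration.

class FloodingNode:
    def __init__(self, name):
        self.name = name
        self.neighbours = []          # other FloodingNode objects
        self.seen = set()             # (source, seq) pairs already forwarded

    def receive(self, source, seq, payload, from_node=None):
        if (source, seq) in self.seen:
            return                    # duplicate: drop instead of re-flooding
        self.seen.add((source, seq))
        print(f"{self.name} received '{payload}' from {source} (seq {seq})")
        for n in self.neighbours:
            if n is not from_node:    # never send back on the incoming link
                n.receive(source, seq, payload, from_node=self)

# Tiny example topology: A-B, B-C, and A-C.
a, b, c = FloodingNode("A"), FloodingNode("B"), FloodingNode("C")
a.neighbours, b.neighbours, c.neighbours = [b, c], [a, c], [a, b]
a.receive("A", seq=1, payload="routing update")   # each node handles the packet exactly once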
Flooding

Advantages of Flooding
• It is very simple to set up and implement, since a router needs to know only its neighbours.
• All nodes which are directly or indirectly connected are visited, so no node can be left out. This is the main criterion in the case of broadcast messages.
• Flooding always finds the shortest path, since every possible path is tried.
Quality of service(QOS)
• It is basically the ability to provide different priority to different applications, users, or data flows, in order to guarantee a certain level of performance to a data flow.
• QoS is basically the overall performance of the computer network.
Mainly the performance of the network is seen by the user of the
Network.
Quality of service(QOS)
Reliability
• It is one of the main characteristics that the flow needs. If there is a lack of
reliability then it simply means losing any packet or losing an
acknowledgement due to which retransmission is needed.
• Reliability becomes more important for electronic mail, file transfer, and
for internet access.
Delay
• Another characteristic of the flow is the delay in transmission between the
source and destination. During audio conferencing, telephony, video
conferencing, and remote conferencing there should be a minimum delay.
Quality of service(QOS)
Jitter
• Jitter is the variation in time delay between when a signal is transmitted
and when it is received over a network connection.
• It's measured in milliseconds (ms) and is calculated by taking the average
difference between the expected arrival time of each packet and its actual
arrival time.
• A higher jitter value means large variation in delay, while low jitter means the variation is small (a small computation sketch follows this list).
• Jitter occurs as the result of network congestion, timing drift, and route changes. Too much jitter can degrade the quality of audio communication.
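A minimal Python sketch of the jitter calculation described above (average absolute difference between expected and actual arrival times); the timestamps are hypothetical values in milliseconds.

def average_jitter(expected_ms, actual_ms):
    diffs = [abs(actual - expected) for expected, actual in zip(expected_ms, actual_ms)]
    return sum(diffs) / len(diffs)

expected = [0, 20, 40, 60, 80]    # packets expected every 20 ms
actual   = [0, 22, 39, 65, 81]    # observed arrival times
print(f"jitter = {average_jitter(expected, actual):.1f} ms")   # jitter = 1.8 ms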
Quality of service(QOS)
Bandwidth
• It is the maximum rate of data transfer across a network path.
• It's also known as network bandwidth, data bandwidth, or digital
bandwidth.
• QoS optimizes a network by managing its bandwidth and setting the
priorities for those applications which require more resources as
compared to other applications.
• It's commonly measured in bits per second (bps), but organizations
and internet service providers (ISPs) often measure it in megabits per
second (Mbps) or gigabits per second (Gbps).
Congestion
• Congestion is an important issue that can arise in packet switched
network.
• Congestion is a situation in communication networks in which too many packets are present in a part of the subnet, causing performance to degrade.
• Congestion in a network may occur when the load on the network
(i.e. the number of packets sent to the network) is greater than the
capacity of the network (i.e. the number of packets a network can
handle.). Network congestion occurs in case of traffic overloading.
• In other words when too much traffic is offered, congestion sets in
and performance degrades sharply
Congestion Control
• Congestion Control refers to techniques and mechanisms that can
either prevent congestion, before it happens, or remove congestion.
Open Loop Congestion Control


• In this method, policies are used to prevent the congestion before it
happens.
• Congestion control is handled either by the source or by the
destination.
Open Loop Congestion Control

Retransmission Policy
• The sender retransmits a packet, if it feels that the packet it has sent is lost
or corrupted.
• However, retransmission in general may increase congestion in the network, so a good retransmission policy is needed to prevent congestion.
• The retransmission policy and the retransmission timers need to be
designed to optimize efficiency and at the same time prevent the
congestion.
• If the sender feels that a sent packet is lost or corrupted, the packet
needs to be retransmitted according to timers.
Open Loop Congestion Control

Window Policy
• To implement window policy, selective reject window method is used for
congestion control.
• Selective Reject method is preferred over Go-back-n window as in Go-back-
n method, when timer for a packet times out, several packets are resent,
although some may have arrived safely at the receiver. Thus, this
duplication may make congestion worse.
• Selective reject method sends only the specific lost or damaged packets.
• Within a window of sent packets, only the specific packet that may have been lost is retransmitted instead of the entire window.
Open Loop Congestion Control

Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect
congestion.
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Acknowledgments also add to the traffic load on the network. Thus, by
sending fewer acknowledgements we can reduce load on the network.
To implement it, several approaches can be used:
• A receiver may send an acknowledgement only if it has a packet to be
sent.
• A receiver may send an acknowledgement when a timer expires.
• A receiver may also decide to acknowledge only N packets at a time.
Open Loop Congestion Control

Discarding Policy
• A router may discard less sensitive packets when congestion is likely
to happen.
• Such a discarding policy may prevent congestion and at the same
time may not harm the integrity of the transmission.
• In case of audio file transmission, routers can discard less sensitive
packets to prevent congestion and also maintain the quality of the
audio file.
Open Loop Congestion Control

Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks.
• Switches in a flow first check the resource requirement of a flow before admitting it to the network.
• A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
• This QoS policy thus checks the resource requirements of a network flow, and the router denies establishing a virtual-circuit connection if there is a chance of congestion.
Closed loop congestion control
• Closed loop congestion control mechanisms try to remove the
congestion after it happens.
The various methods used for closed loop congestion control are:
Closed Loop Congestion Control

Backpressure
• Backpressure is a technique in which a congested node stops
receiving packets from upstream node.
• It is a node-to-node congestion control technique that propagates in the opposite direction of the data flow.
• The backpressure technique can be applied only to virtual-circuit networks, in which each node has information about its upstream node.
Closed Loop Congestion Control

• In the diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested due to the slowing down of the output data flow. Similarly, the 1st node may get congested and inform the source to slow down.
Closed Loop Congestion Control

• The backpressure technique can be applied only to virtual circuit networks.


In such virtual circuit each node knows the upstream node from which a
data flow is coming.
• In this method of congestion control, the congested node stops receiving
data from the immediate upstream node or nodes.
• This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes.
• As shown in fig node 3 is congested and it stops receiving packets and
informs its upstream node 2 to slow down. Node 2 in turns may be
congested and informs node 1 to slow down. Now node 1 may create
congestion and informs the source node to slow down. In this way the
congestion is alleviated. Thus, the pressure on node 3 is moved backward
to the source to remove the congestion.
Closed Loop Congestion Control

Choke Packet
• In this method of congestion control, congested router or node sends
a special type of packet called choke packet to the source to inform it
about the congestion.
• Here, congested node does not inform its upstream node about the
congestion as in backpressure method.
• In choke packet method, congested node sends a warning directly to
the source station i.e. the intermediate nodes through which the
packet has traveled are not warned.
Closed Loop Congestion Control

Implicit Signaling
• In implicit signaling, there is no communication between the
congested nodes and the source. The source guesses that there is
congestion in a network.
• For example, when the sender sends several packets and there is no acknowledgement for a while, one assumption is that there is congestion.
Closed Loop Congestion Control

Explicit signaling
• In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion.
• The difference between choke packets and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
• Explicit signaling can occur in either the forward or the backward direction.
Congestion Control Principles
• Before the network can make Quality of service guarantees, it must
know what traffic is being guaranteed.
• One of the main causes of congestion is that traffic is often bursty.
• Traffic Shaping is a mechanism to control the amount and the rate of
traffic sent to the network.
• There are 2 types of traffic shaping algorithms:
• Leaky Bucket
• Token Bucket
Leaky Bucket

• Suppose we have a bucket in which we are pouring water, at random


points in time, but we have to get water at a fixed rate, to achieve this
we will make a hole at the bottom of the bucket.
• This will ensure that the water coming out is at some fixed rate, and
also if the bucket gets full, then we will stop pouring water into it.
Leaky Bucket

• The input rate can vary, but the output rate remains constant by FIFO
queue.
• Similarly, in networking, a technique called leaky bucket can smooth
out bursty traffic. Bursty chunks are stored in the bucket and sent out
at an average rate.
Leaky Bucket

• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• In practice, the bucket is a finite queue that outputs at a finite rate (see the sketch below).
• When the bucket overflows, the packets get discarded and the host has to resend the lost packets.
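A Python sketch of this behaviour: a finite FIFO queue that accepts bursty arrivals and releases packets at a constant rate, dropping arrivals that overflow. The capacity and leak rate are made-up example values.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity          # maximum packets the bucket can hold
        self.leak_rate = leak_rate        # packets sent per tick (constant output rate)
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)     # bursty arrival stored in the bucket
            return True
        return False                      # bucket full: packet discarded

    def tick(self):
        # Each tick, at most leak_rate packets leave at the constant output rate.
        count = min(self.leak_rate, len(self.queue))
        return [self.queue.popleft() for _ in range(count)]

bucket = LeakyBucket(capacity=5, leak_rate=2)
for i in range(8):                        # a burst of 8 packets arrives at once
    bucket.arrive(f"pkt{i}")              # pkt5..pkt7 overflow and are dropped
print(bucket.tick())                      # ['pkt0', 'pkt1']
print(bucket.tick())                      # ['pkt2', 'pkt3']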
Token Bucket

1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If a packet is ready to be sent, a token is removed from the bucket and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
Token Bucket

• Assume the capacity of the bucket is c tokens and tokens enter the bucket at the rate of r tokens per second. The system removes one token for every packet of data sent.
• The maximum number of packets that can enter the network during any time interval of length t is shown below (see also the sketch that follows):
Maximum number of packets = rt + c
• The maximum average rate for the token bucket is shown below:
Maximum average rate = (rt + c)/t packets per second
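A Python sketch of a token bucket with capacity c and fill rate r, illustrating that at most r*t + c packets can be sent over any interval of length t. The numbers are illustrative only.

class TokenBucket:
    def __init__(self, capacity_c, rate_r):
        self.capacity = capacity_c        # c: maximum tokens the bucket can hold
        self.rate = rate_r                # r: tokens added per second
        self.tokens = capacity_c          # start with a full bucket

    def add_tokens(self, seconds):
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def send(self, packets):
        # Send as many packets as there are tokens; the rest must wait.
        allowed = min(packets, int(self.tokens))
        self.tokens -= allowed
        return allowed

bucket = TokenBucket(capacity_c=10, rate_r=5)
print(bucket.send(20))        # 10: the saved-up tokens allow an initial burst of c packets
bucket.add_tokens(seconds=2)  # after t = 2 s, r*t = 10 new tokens have accumulated
print(bucket.send(20))        # 10: in total r*t + c = 5*2 + 10 = 20 packets over the interval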
Network Layer Protocols
• A network protocol is an accepted set of rules that govern data
communication between different devices in the network.
Types of Protocols
The protocols can be broadly classified into three major categories-
1.Communication
2.Management
3.Security
Network Layer Protocols
Communication
• Communication protocols are really important for the functioning of
a network. They are so crucial that it is not possible to have computer
networks without them. These protocols formally set out the rules
and formats through which data is transferred. These protocols
handle syntax, semantics, error detection, synchronization, and
authentication.
Example: HTTP, TCP, UDP, BGP, ARP, IP, DHCP
Network Layer Protocols
Management
These protocols assist in describing the procedures and policies that
are used in monitoring, maintaining, and managing the computer
network. These protocols also help in communicating these
requirements across the network to ensure stable communication.
Network management protocols can also be used for troubleshooting
connections between a host and a client.
Example: ICMP,IGMP, FTP, Telnet
Network Layer Protocols
Security
• These protocols secure the data in passage over a network. These
protocols also determine how the network secures data from any
unauthorized attempts to extract or review data.
• These protocols make sure that no unauthorized devices, users, or
services can access the network data. Primarily, these protocols
depend on encryption to secure data.
Example: HTTPS
Network Layer Protocols
IP(Internet Protocol):
• It is a communication network protocol
• In IP data is sent from one host to another over the internet.
• It is used for addressing and routing data packets so that they can
reach their destination.
Network Layer Protocols
ARP(Address Resolution Protocol):
• ARP is a protocol that helps in mapping logical (IP) addresses to the physical (MAC) addresses used in a local network.
• It is a communication network protocol
• For mapping and maintaining a correlation between these logical and
physical addresses a table known as ARP cache is used.
Network Layer Protocols
ICMP (Internet Control Message Protocol):
• It is a layer 3 protocol that is used by network devices to forward operational information and error messages.
• It is used for reporting congestion, network errors, diagnostic purposes, and timeouts.
• It is a management protocol.
Network Layer Protocols
IGMP(The Internet Group Management Protocol)
• It sets up one-to-many network connections.
• IGMP helps set up multicasting, meaning multiple computers can
receive data packets directed at one IP address.
• It is a management protocol.
Network Layer Protocols
RARP(Reverse Address Resolution Protocol)
• A protocol used to map a physical (MAC) address to an IP address.
• RARP is used to convert the Ethernet address to an IP address.
• It is available for the LAN technologies like FDDI, token ring LANs, etc.
• RARP was widely used in the past, but it has largely been replaced by newer protocols such as DHCP (Dynamic Host Configuration Protocol).
• DHCP is a networking protocol that automatically assigns IP
addresses, subnet masks, Domain Name System (DNS) addresses, and
other network parameters to devices that connect to a network.
IP header
• The IP header is meta-information at the beginning of an IP packet.
• It carries information such as the IP version, the packet's length, the source, and the destination.
• The IPv4 header is 20 to 60 bytes in length.
• It contains the information needed for routing and delivery.
• It consists of 13 fields such as version, header length, total length, identification, flags, checksum, source IP address, and destination IP address. It provides the essential data needed to transmit the packet.
IP header
• Version: The first IP header field is a 4-bit version indicator. In IPv4, the value of its four bits is set to 0100, which is 4 in binary. If the router does not support the specified version, the packet is dropped.
• Internet Header Length: The Internet header length, known as IHL or HLEN, is 4 bits in size. It shows how many 32-bit words are present in the header.
• Type of Service: The Type of Service field is also called the Differentiated Services Code Point or DSCP. This field provides features related to quality of service, for example for data streaming or VoIP calls. The first 3 bits are the priority bits. It is also used to specify how the datagram should be handled.
IP header
• Total Length: The total length is measured in bytes. The minimum size of an IP datagram is 20 bytes and the maximum is 65535 bytes. HLEN and Total Length can be used to calculate the size of the payload. All hosts are required to be able to accept 576-byte datagrams. If a datagram is too large for the hosts or links in the network, fragmentation is used.
• Identification: Identification is a 16-bit field used to uniquely identify the fragments of an IP datagram. Some have recommended using this field for other things, like adding information for packet tracing.
IP header
• IP Flags: Flag is a three-bit field that helps you to control and identify
fragments. The following can be their possible configuration:
Bit 0: is reserved and has to be set to zero
Bit 1: means do not fragment
Bit 2: means more fragments.
• Fragment Offset: The fragment offset represents the number of data bytes ahead of the particular fragment in the original datagram. It is specified in units of 8 bytes, giving a maximum offset of 65,528 bytes (see the fragmentation sketch below).
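A small Python sketch of how a datagram is split and how each fragment's offset is expressed in 8-byte units; the payload size, MTU, and header length are hypothetical example values.

def fragment(payload_len, mtu, header_len=20):
    max_data = (mtu - header_len) // 8 * 8     # fragment data must be a multiple of 8 bytes
    fragments, sent = [], 0
    while sent < payload_len:
        size = min(max_data, payload_len - sent)
        more = sent + size < payload_len        # MF flag: set on every fragment but the last
        fragments.append({"offset": sent // 8, "length": size, "more_fragments": more})
        sent += size
    return fragments

for frag in fragment(payload_len=4000, mtu=1500):
    print(frag)
# {'offset': 0, 'length': 1480, 'more_fragments': True}
# {'offset': 185, 'length': 1480, 'more_fragments': True}
# {'offset': 370, 'length': 1040, 'more_fragments': False}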
IP header
• Time to Live: It is an 8-bit field that indicates the maximum time the datagram may live in the internet system. Originally the duration was measured in seconds; in practice, every router that processes the datagram decreases the TTL value by one, and when the TTL reaches zero the datagram is discarded. TTL ensures that undeliverable datagrams are eventually removed automatically. The value of TTL can be 0 to 255.
• Protocol: This IPv4 header field denotes the protocol carried in the data portion of the datagram. For example, the value 6 is used to indicate TCP, and 17 is used to denote UDP.
IP header
• Header Checksum: The next component is a 16-bit header checksum field, which is used to check the header for errors. Each router recomputes the checksum of the IP header and compares it with the value in this field; when they do not match, the packet is discarded (a checksum-verification sketch appears after the field list below).
• Source Address: The source address is the 32-bit address of the sender of the IPv4 packet.
• Destination Address: The destination address is also 32 bits in size and stores the address of the receiver.
IP header
• IP Options: This is an optional field of the IPv4 header, used when the value of IHL (Internet Header Length) is greater than 5. It contains values and settings related to security, record route, time stamp, etc. The list of options usually ends with an End of Options (EOL) marker.
• Data: This field stores the data from the protocol layer, which has
handed over the data to the IP layer.
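A Python sketch that parses the fixed 20-byte IPv4 header and verifies its 16-bit one's-complement checksum, tying together the fields described above. The sample header built in the example is hypothetical.

import struct

def verify_checksum(header_bytes):
    # Sum the header as 16-bit words, folding carries back in (one's complement).
    total = sum(struct.unpack("!%dH" % (len(header_bytes) // 2), header_bytes))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF            # a valid header sums to all ones

def parse_header(header_bytes):
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", header_bytes[:20])
    return {
        "version": ver_ihl >> 4,          # 4 for IPv4
        "ihl_words": ver_ihl & 0x0F,      # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                # e.g. 6 = TCP, 17 = UDP
        "checksum_ok": verify_checksum(header_bytes),
    }

# Build a minimal example header with checksum 0, fill the checksum in, then verify.
hdr = struct.pack("!BBHHHBBHII", 0x45, 0, 20, 1, 0, 64, 6, 0, 0xC0000201, 0xC0000202)
total = sum(struct.unpack("!10H", hdr))
while total > 0xFFFF:
    total = (total & 0xFFFF) + (total >> 16)
hdr = hdr[:10] + struct.pack("!H", ~total & 0xFFFF) + hdr[12:]
print(parse_header(hdr))                  # ... 'checksum_ok': True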
IP Address
• An IP (Internet Protocol) address is a numerical label assigned to the
devices connected to a computer network that uses the IP for
communication.
• It also helps you to develop a virtual connection between a
destination and a source.
• The IP address is also called IP number or internet address.
• An IPv4 address consists of four numbers, each containing one to three digits, with a single dot (.) separating each number or set of digits.
IP Address
• Internet Protocol address (IP address) is a set of rules and a method
designed to allow the device to access the internet and serve as a
unique identification medium.
Parts of IP address

IP Address is divided into two parts:


Prefix: The prefix part of the IP address identifies the physical network to which the computer is attached. The prefix is also known as the network address.
Suffix: The suffix part identifies the individual computer on the network. The suffix is also called the host address.
IPV4
• An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a device (computer or router) to the internet.
• The address space of IPv4 is 2^32, i.e., about 4.3 billion addresses.

• Example: 10.10.10.1 (32-bit address: 4 bytes of 8 bits each).

• Address range: 0.0.0.0 to 255.255.255.255 (each octet takes one of 2^8 = 256 values).


IPv4 address notation
• Binary notation (base 2)
• Dotted-decimal notation (base 256)
Invalid IPv4 addresses
• 221.34.7.8.20 (five octets instead of four)
• 75.41.305.14 (305 exceeds 255)
• 11100010.23.14.67 (mixes binary and decimal notation)
• 111.56.045.78 (leading zero is not allowed in dotted-decimal notation)
A small validator that flags each of these is sketched below.
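A Python sketch of a dotted-decimal validator that rejects each of the invalid examples above (wrong octet count, value above 255, non-decimal digits, leading zeros).

def is_valid_ipv4(address):
    octets = address.split(".")
    if len(octets) != 4:                                  # must be exactly four octets
        return False
    for octet in octets:
        if not octet.isdigit():                           # decimal digits only
            return False
        if len(octet) > 1 and octet.startswith("0"):      # no leading zeros (e.g. "045")
            return False
        if int(octet) > 255:                              # each octet is 0..255
            return False
    return True

for addr in ["221.34.7.8.20", "75.41.305.14", "11100010.23.14.67", "111.56.045.78", "10.10.10.1"]:
    print(addr, is_valid_ipv4(addr))      # only 10.10.10.1 is valid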
Classful addressing in IPV4
• It is a concept that divides the IPv4 address space into 5 classes.
• Class A
• Class B
• Class C
• Class D
• Class E
Classful addressing in IPV4
• Each of these classes has a valid range of IP addresses.
• Classes D and E are reserved for multicast and experimental purposes
respectively. The order of bits in the first octet determines the classes
of the IP address. The IPv4 address is divided into two parts:
Network ID
Host ID
• The class of IP address is used to determine the bits used for network
ID and host ID and the number of total networks and hosts possible in
that particular class.
Classful addressing in IPV4
Class A
• IP addresses belonging to class A are assigned to the networks that
contain a large number of hosts.
• The network ID is 8 bits long.
• The host ID is 24 bits long.
Classful addressing in IPV4
Class B
• IP address belonging to class B is assigned to networks that range
from medium-sized to large-sized networks.
• The network ID is 16 bits long.
• The host ID is 16 bits long.
Classful addressing in IPV4
Class C
• IP addresses belonging to class C are assigned to small-sized
networks.
• The network ID is 24 bits long.
• The host ID is 8 bits long.
Classful addressing in IPV4
Class D
• IP address belonging to class D is reserved for multi-casting.
• The higher-order bits of the first octet of IP addresses belonging to class D are always set to 1110.
• The remaining bits identify the multicast group address that interested hosts recognize.
Classful addressing in IPV4
Class E
• IP addresses belonging to class E are reserved for experimental and research purposes (a sketch that classifies addresses by their first octet follows below).
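A Python sketch that determines the class of an IPv4 address from the leading bits of its first octet, as described in the slides above; the sample addresses are arbitrary.

def ip_class(address):
    first_octet = int(address.split(".")[0])
    if first_octet < 128:     # leading bit  0    -> Class A (8-bit network ID)
        return "A"
    if first_octet < 192:     # leading bits 10   -> Class B (16-bit network ID)
        return "B"
    if first_octet < 224:     # leading bits 110  -> Class C (24-bit network ID)
        return "C"
    if first_octet < 240:     # leading bits 1110 -> Class D (multicast)
        return "D"
    return "E"                # leading bits 1111 -> Class E (experimental)

for addr in ["10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.5", "250.1.2.3"]:
    print(addr, "-> Class", ip_class(addr))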
IPV4 HEADER
• VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4, i.e. 0100.
• HLEN: IP header length (4 bits), which is the number of 32-bit words in the header. The minimum value for this field is 5 and the maximum is 15.
• Type of Service (8 bits, QoS): Low Delay (D), High Throughput (T), Reliability (R), Min Cost (C).
IPV4 HEADER
• Identification: Unique packet ID for identifying the group of fragments of a single IP datagram (16 bits).
• Flags: 3 flags of 1 bit each: reserved bit (must be zero), do-not-fragment flag, more-fragments flag (in that order).
• Fragment Offset: Represents the number of data bytes ahead of the particular fragment in the particular datagram. Specified in units of 8 bytes, which gives a maximum value of 65,528 bytes.
• Time to Live: The datagram's lifetime (8 bits). It prevents the datagram from looping through the network by restricting the number of hops taken by a packet before it is delivered to the destination.
IPV4 HEADER
• Protocol: The protocol to which the data is to be passed (8 bits).
• Header Checksum: 16-bit header checksum for checking errors in the datagram header.
• Source IP address: 32-bit IP address of the sender.
• Destination IP address: 32-bit IP address of the receiver.
• Option: Optional information such as source route and record route, used by the network administrator to check whether a path is working or not.
IPV6
• The primary reason to make the change is due to IPv6 addressing.
• IPv4 is based on 32-bit addressing, limiting it to a total of about 4.3 billion addresses. IPv6 is based on 128-bit addressing, which provides about 3.4 x 10^38 (340 undecillion) addresses.
• Having more addresses has grown in importance with the expansion
of smart devices and connectivity.
• IPv6 provides more than enough globally unique IP addresses for
every networked device currently on the planet, helping ensure
providers can keep pace with the expected proliferation of IP-based
devices.
IPV6
• IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the address-exhaustion problem of IPv4.
• IPv6 is a 128-bit address having an address space of 2^128, which is far bigger than that of IPv4.
• IPv6 addresses are written in hexadecimal, in groups separated by colons (:).
Addressing methods
• In IPv6 representation, we have three addressing methods :
• Unicast
• Multicast
• Anycast
Addressing methods
Unicast Address
Unicast Address identifies a single network interface. A packet sent to a
unicast address is delivered to the interface identified by that address.
Multicast Address
• A multicast address is used by multiple hosts, called a group, which acquires a multicast destination address.
• These hosts need not be geographically together. If any packet is sent to this multicast address, it will be distributed to all interfaces corresponding to that multicast address.
Addressing methods
Anycast Address
• Anycast Address is assigned to a group of interfaces. Any packet sent
to an anycast address will be delivered to only one member interface
(mostly nearest host possible).

Note: Broadcast is not defined in IPv6.
