Introduction to Computer Networks
A computer network is a communication network established between many electronic devices (not necessarily only computers) for sharing resources and data. Such a network is established using physical links (cables, optical fiber, etc.) or wireless links (Wi-Fi, Bluetooth, etc.). We shall discuss the basic terminologies of computer networks in this tutorial, and then understand its requirements and the goals it needs to attain.
Open system - A system that is connected to the network and is ready for
communication.
Closed system - A system that is not connected to the network and can’t be
communicated with.
The layout pattern in which devices are interconnected is called the network topology, such as Bus, Star, Mesh, Ring and Daisy chain.
OSI - OSI stands for Open Systems Interconnection. It is a reference model that specifies standards for communication protocols and the functionalities of each layer.
Protocol - A protocol is a set of rules or algorithms that defines the way two entities communicate across the network. Different protocols are defined at each layer of the OSI model; a few such protocols are TCP, IP, UDP, ARP, DHCP, FTP and so on.
Network Criteria - The criteria that have to be met by a computer network are:
1. Performance - It is measured in terms of:
o Transit time: the time for a message to travel from one device to another.
o Response time: the elapsed time between an inquiry and a response.
2. Reliability - It is measured in terms of:
o Frequency of failure
o Recovery from failures
o Robustness during catastrophe
3. Security - It means protecting data from unauthorized access.
4. Flexible access - Files can be accessed from any computer in the network. A project can be begun on one computer and finished on another.
Transmission Modes
Transmission Modes determine how data flows between two devices in a computer network. There are 3 transmission modes in computer networks:
1. Simplex - data flows in one direction only.
2. Half-Duplex - data flows in both directions, but only one direction at a time.
3. Full-Duplex - data flows in both directions simultaneously.
Network Topologies
The arrangement of nodes in a network generally follows some pattern or organization. Each of these patterns has its own set of advantages and disadvantages. Such arrangements are collectively referred to as network topologies. Some of the popular network topologies are as follows:
Mesh
Key points:
Star
Key points:
1. Easy & cheap installation (n cables required); each device needs only 1 port.
2. Single point-of-failure (central node).
Bus
Key points:
Ring
Key points:
1. Easy & Cheap Installation (1 line).
2. Difficulty in Troubleshooting.
3. Addition/Removal of nodes disturbs the topology.
Hybrid
Key points:
OSI Model - It stands for Open Systems Interconnection. It comprises 7 layers with the following responsibilities (starting from the lowest layer):
The packet received from the Network layer is further divided into frames depending on the frame size of the NIC (Network Interface Card). The DLL also encapsulates the sender's and receiver's MAC addresses in the header.
Here the router is used to provide the connection in wireless form, which is then connected to the internet. All the one-to-one connections are again handled by the DLL. The setup is called a WLAN, as the devices are connected in a Wireless Local Area Network. This network might have collisions.
1. Framing: Framing is a function of the data link layer. It provides a way for a
sender to transmit a set of bits that are meaningful to the receiver. This can
be accomplished by attaching special bit patterns to the beginning and end
of the frame.
2. Physical addressing: After creating frames, Data link layer adds physical
addresses (MAC address) of sender and/or receiver in the header of each
frame.
3. Error Detection: Data link layer provides the mechanism of error control in
which it detects and retransmits damaged or lost frames.
4. Error and Flow Control: The data rate must be constant on both sides else
the data may get corrupted thus, flow control coordinates that amount of
data that can be sent before receiving acknowledgement.
5. Access control: When a single communication channel is shared by multiple
devices, MAC sub-layer of data link layer helps to determine which device
has control over the channel at a given time.
* A packet in the Data Link layer is referred to as a frame.
** The Data Link layer is handled by the NIC (Network Interface Card) and the device drivers of the host machines.
*** Switches & bridges are Data Link layer devices.
Transmission and Propagation Delay
Delays in Packet switching :
1. Transmission Delay
2. Propagation Delay
3. Queuing Delay
4. Processing Delay
Transmission Delay:
Time taken to put a packet onto the link. In other words, it is simply the time required to put the data bits on the wire/communication medium. It depends on the length of the packet and the bandwidth of the network.
For example, let the link bandwidth of a network be 100 bits/second and the packet length be 1000 bits. Then the transmission delay for this network will be 1000/100 = 10 seconds.
Propagation delay: Time taken by the first bit to travel from the sender end to the receiver end of the link. In other words, it is simply the time required for the bits to reach the destination from the start point. The factors on which propagation delay depends are distance and propagation speed.
Propagation delay = distance/propagation speed = d/s
Let's take another example with the distance as 30 km and the velocity being 3 x 10^8 metres/second.
Therefore, the propagation delay = distance/velocity = (30 * 10^3)/(3 * 10^8) = 0.1 millisecond.
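For a quick check, both numbers above can be reproduced with a few lines of Python (the values are taken directly from the two worked examples; nothing else is assumed):

    packet_bits = 1000          # packet length from the first example
    bandwidth_bps = 100         # link bandwidth in bits/second
    transmission_delay = packet_bits / bandwidth_bps      # 10.0 seconds

    distance_m = 30 * 10**3     # 30 km from the second example
    speed_mps = 3 * 10**8       # propagation speed of the medium
    propagation_delay = distance_m / speed_mps            # 0.0001 s = 0.1 ms

    print(transmission_delay, propagation_delay)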
Queuing Delay: Queuing delay is the time a packet waits in a queue until it can be processed. It depends on congestion. It is the time difference between when the packet arrived at the destination and when the packet data was processed or executed. It may be caused mainly by three reasons, i.e. originating switches, intermediate switches or call-receiver servicing switches.
(In the transmission-delay formula above, L = size of packet and R = bandwidth, so the transmission delay equals L/R.)
Processing Delay : Processing delay is the time it takes routers to process the
packet header. Processing of packets helps in detecting bit-level errors that occur
during transmission of a packet to the destination. Processing delays in high-speed
routers are typically on the order of microseconds or less.
In simple words, it is just the time taken to process packets.
With store-and-forward switching, each intermediate router adds one more transmission delay, so (N-1) intermediate routers add an extra (N-1)*(Transmission delay).
Characteristics
• Used in Connection-oriented communication.
• It offers error and flow control
• It is used in Data Link and Transport Layers
• Stop and Wait ARQ mainly implements Sliding Window Protocol concept
with Window Size 1
Useful Terms:
• Propagation Delay: Amount of time taken by a packet to make a physical
journey from one router to another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
Receiver:
Rule 1) Send an acknowledgement only after receiving and consuming the data packet.
Rule 2) After consuming a packet, an acknowledgement must be sent (flow control).
Problems :
1. Lost Data
2. Lost Acknowledgement:
The Stop and Wait ARQ solves the problems above, but may cause big performance issues, as the sender always waits for an acknowledgement even if it has the next packet ready to send. Consider a situation where you have a high-bandwidth connection and the propagation delay is also high (you are connected to some server in some other country through a high-speed connection). To solve this problem, we can send more than one packet at a time, with larger sequence numbers.
So Stop and Wait ARQ may work fine where the propagation delay is very small, for example LAN connections, but it performs badly for distant connections such as satellite links.
Efficiency: Stop and Wait is a flow control protocol. In which the sender sends one
packet and waits for the receiver to acknowledge and then it will send the next
packet. In case if the acknowledgement is not received, the sender will retransmit
the packet. This is the simplest one and easy to implement. but the main
disadvantage is the efficiency is very low.
Since the acknowledgement is very small, its transmission delay can be neglected, and
Tp(ack) = Tp(data)
Also, Tq = 0 and Tpro = 0 (queuing and processing delays are ignored), so the total cycle time is Tt + 2*Tp.
Hence,
Efficiency (η) = Useful time / Total cycle time
= Tt / (Tt + 2*Tp)
= 1 / (1 + 2*(Tp/Tt))
= 1 / (1 + 2*a)
where,
a = Tp / Tt
Throughput: Number of bits sent per second, which is also known as Effective Bandwidth or Bandwidth Utilization.
Throughput
= L/(Tt + 2*Tp)
= ((L/BW)*BW)/(Tt + 2*Tp)
= (Tt/(Tt + 2*Tp)) * BW
= (1/(1 + 2a)) * BW
Hence, Throughput = η * BW
where BW = bandwidth and L = packet size.
Since a = Tp/Tt = (d/v)/(L/BW), the efficiency can also be written as
η = 1/(1 + 2*(d/v)*(BW/L))
where d = distance and v = propagation velocity.
Example: Given,
Tt = 1 ms
Tp = 2 ms
Bandwidth = 6 Mbps
so a = Tp/Tt = 2.
Efficiency (η)
= 1/(1 + 2*a)
= 1/(1 + 2*(2/1))
= 1/5
= 20 %
Throughput
= η * BW
= (1/5) * 6
= 1.2 Mbps
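A small Python sketch of the same calculation (the values come from the example above; the variable names are mine):

    Tt = 1e-3        # transmission delay, 1 ms
    Tp = 2e-3        # propagation delay, 2 ms
    BW = 6e6         # bandwidth, 6 Mbps

    a = Tp / Tt                      # = 2
    efficiency = 1 / (1 + 2 * a)     # = 0.2 (20 %)
    throughput = efficiency * BW     # = 1.2e6 bits/s (1.2 Mbps)
    print(efficiency, throughput)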
Note: As we can observe from the above formula for efficiency:
1. On increasing the distance between source and receiver, the efficiency decreases. Hence, Stop and Wait is only suitable for small networks like a LAN. It is not suitable for a MAN or WAN, as the efficiency would be very low.
2. If we increase the size of the data packet, the efficiency increases. Hence, it is not suitable for small packets; big data packets can be sent efficiently by Stop and Wait.
Go Back N
Sliding Window Protocol is actually a theoretical concept in which we have only
talked about what should be the sender window size (1+2a) in order to increase the
efficiency of stop and wait arq. Now we will talk about the practical
implementations in which we take care of what should be the size of receiver
window. Practically it is implemented in two protocols namely :
1. Go Back N (GBN)
2. Selective Repeat (SR)
In this article, we will explain the first protocol, GBN, in terms of its three main characteristic features, and in the last part we will discuss SR as well as a comparison of both protocols.
Sender Window Size (WS) It is N itself. If we say the protocol is GB10, then Ws =
10. N should be always greater than 1 in order to implement pipelining. For N = 1, it
reduces to Stop and Wait protocol.
Now let us see what exactly happens in GBN with the help of an example. Consider the diagram given below. We have a sender window size of 4 (assume we have plenty of sequence numbers, just for the sake of explanation). The sender has sent packets 0, 1, 2 and 3. After acknowledging packets 0 and 1, the receiver is now expecting packet 2, and the sender window has also slid forward to transmit packets 4 and 5. Now suppose packet 2 is lost in the network. The receiver will discard all the packets the sender transmitted after packet 2, as it is expecting sequence number 2. On the sender side, there is a timeout timer for every packet sent, which will expire for packet number 2. Now, from the last transmitted packet 5, the sender will go back to packet number 2 in the current window and retransmit all the packets up to packet number 5. That's why it is called Go Back N: the sender has to go back N places from the last transmitted packet in the unacknowledged window, not from the point where the packet was lost.
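The behaviour described above can be sketched as a toy simulation in Python. This is not a real network stack: packets are just integers, a set of "lost" sequence numbers stands in for the unreliable channel, and a lost packet is assumed to succeed when it is retransmitted.

    def go_back_n(total_packets, window, lost):
        lost = set(lost)                       # sequence numbers lost on first transmission
        base, sent_log = 0, []
        while base < total_packets:
            window_pkts = list(range(base, min(base + window, total_packets)))
            sent_log.append(window_pkts)       # everything currently in the window is (re)sent
            delivered = 0
            for p in window_pkts:
                if p in lost:
                    lost.discard(p)            # assume the retransmission will succeed
                    break                      # receiver discards everything after the gap
                delivered += 1
            base += delivered                  # cumulative ACK slides the window forward
        return sent_log

    print(go_back_n(6, 4, lost={2}))           # [[0, 1, 2, 3], [2, 3, 4, 5]]

With packet 2 lost, the whole window 2..5 is sent again, matching the example above.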
Acknowledgements
There are 2 kinds of acknowledgements, namely cumulative ACKs (one ACK acknowledges all packets up to a sequence number, as used in Go Back N) and individual/independent ACKs (each packet is acknowledged separately, as used in Stop and Wait and Selective Repeat).
The Stop and Wait ARQ offers error and flow control, but may cause big performance issues, as the sender always waits for an acknowledgement even if it has the next packet ready to send. Consider a situation where you have a high-bandwidth connection and the propagation delay is also high (you are connected to some server in some other country through a high-speed connection); you can't use this full speed due to the limitations of Stop and Wait.
The Sliding Window protocol handles this efficiency issue by sending more than one packet at a time with larger sequence numbers. The idea is the same as pipelining in computer architecture.
Few Terminologies :
Transmission Delay (Tt) - Time to transmit the packet from host to the outgoing
link. If B is the Bandwidth of the link and D is the Data Size to transmit
Tt = D/B
Propagation Delay (Tp) - It is the time taken by the first bit transferred by the host
onto the outgoing link to reach the destination. It depends on the distance d and the
wave propagation speed s (depends on the characteristics of the medium).
Tp = d/s
Efficiency - It is defined as the ratio of total useful time to the total cycle time of a
packet. For stop and wait protocol,
Since acknowledgements are very small in size, their transmission delay can be neglected.
Capacity of link - If a channel is full duplex, then bits can be transferred in both directions without any collisions. The maximum number of bits a channel/link can hold at a time is its capacity (per direction, Capacity = Bandwidth * Propagation delay; a full-duplex link holds twice that).
Concept Of Pipelining
In the Stop and Wait protocol, only 1 packet is transmitted onto the link and then the sender waits for an acknowledgement from the receiver. The problem with this setup is that efficiency is very low, as we are not filling the channel with more packets after the 1st packet has been put onto the link. Within the total cycle time of Tt + 2*Tp units, we can calculate the maximum number of packets that the sender can transmit on the link before getting an acknowledgement.
In the picture given below, after sender has transmitted packet 0, it will
immediately transmit packets 1, 2, 3. Acknowledgement for 0 will arrive after 2*1.5
= 3ms. In Stop and Wait, in time 1 + 2*1.5 = 4ms, we were transferring one packet
only. Here we keep a window of packets which we have transmitted but not yet
acknowledged.
After we have received the Ack for packet 0, window slides and the next packet can
be assigned sequence number 0. We reuse the sequence numbers which we have
acknowledged so that header size can be kept minimum as shown in the diagram
given below.
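A tiny numeric sketch of the pipelining gain, using the Tt = 1 ms and Tp = 1.5 ms values mentioned above:

    Tt, Tp = 1.0, 1.5          # milliseconds, as in the example above
    a = Tp / Tt
    cycle = Tt + 2 * Tp        # 4.0 ms total cycle time
    window = 1 + 2 * a         # 4 packets can be in flight before the first ACK returns
    print(cycle, window)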
Selective Repeat Protocol
Why Selective Repeat Protocol? The go-back-n protocol works well if errors are
less, but if the line is poor it wastes a lot of bandwidth on retransmitted frames. An
alternative strategy, the selective repeat protocol, is to allow the receiver to accept
and buffer the frames following a damaged or lost one.
Selective Repeat attempts to retransmit only those packets that are actually lost
(due to errors) :
Retransmission requests :
• Implicit - The receiver acknowledges every good packet; packets that are not ACKed before a time-out are assumed lost or in error. Notice that this approach must be used to be sure that every packet is eventually received.
• Explicit - An explicit NAK (selective reject) can request retransmission of just
one packet. This approach can expedite the retransmission but is not strictly
needed.
• One or both approaches are used in practice.
• In Selective Repeat ARQ, the size of the sender and receiver window must be
at most one-half of 2^m.
Figure - the sender only retransmits frames, for which a NAK is received
Efficiency = N/(1+2a)
Where a = Propagation delay / Transmission delay
Buffers = N + N
Sequence number = N(sender side) + N ( Receiver Side)
1. Stop and Wait - The sender sends one packet and waits for its acknowledgement before sending the next one; a lost packet or ACK is retransmitted after a timeout.
2. Go Back N - The sender sends N packets, which is equal to the window size. Once the entire window is sent, the sender waits for a cumulative ACK to send more packets. The receiver accepts only in-order packets and discards out-of-order packets. In case of a packet loss, the entire window is retransmitted.
3. Selective Repeat - The sender sends packets of window size N and the receiver acknowledges all packets, whether they were received in order or not. In this case, the receiver maintains a buffer to hold out-of-order packets and sorts them. The sender selectively retransmits only the lost packet and moves the window forward.
Differences:

Property                                     | Stop and Wait | Go Back N              | Selective Repeat
Sender window size                           | 1             | N                      | N
Receiver window size                         | 1             | 1                      | N
Minimum sequence numbers                     | 2             | N+1                    | 2N
Efficiency                                   | 1/(1+2*a)     | N/(1+2*a)              | N/(1+2*a)
Type of acknowledgement                      | Individual    | Cumulative             | Individual
Supported order at receiving end             | -             | In-order delivery only | Out-of-order delivery as well
Number of retransmissions on one packet drop | 1             | N                      | 1

where a = Tp / Tt.
Network Layer
Network layer works for the transmission of data from one host to the other located
in different networks. It also takes care of packet routing i.e. selection of the
shortest path to transmit the packet, from the number of routes available. The
sender & receiver's IP address are placed in the header by the network layer.
The functions of the Network layer are:
1. Routing: The network layer protocols determine which route is suitable from
source to destination. This function of network layer is known as routing.
2. Logical Addressing: In order to identify each device on internetwork
uniquely, network layer defines an addressing scheme. The sender &
receiver’s IP address are placed in the header by network layer. Such an
address distinguishes each device uniquely and universally.
Before understanding the working of the Network layer, let's get familiar with a few devices that have a great role to play in this system:
1. Switch - A switch is a multiport bridge with a buffer and a design that can boost its efficiency (a large number of ports implies less traffic) and performance. The switch is a data link layer device. A switch can perform error checking before forwarding data, which makes it very efficient, as it does not forward packets that have errors and selectively forwards good packets to the correct port only. In other words, a switch divides the collision domain of hosts, but the broadcast domain remains the same.
2. Routers - A router is a device, like a switch, that routes data packets based on their IP addresses. The router is mainly a Network Layer device. Routers normally connect LANs and WANs together and have a dynamically updating routing table based on which they make decisions on routing the data packets. A router divides the broadcast domains of the hosts connected through it.
Types of Hub
o Active Hub:- These are the hubs which have their own power supply
and can clean, boost and relay the signal along with the network. It
serves both as a repeater as well as wiring centre. These are used to
extend the maximum distance between nodes.
o Passive Hub:- These are the hubs which collect wiring from nodes
and power supply from active hub. These hubs relay signals onto the
network without cleaning and boosting them and can't be used to
extend the distance between nodes.
6. Bridge - A bridge operates at the data link layer. A bridge is a repeater with the added functionality of filtering content by reading the MAC addresses of source and destination. It is also used for interconnecting two LANs working on the same protocol. It has a single input and single output port, thus making it a 2-port device.
Types of Bridges
o Transparent Bridges:- These are bridges in which the stations are completely unaware of the bridge's existence, i.e. whether or not a bridge is added or deleted from the network, reconfiguration of the stations is unnecessary. These bridges make use of two processes, i.e. bridge forwarding and bridge learning.
2) It helps in the delivery of packets from the source host to the destination host.
3) The network layer is basically used when we want to send data over a different network.
4) Logical addressing is used here, i.e. when data is to be sent within the same network we need only a physical address, but if we wish to send data outside the network we need a logical address.
5) It helps in routing, i.e. routers and switches are connected at this layer to route the packets to their final destination.
All address information is only transferred during setup phase. Once the
route to destination is discovered, entry is added to switching table of each
intermediate node. During data transfer, packet header (local header) may
contain information such as length, timestamp, sequence number etc.
Connection-oriented switching is very useful in switched WAN. Some
popular protocols which use Virtual Circuit Switching approach are X.25,
Frame-Relay, ATM and MPLS(Multi-Protocol Label Switching).
A---R1---R2---B
To send a packet from A to B there are delays since this is a Store and
Forward network.
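As a rough sketch (with made-up link parameters, since the figure gives none), the end-to-end delay over the store-and-forward path A---R1---R2---B can be estimated in Python by adding one transmission delay and one propagation delay per hop and ignoring queuing/processing delay:

    packet_bits = 1000                 # illustrative packet size
    hops = [                           # (bandwidth bps, length m, propagation speed m/s) per link
        (1e6, 2_000, 2e8),             # A  -> R1
        (1e6, 2_000, 2e8),             # R1 -> R2
        (1e6, 2_000, 2e8),             # R2 -> B
    ]
    total = 0.0
    for bw, dist, speed in hops:
        total += packet_bits / bw      # whole packet must arrive before it is forwarded
        total += dist / speed          # propagation on this link
    print(round(total * 1e3, 3), "ms") # 3.03 ms for these made-up numbers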
Class A - The Network ID is of 8 bits, leaving the host part with 24 bits. The 1st bit of the network part is always set to 0.
Subnet Mask - 255.0.0.0.
Size of Network - (2^24 - 2) = 16,777,214 Host IDs (2 is subtracted because x.0.0.0 is reserved for the Network ID and x.255.255.255 is used for limited broadcasting)
No. of unique networks - (2^7 - 2) = 126 networks (2 is subtracted because 0.0.0.0 and 127.x.y.z are reserved for special purposes)
Class B - The Network ID is of 16 bits, leaving the host part with 16 bits. The higher-order bits of the network part are always set to 10.
Subnet Mask - 255.255.0.0.
Size of Network - (2^16 - 2) = 65,534 Host IDs (2 is subtracted because x.y.0.0 is reserved for the Network ID and x.y.255.255 is used for limited broadcasting)
No. of unique networks - 2^14 = 16,384 networks
Class C - The Network ID is of 24 bits, leaving the host part with 8 bits. The higher-order bits of the network part are always set to 110.
Subnet Mask - 255.255.255.0.
Size of Network - (2^8 - 2) = 254 Host IDs (2 is subtracted because x.y.z.0 is reserved for the Network ID and x.y.z.255 is used for limited broadcasting)
No. of unique networks - 2^21 = 2,097,152 networks
Class D Class-D addresses are reserved for multi-casting. The higher-order bits are
set to 1110. Remaining bits are reserved for interested hosts. Class-D networks
don't have any subnet mask, as there is no concept of a subnet for this class of IPs.
Class E IP addresses belonging to class E are reserved for experimental, research &
military applications. IP ranges from 240.0.0.0 – 255.255.255.254. This class too
doesn’t have any subnet mask. The higher order bits of first octet of class E are
always set to 1111.
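A minimal sketch that classifies an IPv4 address by its first octet, following the class ranges described above (the function name and the sample addresses are illustrative only):

    def ip_class(ip):
        first = int(ip.split(".")[0])
        if 0 < first < 127:                   # 0 is reserved, 127 is loopback
            return "Class A"
        if 128 <= first <= 191:
            return "Class B"
        if 192 <= first <= 223:
            return "Class C"
        if 224 <= first <= 239:
            return "Class D (multicast)"
        if 240 <= first <= 255:
            return "Class E (experimental)"
        return "Reserved / special"

    print(ip_class("10.5.6.7"))      # Class A
    print(ip_class("192.168.0.1"))   # Class C
    print(ip_class("224.0.0.5"))     # Class D (multicast)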
Flow-control Protocols
Flow Control is required in Computer Networks because, most of the time the
sender has no idea about the capacity of buffer at the receiving end, and thus may
transmit packets exceeding the current capacity causing them to get dropped at the
receiver end. Thus, the flow control mechanism is required for re-transmission in
case packets get lost. Some of the popular schemes are as follows:
Stop & Wait (ARQ) In this scheme, the sender waits for ACK (acknowledgment)
from the receiver before transmitting the next packet. If it doesn't receive ACK for a
certain packet within a pre-defined timeout (ARQ variant ~ Automatic Repeat
Request), it re-transmits said packet (assuming it got dropped).
In this scheme, packets are sent one-by-one (inefficient).
Go-back-N - In this scheme, the sender sends all the packets equal to the receiver window size (say n) at once. The receiver then sends ACK n+1 (requesting the next, (n+1)th, packet). GBN uses cumulative acknowledgement. If any of the transmitted packets gets lost, all the subsequent packets are dropped at the receiver end. Instead, a NACK (negative acknowledgement indicating the lost packet number) is transmitted. Thereafter, all packets starting from the lost packet are retransmitted.
As can be seen, if packet #1 gets lost, the whole window will be retransmitted by going back n places, hence the name. It is also not very efficient, as we are unnecessarily repeating the transmission of the whole window. We can do better, as in Selective Repeat.
Selective Repeat In this scheme, when a packet gets lost, the receiver sends a
NACK, however, unlike GBN, it still receives subsequent packets (GBN drops them
as shown in the diagram above). Upon the reception of NACK, only that particular
packet is re-transmitted.
Error Detection
Due to noise in network and signal interference, bit values may get changed during
transmission leading to so called errors. They need to be detected at the Data-Link
layer, and upon detection re-transmission is requested or correction is done (as in
Hamming Code). Some of the common error-detection schemes are given below:
Parity Check
Parity check works by counting the number of 1s in the bit representation and then appending a 1 if there is an odd number of ones, or a 0 if there is an even number of 1s. Thus, the total number of 1s becomes even; hence this scheme is also called an even-parity check. If any single bit changes due to an error, the total number of 1s becomes odd and the error is detected.
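A minimal even-parity sketch in Python, following the description above (the function names are mine):

    def add_even_parity(bits: str) -> str:
        ones = bits.count("1")
        return bits + ("1" if ones % 2 else "0")   # total number of 1s becomes even

    def check_even_parity(codeword: str) -> bool:
        return codeword.count("1") % 2 == 0

    word = add_even_parity("1011001")      # 4 ones -> parity bit 0
    print(word, check_even_parity(word))   # '10110010' True
    print(check_even_parity("10110011"))   # single-bit error -> False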
2D Parity Check
Parity check bits are calculated for each row, which is equivalent to a simple parity
check bit. Parity check bits are also calculated for all columns, then both are sent
along with the data. At the receiving end these are compared with the parity bits
calculated on the received data.
Checksum
The procedure for using a checksum is as follows: the data is divided into segments of a fixed number of bits; the segments are added using 1's complement arithmetic; the sum is complemented to obtain the checksum, which is sent along with the data. At the receiver, all received segments plus the checksum are added; if the complement of the result is zero, the data is accepted, otherwise it is rejected.
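Below is a hedged sketch of the classic 16-bit one's-complement (Internet-style) checksum in Python. It follows the generic procedure above; real protocols such as IP/TCP/UDP apply it over specific headers and pseudo-headers, which is not modelled here.

    def checksum16(data: bytes) -> int:
        if len(data) % 2:                              # pad to a whole number of 16-bit words
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
        return ~total & 0xFFFF                         # one's complement of the sum

    segment = b"data"                                  # even-length example payload
    cs = checksum16(segment)
    # Receiver side: summing the data plus the checksum and complementing gives 0.
    print(hex(cs), checksum16(segment + cs.to_bytes(2, "big")) == 0)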
The private IP address ranges reserved for use inside local networks (and translated by NAT, below) are:
FOR CLASS A:
ADDRESS RANGE: 10.0.0.0 - 10.255.255.255
FOR CLASS B:
ADDRESS RANGE: 172.16.0.0 - 172.31.255.255
FOR CLASS C:
ADDRESS RANGE: 192.168.0.0 - 192.168.255.255
Network Address Translation (NAT): To access the Internet, one public IP address
is needed, but we can use a private IP address in our private network. The idea of
NAT is to allow multiple devices to access the Internet through a single public
address. To achieve this, the translation of private IP address to a public IP address
is required. Network Address Translation (NAT) is a process in which one or more
local IP address is translated into one or more Global IP address and vice versa in
order to provide Internet access to the local hosts. Also, it does the translation of
port numbers i.e. masks the port number of the host with another port number, in
the packet that will be routed to the destination. It then makes the corresponding
entries of IP address and port number in the NAT table. NAT generally operates on
router or firewall.
If NAT runs out of addresses, i.e. no address is left in the configured pool, then the packets will be dropped and an Internet Control Message Protocol (ICMP) host unreachable packet is sent to the destination.
NAT inside and outside addresses - Inside refers to the addresses which must be
translated. Outside refers to the addresses which are not in control of an
organisation. These are the network Addresses in which the translation of the
addresses will be done.
Network Address Translation (NAT) Types - There are 3 ways to configure NAT:
1. Static NAT - In this, a single unregistered (private) IP address is mapped to a legally registered (public) IP address, i.e. a one-to-one mapping between local and global addresses. This is generally used for web hosting. It is not used inside organisations, as there are many devices that need Internet access, and each would require its own public IP address. For example, if 3000 devices need access to the Internet, the organisation has to buy 3000 public addresses, which will be very costly.
Advantages of NAT -
Subnetting
When a bigger network is divided into smaller networks in order to maintain security and ease of management, that is known as Subnetting; maintenance is easier for smaller networks.
In order to find the Network ID (NID) of a subnet, one must be fully acquainted with the subnet mask. The subnet mask is used to find which IP address belongs to which subnet. It is a 32-bit number containing 0s and 1s, in which the network ID and subnet ID parts are represented by all 1s and the host ID part is represented by all 0s.
Example: Let the Network ID of the entire network be 193.1.2.0 (it is a Class C IP; for more about Class C IPs see Classful Addressing).
Example-1:
If IP address = 193.1.2.129 (converted into binary form)
= 11000001.00000001.00000010.10000001
Subnet Mask = 11111111.11111111.11111111.11000000
A bitwise AND of the two gives 193.1.2.128, the Network ID of the subnet this address belongs to.
Dividing Networks Now, let's talk about dividing a network into two parts:
so to divide a network into two parts, you need to choose one bit for each Subnet
from the host ID part.
Note: It is a class C IP so, there are 24 bits in the network id part and 8 bits in the
host id part.
• For Subnet-1: The first bit which is chosen from the host id part is zero and
the range will be from (193.1.2.00000000 till you get all 1's in the host ID
part i.e, 193.1.2.01111111) except for the first bit which is chosen zero for
subnet id part.
193.1.2.0 to 193.1.2.127
• For Subnet-2: The first bit chosen from the host ID part is one, and the range will be from 193.1.2.10000000 till you get all 1s in the host ID part, i.e. 193.1.2.11111111 (except for the first bit, which is chosen as one for the subnet ID part).
193.1.2.128 to 193.1.2.255
Note:
1. To divide a network into four (2^2) parts you need to choose two bits from the host ID part for each subnet, i.e. (00, 01, 10, 11).
2. To divide a network into eight (2^3) parts you need to choose three bits from the host ID part for each subnet, i.e. (000, 001, 010, 011, 100, 101, 110, 111), and so on.
In the above diagram the entire network is divided into four parts, which means there are four subnets, each having two bits for the subnet ID part.
The above IP is Class C, so it has 24 bits in the network ID part and 8 bits in the host ID part, but two bits are chosen for the subnet ID from the host ID part, so now there are two bits in the subnet ID part and six bits in the host ID part.
Therefore, the subnet mask = 11111111.11111111.11111111.11000000 = 255.255.255.192.
If any given IP address performs bitwise AND operation with the subnet mask, then
you get the network id of the subnet to which the given IP belongs.
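The bitwise-AND rule can be sketched in a few lines of Python (the helper names are made up; the input is a dotted-decimal string):

    def to_int(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def to_str(n):
        return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

    ip   = to_int("193.1.2.129")
    mask = to_int("255.255.255.192")   # two subnet bits borrowed from the host part
    print(to_str(ip & mask))           # 193.1.2.128 -> the subnet this IP belongs to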
1. In case of the single network, only three steps are required in order to reach a
Process i.e Source Host to Destination Network, Destination Network to
Destination Host and then Destination Host to Process.
But in the case of Subnetting four steps are required for Inter-Network
Communication. i.e Source Host to Destination Network, Destination
Network to proper Subnet, then Subnet to Host and finally Host to Process.
Hence, it increases Time complexity. In the case of Subnet, more time is
required for communication or data transfer.
2. In the case of Single Network only two IP addresses are wasted to represent
Network Id and Broadcast address but in case of Subnetting two IP
addresses are wasted for each Subnet.
Example: If a Network has four Subnets, it means 8 IP addresses are going
to waste.
3. Network Id for S1: 200.1.2.0
Hence, we can say that Network size will also decrease. We can't use our
Network completely.
4. The cost of the overall network also increases. Subnetting requires internal routers, switches, hubs, bridges, etc., which are very costly.
5. Subnetting and network management require an experienced network administrator. This adds to the overall cost as well.
Introduction to Variable Length Subnet Mask (VLSM): In this subnetting design, more than one mask is used in the same network, i.e. more than one mask is used for different subnets of a single Class A, B or C network. It is used to increase the usability of subnets, as they can be of variable size. It is also defined as the process of subnetting a subnet.
Classless Addressing (CIDR), Subnetting & Supernetting
As we have already learned about Classful Addressing, in this article we are going to learn about Classless Inter-Domain Routing (CIDR), which is also known as classless addressing. In classful addressing, the number of hosts within a network always remains the same, depending upon the class of the network.
The CIDR scheme doesn't fix the network and host parts into categories. It instead allows us to subnet the whole address space to get a network from the whole address space according to our requirement (no wastage).
Now, suppose an organization requires 2^14 (16,384) hosts; then it would have to purchase a Class B network, and 65,536 - 16,384 = 49,152 host addresses would be wasted. This is the major drawback of classful addressing.
In order to reduce the wastage of IP addresses, the concept of Classless Inter-Domain Routing was introduced. Nowadays IANA uses this technique to provide IP addresses: whenever a user asks for IP addresses, IANA assigns only that many IP addresses to the user.
a . b . c . d / n  (where n is the number of bits in the network part of the address)
20.10.50.100/20
All three rules are followed by this Block. Hence, it is a valid IP address block.
Subnetting & Subnet Mask Subnetting is the process of dividing the whole
address-space into a block of contiguous IP addresses. The no. of host devices that
can be accommodated and the Network ID can be found out using Subnet Mask. A
subnet mask is a 32-bit binary value which upon taking bitwise-AND with the IP-
address gives us the Network ID bits. We give the IP address and define the
number of bits for mask along with it (usually followed by a ‘/’ symbol), like,
192.168.1.1/28. Here, the subnet mask is found by putting the given number of bits
out of 32 as 1, like, in the given address, we need to put 28 out of 32 bits as 1 and
the rest as 0, and so, the subnet mask would be 255.255.255.240.
As another example:
Given IP Address – 172.16.21.23/25,
IP: 10101100.00010000.00010101.00010111
Subnet Mask: 11111111.11111111.11111111.10000000 (binary) ~
255.255.255.128
Taking AND, we get Network ID: 172.16.21.0
No. of usable hosts: (2^(32-25) - 2) = (2^7 - 2) = 126 (excluding the Network ID and broadcast address).
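The same example can be cross-checked with Python's standard ipaddress module; strict=False lets us pass a host address rather than a network address:

    import ipaddress

    net = ipaddress.ip_network("172.16.21.23/25", strict=False)
    print(net.network_address)     # 172.16.21.0
    print(net.netmask)             # 255.255.255.128
    print(net.num_addresses - 2)   # 126 usable hosts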
N1: 200.1.0.0/24
N2: 200.1.1.0/24
N3: 200.1.2.0/24
N4: 200.1.3.0/24
We see that all the addresses are contiguous. N1 ranges from 200.1.0.0 to
200.1.0.255. Adding 0.0.0.1 to the last address yields 200.1.1.0, which is the start
address of N2. Similarly, for all the subsequent networks N2, N3 & N4.
The sizes of all the networks are also the same, 2^8 (a power of 2 as well). The 1st IP address is also divisible by the total size. Here, 200.1.0.0 is the 1st IP address and the total size of the supernet will be 4 * 2^8 = 2^10, implying the last 10 bits should be 0.
200.1.0.0 = 11001000.00000001.00000000.00000000.
Hence, we can group them together into a supernet as: 200.1.0.0/22.
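This aggregation can also be verified with Python's ipaddress module, which collapses the four contiguous /24 networks into the single /22 supernet:

    import ipaddress

    subnets = [ipaddress.ip_network(f"200.1.{i}.0/24") for i in range(4)]
    print(list(ipaddress.collapse_addresses(subnets)))   # [IPv4Network('200.1.0.0/22')]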
ARP (Address Resolution Protocol) & Reverse-ARP
In a computer network, we have 2 addresses associated with a device (physical & logical). The physical address is permanent and fixed for a device (although it can be changed, it shouldn't be ~ MAC spoofing) and doesn't change if the device changes network. The logical address (IP) is transient and changes once the device leaves the current network and joins another. To finally transmit data from one device to another, however, the physical/MAC address is required (at the Data Link layer). But all the Network layer knows is the logical address/IP of the next-hop device. ARP (Address Resolution Protocol) is the de-facto method of acquiring the physical address of the next hop from its logical address. Similarly, Reverse-ARP is the process of getting the IP address from the device's physical address.
ARP - To get the MAC address of the target machine, the sender broadcasts a special ARP message over its immediate neighbors, requesting the MAC address. The contents of this message are:
• Sender IP address
• Sender MAC address
• Destination MAC address (filled with all 0s initially)
• Destination IP address
Upon reception of this ARP message, the device associated with the destination IP fills its MAC address into the destination MAC field (filled with 0s) and unicasts the reply to the sender (the sender MAC is provided for this purpose). All other machines simply ignore the request. Finally, the sender receives the reply and gets to know the destination MAC address.
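A toy, in-memory illustration of this exchange (all addresses are invented; no real frames are sent):

    arp_request = {
        "sender_ip":  "192.168.1.10",
        "sender_mac": "AA:BB:CC:DD:EE:01",
        "target_ip":  "192.168.1.20",
        "target_mac": "00:00:00:00:00:00",   # unknown, filled with zeros
    }
    # every host on the LAN "hears" the broadcast; only the owner of target_ip replies
    hosts = {"192.168.1.20": "AA:BB:CC:DD:EE:02", "192.168.1.30": "AA:BB:CC:DD:EE:03"}
    for ip, mac in hosts.items():
        if ip == arp_request["target_ip"]:
            arp_reply = dict(arp_request, target_mac=mac)   # reply is unicast back to the sender
            print(arp_reply["target_ip"], "is at", arp_reply["target_mac"])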
• Process to process delivery - While Data Link Layer requires the MAC
address (48 bits address contained inside the Network Interface Card of
every host machine) of source-destination hosts to correctly deliver a frame
and Network layer requires the IP address for appropriate routing of packets
, in a similar way Transport Layer requires a Port number to correctly deliver
the segments of data to the correct process amongst the multiple processes
running on a particular host. A port number is a 16 bit address used to
identify any client-server program uniquely.
• Data integrity and Error correction - Transport layer checks for errors in the
messages coming from application layer by using error detection codes,
computing checksums, it checks whether the received data is not corrupted
and uses the ACK and NACK services to inform the sender if the data has
arrived or not and checks for the integrity of data.
• Flow control - The transport layer provides a flow control mechanism
between the adjacent layers of the TCP/IP model. TCP also prevents data
loss due to a fast sender and slow receiver by imposing some flow control
techniques. It uses the method of sliding window protocol which is
accomplished by the receiver by sending a window back to the sender
informing the size of data it can receive.
1. SYN: In the first step, the sender sends a segment with the SYN flag (containing a Synchronize Sequence Number), expressing its wish to establish a connection. The sequence number determines which segment it wants to start the communication with.
UDP, on the other hand, is a connectionless protocol which doesn't care about reliability. Thus, if some packets get lost, they are simply skipped. It is therefore used in applications where losing some data doesn't matter, e.g. video streaming/calls, VoIP (Voice-over-IP) and multiplayer games. UDP's main focus is transmission speed (thus retransmissions are not done in this protocol). More differences between TCP and UDP are given below:
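A minimal sketch of UDP's connectionless send/receive using Python's socket module; both endpoints live in the same process on localhost, and port 9999 is an arbitrary choice:

    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))   # no handshake, no ACK

    data, addr = receiver.recvfrom(1024)
    print(data, addr)
    sender.close(); receiver.close()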
• No routing overhead for router CPU which means a cheaper router can be
used to do routing.
• It adds security, because only the administrator can allow routing to particular networks.
• No bandwidth usage between routers.
Disadvantage -
Configuration -
R1 having IP address 172.16.10.6/30 on s0/0/1, 192.168.10.1/24 on fa0/0.
R2 having IP address 172.16.10.2/30 on s0/0/0, 192.168.20.1/24 on fa0/0.
R3 having IP address 172.16.10.5/30 on s0/1, 172.16.10.1/30 on s0/0,
10.10.10.1/24 on fa0/0.
Here, we have provided the route for the 192.168.10.0 network, where 192.168.10.0 is its network ID and 172.16.10.2 and 172.16.10.6 are the next-hop addresses.
Now, configuring for R2:
2. Default Routing - This is the method where the router is configured to send all
packets towards a single router (next hop). It doesn't matter to which network the
packet belongs, it is forwarded out to router which is configured for default routing.
It is generally used with stub routers. A stub router is a router which has only one
route to reach all other networks.
Configuration - Using the same topology which we have used for the static routing
before.
In this topology, R1 and R2 are stub routers so we can configure default routing for
both these routers.
Configuring default routing for R1:
R1(config)#ip route 0.0.0.0 0.0.0.0 172.16.10.5
1. The routers should have the same dynamic protocol running in order to
exchange routes.
2. When a router finds a change in the topology then router advertises it to all
other routers.
Advantages -
• Easy to configure.
• More effective at selecting the best route to a destination remote network
and also for discovering remote network.
Disadvantage -
Features -
• Updates of network are exchanged periodically.
• Updates (routing information) is always broadcast.
• Full routing tables are sent in updates.
• Routers always trust the routing information received from neighbor routers. This is also known as routing on rumors.
Disadvantages -
2. Link State Routing Protocols - These protocols know more about the internetwork than any distance vector routing protocol. They are also known as SPF (Shortest Path First) protocols. OSPF is an example of a link-state routing protocol.
Features -
• Hello messages, also known as keep-alive messages are used for neighbor
discovery and recovery.
• The concept of triggered updates is used, i.e. updates are triggered only when there is a topology change.
• Only the updates requested by the neighbor router are exchanged.
1. Neighbor table- the table which contains information about the neighbors of
the router only, i.e, to which adjacency has been formed.
2. Topology table- This table contains information about the whole topology
i.e contains both best and backup routes to particular advertised network.
3. Routing table- This table contains all the best routes to the advertised
network.
Advantages -
• As it maintains separate tables for both best route and the backup routes (
whole topology) therefore it has more knowledge of the internetwork than
any other distance vector routing protocol.
• Concept of triggered updates are used therefore no more unnecessary
bandwidth consumption is seen like in distance vector routing protocol.
• Partial updates are triggered when there is a topology change, not a full
update like distance vector routing protocol where the whole routing table is
exchanged.
1. Unicast -
This type of information transfer is useful when there is a participation of a single
sender and a single recipient. So, in short, you can term it as a one-to-one
transmission. For example, a device having IP address 10.1.2.0 in a network wants
to send the traffic stream(data packets) to the device with IP address 20.12.4.2 in
the other network, then unicast comes into the picture. This is the most common
form of data transfer over the networks.
2. Broadcast -
Broadcasting transfer (one-to-all) techniques can be classified into two types :
This mode is mainly utilized by television networks for video and audio distribution.
One important protocol of this class in Computer Networks is Address Resolution
Protocol (ARP) that is used for resolving IP address into physical address which is
necessary for underlying communication.
3. Multicast -
In multicasting, one or more senders and one or more recipients participate in the data transfer. This method's traffic lies between the boundaries of unicast (one-to-one) and broadcast (one-to-all). Multicast lets servers direct single copies of data streams that are then replicated and routed to the hosts that request them. IP multicast requires the support of other protocols such as IGMP (Internet Group Management Protocol) and multicast routing for its working. Also, in classful IP addressing, Class D is reserved for multicast groups.
4. Anycast -
Anycast is communication between a single sender and the nearest of several
receivers in a group. It is a traffic routing algorithm used for the speedy delivery of
website content that advertises individual IP addresses on multiple nodes. User
requests are directed to specific nodes based on such factors as the capacity and
health of your server, as well as the distance between it and the website visitor.
Anycast packet forwarding is a mechanism where multiple hosts can have the same
logical address. When a packet destined to this logical address is received, it is sent
to the host which is nearest in the routing topology.
Since we know that there is a cost for every link used by routing algorithms, let's understand how one could decide these link costs.
1. Bandwidth: This is the most used criterion, as efficient use of bandwidth is the primary goal of any routing algorithm.
4. Path cost: Various paths result in various costs, so one needs to choose wisely.
5. Load: Due to heavy congestion, the cost of a loaded link increases.
6. Maximum Transmission Unit: The packet size that can be transferred over a link also affects the cost of the link.
4. It should handle changes well, i.e. there should be fast convergence when any of the following situations arises:
Bellman-Ford Basics - Each router maintains a Distance Vector table containing the distance between itself and ALL possible destination nodes. Distances, based on a chosen metric, are computed using information from the neighbors' distance vectors.
• Distance to itself = 0
• Distance to ALL other routers = infinity.
Note -
• From time to time, each node sends its own distance-vector estimate to its neighbors.
• When a node x receives a new DV estimate from any neighbor v, it saves v's distance vector and updates its own DV using the Bellman-Ford equation:
Dx(y) = min over all neighbors v of { c(x,v) + Dv(y) }, for every destination y.
Example - Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing table, and every routing table contains the distance to the destination nodes. Consider router X: X will share its routing table with its neighbors, the neighbors will share their routing tables with X, and the distance from node X to each destination will be calculated using the Bellman-Ford equation.
As we can see, the distance from X to Z is smaller when Y is the intermediate node (hop), so the routing table of X will be updated accordingly.
Similarly for Z also -
Finally the routing table for all -
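The iteration described above can be sketched in Python. The link costs below are purely illustrative (the original figure is not reproduced in this text); each node repeatedly applies the Bellman-Ford equation Dx(y) = min over neighbors v of (c(x,v) + Dv(y)) until nothing changes.

    import itertools

    cost = {("X", "Y"): 2, ("Y", "X"): 2, ("Y", "Z"): 1,
            ("Z", "Y"): 1, ("X", "Z"): 7, ("Z", "X"): 7}   # made-up link costs
    nodes = ["X", "Y", "Z"]
    INF = float("inf")
    # initial distance vectors: 0 to self, direct link cost to neighbors, infinity otherwise
    D = {x: {y: (0 if x == y else cost.get((x, y), INF)) for y in nodes} for x in nodes}

    changed = True
    while changed:                       # iterate until no distance vector changes
        changed = False
        for x, y in itertools.product(nodes, nodes):
            best = min(cost.get((x, v), INF) + D[v][y] for v in nodes if v != x)
            if x != y and best < D[x][y]:
                D[x][y] = best
                changed = True

    print(D["X"])   # {'X': 0, 'Y': 2, 'Z': 3}  -- X reaches Z via Y, cheaper than the direct link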
Advantages of Distance Vector routing -
7. Poison Reverse and Split Horizon are used to deal with the count-to-infinity problem.
So in this example, the Bellman-Ford algorithm will converge for each router,
they will have entries for each other. B will know that it can get to C at a cost
of 1, and A will know that it can get to C via B at a cost of 2.
If the link between B and C is disconnected, then B will know that it can no
longer get to C via that link and will remove it from its table. Before it can
send any updates it's possible that it will receive an update from A which
will be advertising that it can get to C at a cost of 2. B can get to A at a cost
of 1, so it will update a route to C via A at a cost of 3. A will then receive updates from B later and update its cost to 4. They will then go on feeding each other bad information toward infinity, which is called the Count-to-Infinity problem.
Let's look at this example and figure out what happens in each step:
Split horizon: If the link between B and C goes down, and B had received a
route from A, B could end up using that route via A. A would send the packet
right back to B, creating a loop. But according to Split horizon Rule, Node A
does not advertise its route for C (namely A to B to C) back to B. On the
surface, this seems redundant since B will never route via node A because
the route costs more than the direct route from B to C.
Consider the following network topology showing Split horizon-
o In addition to these, we can also use split horizon with route poisoning, where both techniques above are used together to achieve efficiency and keep the size of routing announcements small.
11. Reliable Flooding - Initially, each router knows the hop cost to its directly connected neighbors. This information should be relayed to all the routers (for building the graph). All the routers thus need to share this neighbor information with all the other routers. This is done by sending a short message called a link-state advertisement (LSA). An LSA consists of a
▪ Sequence No.
▪ Router ID
▪ Neighbor hop-cost
12. Route Calculation - After flooding is completed, all the routers have the required information to build the network graph. Each router then independently runs Dijkstra's algorithm to get the shortest path and the corresponding next hop for each destination router.
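A minimal heap-based Dijkstra sketch over an illustrative adjacency map, i.e. the computation each link-state router runs on the graph built from the received LSAs (the graph values are made up):

    import heapq

    def dijkstra(graph, source):
        dist = {node: float("inf") for node in graph}
        dist[source] = 0
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                      # stale heap entry, skip it
            for v, w in graph[u].items():
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 5},
             "C": {"A": 4, "B": 2, "D": 1}, "D": {"B": 5, "C": 1}}
    print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}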
DNS Lookup
DNS calls are an extra overhead which serves us no good in loading the actual
website. Thus, it would be very beneficial if we can cache DNS IP values for
frequently visited websites in the user-system itself. Thus, comes the concept of
DNS caching. Before making a call to the actual DNS server, the browser looks up
the DNS cache of the system. The DNS cache looks as -
We can display the DNS cache information in Windows CMD as -
ipconfig /displaydns
If no entry in Cache is found, then we call the DNS server. DNS server IP is either
provided by the ISP, or there are public DNS servers provided by Google
(8.8.8.8/8.8.4.4) or OpenDNS (208.67.222.222/208.67.220.220). These settings
can be set/adjusted in Network Settings of the system. The system performs a DNS
call with the address provided. The type of packets used is generally UDP because
we require a lot of DNS calls and UDP packets are smaller in size (max. 512 bytes)
as compared to TCP. Also, DNS requests are done on a separate port no. ~ 53.
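For illustration, the OS resolver (which consults the local cache and the configured DNS server on our behalf) can be invoked from Python; the hostname is just an example and the returned addresses will vary:

    import socket

    print(socket.gethostbyname("www.youtube.com"))      # a single resolved IPv4 address
    print(socket.gethostbyname_ex("www.youtube.com"))   # (canonical name, aliases, list of addresses)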
DNS Resolution
We shall understand this with an example. Say we search www.youtube.com. DNS resolution occurs from the end to the start of the address, i.e. for our query: com -> youtube -> www. The DNS resolver first requests the root DNS server with com as the search query.
What is the root DNS server? The root DNS server is the topmost-level DNS server, which contains the addresses of the TLD (top-level domain) name servers such as com, org, gov. We have queried .com, so it returns the address of the .com TLD server.
We thereafter query the .com TLD server with the request for youtube. This name server looks up its database and returns the list of name servers matching youtube. YouTube is owned by Google, so we are returned the name-server list ns1.google.com - ns4.google.com.
We then query one of these Google name servers for youtube, and we get back the
IP address for the website which is geographically closest to our location.
(Nowadays, the same website is hosted in a distributed fashion over multiple
regions). Diagrammatically -
NOTE - All the IP addresses shown in the image are just for illustration. They are by no means accurate to the example discussed.
Above diagram shows the hierarchy of DNS servers and the various levels of
resolution. After the browser receives the correct IP address for the requested
website, it establishes a TCP connection which we describe below -
2. Single Bundle File with API end-points - Modern systems have a single JS
bundle file which contains the basic HTML structure to load in-case each
URL is requested. Whenever we request a website, the whole bundle is
downloaded into the user system. Accordingly, when we browse to different
web-pages, API calls are made and the response data is injected into the
HTML template (served by the bundle). Understanding this requires good
knowledge of modern front-end frameworks (such as React, Angular, etc.)
and REST-API backend frameworks (Nodejs, Django, etc.). Hence, it is out of
the scope of this article to explain the overall mechanism in detail.
DHCP is based on a client-server model and based on discovery, offer, request, and
ACK.
DHCP port number for server is 67 and for the client is 68. It is a Client-server
protocol which uses UDP services. IP address is assigned from a pool of addresses.
In DHCP, the client and the server exchange mainly 4 DHCP messages in order to
make a connection, also called DORA process, but there are 8 DHCP messages in
the process.
2. DHCP offer message - The server responds to the host with this message, specifying an unleased IP address and other TCP/IP configuration information. This message is broadcast by the server. The size of the message is 342 bytes. If more than one DHCP server is present in the network, the client host will accept the first DHCP OFFER message it receives. Also, a server ID is specified in the packet in order to identify the server.
Also the server has provided the offered IP address 192.16.32.51 and lease
time of 72 hours(after this time the entry of host will be erased from the
server automatically) . Also the client identifier is PC MAC address
(08002B2EAF2A) for all the messages.
Note - This message is broadcast after the ARP request is broadcast by the PC to find out whether any other host is already using the offered IP. If there is no reply, the client host broadcasts the DHCP request message to the server, indicating acceptance of the IP address and the other TCP/IP configuration.
4. DHCP acknowledgement message - In response to the request message
received, the server will make an entry with specified client ID and bind the
IP address offered with lease time. Now, the client will have the IP address
provided by server.
Now the server will make an entry of the client host with the offered IP
address and lease time. This IP address will not be provided by server to any
other host. The destination MAC address is FFFFFFFFFFFF and the
destination IP address is 255.255.255.255 and the source IP address is
172.16.32.12 and the source MAC address is 00AA00123456 (server MAC
address).
8. DHCP inform - If a client has obtained an IP address manually, it uses a DHCP inform message to obtain other local configuration parameters, such as the domain name. In reply to the DHCP inform message, the DHCP server generates a DHCP ack message with the local configuration suitable for the client, without allocating a new IP address. This DHCP ack message is unicast to the client.
Note - All the messages can also be unicast by a DHCP relay agent if the server is present in a different network.
The DHCP protocol gives the network administrator a method to configure the
network from a centralised area.
With the help of DHCP, easy handling of new users and reuse of IP address can be
achieved.