
Computer networks

Unit-1
Data communication Components: Representation of data, Data Flow, Network Topologies,
Protocols, OSI Reference Model, TCP/IP Reference Model. Physical Layer: Transmission
Media – Guided and Unguided Transmission Media.

DATA COMMUNICATIONS
When we communicate, we are sharing information. This sharing can be local or remote. Between individuals, local communication usually occurs face to
face, while remote communication takes place over distance. The term telecommunication,
which includes telephony, telegraphy, and television, means communication at a distance (tele is
Greek for "far"). The word data refers to information presented in whatever form is agreed upon
by the parties creating and using the data. Data communications are the exchange of data
between two devices via some form of transmission medium such as a wire cable. For data
communications to occur, the communicating devices must be part of a communication system
made up of a combination of hardware (physical equipment) and software (programs). The
effectiveness of a data communications system depends on four fundamental characteristics:
delivery, accuracy, timeliness, and jitter.

1) Delivery. The system must deliver data to the correct destination. Data must be received by
the intended device or user and only by that device or user.
2) Accuracy. The system must deliver the data accurately. Data that have been altered in
transmission and left uncorrected are unusable.

3) Timeliness. The system must deliver data in a timely manner. Data delivered late are useless.
In the case of video and audio, timely delivery means delivering data as they are produced, in the
same order that they are produced, and without significant delay. This kind of delivery is called
real-time transmission.
4) Jitter. Jitter refers to the variation in the packet arrival time. It is the uneven delay in the
delivery of audio or video packets. For example, let us assume that video packets are sent every
30 ms. If some of the packets arrive with 30-ms delay and others with 40-ms delay, an uneven
quality in the video is the result.
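The idea can be illustrated with a short sketch (the arrival times below are hypothetical, chosen to match the 30-ms / 40-ms example above):

```python
# Illustrative sketch: measuring jitter from packet arrival times.
arrival_ms = [0, 30, 70, 100, 140]  # packet arrival times in milliseconds

# Inter-arrival gaps between consecutive packets
gaps = [b - a for a, b in zip(arrival_ms, arrival_ms[1:])]
print(gaps)  # [30, 40, 30, 40] -- uneven gaps mean jitter

# A simple jitter metric: spread between the largest and smallest gap
jitter = max(gaps) - min(gaps)
print(jitter)  # 10 ms of variation
```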

Components of Data Communication

1) Message. The message is the information (data) to be communicated. Popular forms of
information include text, numbers, pictures, audio, and video.
2) Sender. The sender is the device that sends the data message. It can be a computer,
workstation, telephone handset, video camera, and so on.
3) Receiver. The receiver is the device that receives the message. It can be a computer,
workstation, telephone handset, television, and so on.
4) Transmission medium. The transmission medium is the physical path by which a message
travels from sender to receiver. Some examples of transmission media include twisted-pair wire,
coaxial cable, fiber-optic cable, and radio waves.
5) Protocol. A protocol is a set of rules that govern data communications. It represents an
agreement between the communicating devices. Without a protocol, two devices may be
connected but not communicating, just as a person speaking French cannot be understood by a
person who speaks only Japanese.

~Data Representation
Information today comes in different forms such as text, numbers, images, audio, and video.
Text: In data communications, text is represented as a bit pattern, a sequence of bits (0s or 1s).
Different sets of bit patterns have been designed to represent text symbols. Each set is called a
code, and the process of representing symbols is called coding. Today, the prevalent coding
system is called Unicode, which uses 32 bits to represent a symbol or character used in any
language in the world. The American Standard Code for Information Interchange (ASCII),
developed some decades ago in the United States, now constitutes the first 128 characters in
Unicode and is also referred to as Basic Latin. Appendix A includes part of the Unicode.

Numbers: Numbers are also represented by bit patterns. However, a code such as ASCII is not
used to represent numbers; the number is directly converted to a binary number to simplify
mathematical operations. Appendix B discusses several different numbering systems.

Images: Images are also represented by bit patterns. In its simplest form, an image is composed
of a matrix of pixels (picture elements), where each pixel is a small dot. The size of the pixel
depends on the resolution. For example, an image can be divided into 1000 pixels or 10,000
pixels. In the second case, there is a better representation of the image (better resolution), but
more memory is needed to store the image. After an image is divided into pixels, each pixel is
assigned a bit pattern. The size and the value of the pattern depend on the image. For an image
made of only black-and-white dots (e.g., a chessboard), a 1-bit pattern is enough to represent a
pixel. If an image is not made of pure white and pure black pixels, you can increase the size of
the bit pattern to include gray scale. For example, to show four levels of gray scale, you can use
2-bit patterns. A black pixel can be represented by 00, a dark gray pixel by 01, a light gray pixel
by 10, and a white pixel by 11. There are several methods to represent color images. One method
is called RGB, so called because each color is made of a combination of three primary colors:
red, green, and blue. The intensity of each color is measured, and a bit pattern is assigned to it.
Another method is called YCM, in which a color is made of a combination of three other primary
colors: yellow, cyan, and magenta.
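As a small illustration of the bit patterns described above (a sketch with hypothetical pixel values):

```python
# Sketch: representing pixels as bit patterns.
# 2-bit grayscale: 00=black, 01=dark gray, 10=light gray, 11=white.
GRAY = {"black": "00", "dark": "01", "light": "10", "white": "11"}

row = ["black", "dark", "light", "white"]
bits = "".join(GRAY[p] for p in row)
print(bits)  # "00011011" -- 4 pixels encoded in 8 bits

# RGB: each pixel stores one intensity per primary color (here 8 bits each)
red, green, blue = 255, 128, 0          # a hypothetical orange-ish pixel
rgb_bits = f"{red:08b}{green:08b}{blue:08b}"
print(len(rgb_bits))  # 24 bits per pixel
```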

Audio: Audio refers to the recording or broadcasting of sound or music. Audio is by nature
different from text, numbers, or images. It is continuous, not discrete. Even when we use a
microphone to change voice or music to an electric signal, we create a continuous signal. In
Chapters 4 and 5, we learn how to change sound or music to a digital or an analog signal.
Video: Video refers to the recording or broadcasting of a picture or movie. Video can either be
produced as a continuous entity (e.g., by a TV camera), or it can be a combination of images,
each a discrete entity, arranged to convey the idea of motion. Again we can change video to a
digital or an analog signal.

~Data Flow:
Transmission mode (also known as communication mode) refers to the direction in which data
is transferred between two devices. Buses and networks are designed to allow communication to
occur between individual devices that are interconnected.
There are three types of transmission mode, explained below.


1. Simplex Mode –
In Simplex mode, the communication is unidirectional, as on a one-way street. Only one of the
two devices on a link can transmit, the other can only receive. The simplex mode can use the
entire capacity of the channel to send data in one direction.
Example: Keyboard and traditional monitors. The keyboard can only provide input; the
monitor can only display the output.

Advantages:
● Simplex mode is the easiest and most reliable mode of communication.
● It is the most cost-effective mode, as it only requires one communication channel.
● There is no need for coordination between the transmitting and receiving devices,
which simplifies the communication process.
● Simplex mode is particularly useful in situations where feedback or response is not
required, such as broadcasting or surveillance.
Disadvantages:
● Only one-way communication is possible.
● There is no way to verify if the transmitted data has been received correctly.
● Simplex mode is not suitable for applications that require bidirectional
communication.
2. Half-Duplex Mode –
In half-duplex mode, each station can both transmit and receive, but not at the same time. When
one device is sending, the other can only receive, and vice versa. The half-duplex mode is used
in cases where there is no need for communication in both directions at the same time. The entire
capacity of the channel can be utilized for each direction.
Example: Walkie-talkie in which message is sent one at a time and messages are sent in both
directions.
Channel Capacity = Bandwidth * Propagation Delay

Advantages:
● Half-duplex mode allows for bidirectional communication, which is useful in
situations where devices need to send and receive data.
● It is a more efficient mode of communication than simplex mode, as the channel can
be used for both transmission and reception.
● Half-duplex mode is less expensive than full-duplex mode, as it only requires one
communication channel.
Disadvantages:
● Half-duplex mode is less reliable than Full-Duplex mode, as both devices cannot
transmit at the same time.
● There is a delay between transmission and reception, which can cause problems in
some applications.
● There is a need for coordination between the transmitting and receiving devices,
which can complicate the communication process.
3. Full-Duplex Mode –
In full-duplex mode, both stations can transmit and receive simultaneously. In full-duplex mode,
signals going in one direction share the capacity of the link with signals going in the other
direction. This sharing can occur in two ways:
● Either the link must contain two physically separate transmission paths, one for
sending and the other for receiving.
● Or the capacity is divided between signals traveling in both directions.

Full-duplex mode is used when communication in both directions is required all the time. The
capacity of the channel, however, must be divided between the two directions.
Example: Telephone Network in which there is communication between two persons by a
telephone line, through which both can talk and listen at the same time.
Channel Capacity = 2 * Bandwidth * Propagation Delay
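Both capacity expressions quoted in this section can be evaluated with a quick sketch (the bandwidth and delay figures below are hypothetical):

```python
# Sketch of the capacity expressions quoted in the text, with hypothetical
# numbers: a 10 kHz bandwidth link and a 2 ms propagation delay.
bandwidth_hz = 10_000
prop_delay_s = 0.002

half_duplex = bandwidth_hz * prop_delay_s      # Bandwidth * Propagation Delay
full_duplex = 2 * bandwidth_hz * prop_delay_s  # 2 * Bandwidth * Propagation Delay
print(half_duplex, full_duplex)  # full-duplex carries twice as much
```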

Advantages:
● Full-duplex mode allows for simultaneous bidirectional communication, which is
ideal for real-time applications such as video conferencing or online gaming.
● It is the most efficient mode of communication, as both devices can transmit and
receive data simultaneously.
● Full-duplex mode provides a high level of reliability and accuracy, as each device can
immediately acknowledge and respond to the data it receives.
Disadvantages:
● Full-duplex mode is the most expensive mode, as it requires two communication
channels.
● It is more complex than simplex and half-duplex modes, as it requires two physically
separate transmission paths or a division of channel capacity.
● Full-duplex mode may not be suitable for all applications, as it requires a high level
of bandwidth and may not be necessary for some types of communication.
Network Topologies
Network topology refers to the arrangement of different elements like nodes, links, and devices
in a computer network. It defines how these components are connected and interact with each
other. Understanding various types of network topologies helps in designing efficient and robust
networks. Common types include bus, star, ring, mesh, and tree topologies, each with its own
advantages and disadvantages. In this article, we discuss the different types of network
topology and their advantages and disadvantages in detail.

~Types of Network Topology


The arrangement of a network that comprises nodes and connecting lines via sender and receiver
is referred to as Network Topology. The various network topologies are:
● Point to Point Topology
● Mesh Topology
● Star Topology
● Bus Topology
● Ring Topology
● Tree Topology
● Hybrid Topology
Point to Point Topology
Point-to-point topology is a type of topology that works on the functionality of the sender and
receiver. It is the simplest communication between two nodes, in which one is the sender and the
other one is the receiver. Point-to-Point provides high bandwidth.

Point to Point Topology


Mesh Topology
In a mesh topology, every device is connected to another device via a particular channel. In Mesh
Topology, the protocols used are AHCP (Ad Hoc Configuration Protocols), DHCP (Dynamic
Host Configuration Protocol), etc.

Mesh Topology
Figure 1: Every device is connected to another via dedicated channels. These channels are
known as links.
● Suppose N devices are connected to each other in a mesh topology; then the number of
ports required by each device is N-1. In Figure 1, there are 5 devices connected to each
other, hence the number of ports required by each device is 4. The total number of
ports required = N * (N-1).
● Suppose N devices are connected to each other in a mesh topology; then the total
number of dedicated links required to connect them is NC2, i.e., N(N-1)/2. In
Figure 1, there are 5 devices connected to each other, hence the total number of links
required is 5*4/2 = 10.
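The port and link counts above follow directly from the formulas; a minimal sketch:

```python
# Sketch: link and port counts for a full mesh of n devices,
# matching the N(N-1)/2 and N*(N-1) formulas above.
def mesh_links(n):
    return n * (n - 1) // 2   # dedicated links: nC2

def mesh_ports(n):
    return n * (n - 1)        # n devices, each with (n-1) ports

print(mesh_links(5), mesh_ports(5))  # 10 20, as in Figure 1
```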
Advantages of Mesh Topology
● Communication is very fast between the nodes.
● Mesh Topology is robust.
● The fault is diagnosed easily. Data is reliable because data is transferred among the
devices through dedicated channels or links.
● Provides security and privacy.
Disadvantages of Mesh Topology
● Installation and configuration are difficult.
● The cost of cables is high as bulk wiring is required, hence suitable for less number of
devices.
● The cost of maintenance is high.
A common example of mesh topology is the internet backbone, where various internet service
providers are connected to each other via dedicated channels. This topology is also used in
military communication systems and aircraft navigation systems.
Star Topology
In Star Topology, all the devices are connected to a single hub through a cable. This hub is the
central node and all other nodes are connected to it. The hub can be passive in nature, i.e., a
non-intelligent device that simply broadcasts signals, or it can be intelligent, in which case it is
known as an active hub. Active hubs have repeaters in them. Coaxial cable or RJ-45 cable is
used to connect the computers. In Star Topology, popular Ethernet LAN protocols such as
CSMA/CD (Carrier Sense Multiple Access with Collision Detection) are used.

Star Topology
Figure 2: A star topology having four systems connected to a single point of connection i.e. hub.
Advantages of Star Topology
● If N devices are connected to each other in a star topology, then the number of cables
required to connect them is N. So, it is easy to set up.
● Each device requires only 1 port i.e. to connect to the hub, therefore the total number
of ports required is N.
● It is robust. If one link fails, only that link is affected; the rest of the network
continues to operate.
● Fault identification and fault isolation are easy.
● Star topology is cost-effective as it uses inexpensive coaxial cable.
Disadvantages of Star Topology
● If the concentrator (hub) on which the whole topology relies fails, the whole system
will crash down.
● The cost of installation is high.
● Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office where all
computers are connected to a central hub. This topology is also used in wireless networks where
all devices are connected to a wireless access point.
Bus Topology
Bus Topology is a network type in which every computer and network device is connected to a
single cable. It is bi-directional. It is a multi-point connection and a non-robust topology because
if the backbone fails the topology crashes. In Bus Topology, various MAC (Media Access
Control) protocols are followed by LAN ethernet connections like TDMA, Pure Aloha, CDMA,
Slotted Aloha, etc.

Bus Topology
Figure 3: A bus topology with shared backbone cable. The nodes are connected to the channel
via drop lines.
Advantages of Bus Topology
● If N devices are connected to each other in a bus topology, then the number of cables
required to connect them is 1, known as backbone cable, and N drop lines are
required.
● Coaxial or twisted pair cables are mainly used in bus-based networks that support up
to 10 Mbps.
● The cost of the cable is less compared to other topologies, but it is used to build small
networks.
● Bus topology is familiar technology as installation and troubleshooting techniques are
well known.
● CSMA is the most common method for this type of topology.
Disadvantages of Bus Topology
● Although a bus topology is quite simple, it still requires a lot of cabling.
● If the common cable fails, then the whole system will crash down.
● If the network traffic is heavy, it increases collisions in the network. To avoid this,
various protocols are used in the MAC layer known as Pure Aloha, Slotted Aloha,
CSMA/CD, etc.
● Adding new devices to the network would slow down networks.
● Security is very low.
A common example of bus topology is the Ethernet LAN, where all devices are connected to a
single coaxial cable or twisted pair cable. This topology is also used in cable television networks.
Ring Topology
A ring topology forms a ring by connecting each device to exactly two neighboring devices. A
number of repeaters are used in ring topologies with a large number of nodes, because if
someone wants to send some data to the last node in a ring topology with 100 nodes, the
data has to pass through 99 nodes to reach the 100th node. Hence, to prevent data loss,
repeaters are used in the network.
The data flows in one direction, i.e., it is unidirectional, but it can be made bidirectional by
having 2 connections between each network node; this is called Dual Ring Topology. In ring
topology, the token passing protocol is used by the workstations to transmit the data.

Ring Topology
Figure 4: A ring topology comprising 4 stations connected with each other, forming a ring.
The most common access method of ring topology is token passing.
● Token passing: It is a network access method in which a token is passed from one
node to another node.
● Token: It is a frame that circulates around the network.
Operations of Ring Topology
1. One station is known as a monitor station which takes all the responsibility for
performing the operations.
2. To transmit the data, the station has to hold the token. After the transmission is done,
the token is to be released for other stations to use.
3. When no station is transmitting the data, then the token will circulate in the ring.
4. There are two types of token release techniques: Early token release releases the
token just after transmitting the data and Delayed token release releases the token
after the acknowledgment is received from the receiver.
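The token-passing operations above can be sketched as a toy simulation (station names and queued senders are hypothetical; a real Token Ring also involves monitor stations, frames, and timers):

```python
# Minimal sketch of token passing in a ring.
# The token circulates; only the station holding it may transmit.
stations = ["A", "B", "C", "D"]   # ring order
wants_to_send = {"C"}             # stations with data queued (hypothetical)

token_at = 0
transmitted = []
for _ in range(len(stations)):    # one full circulation of the token
    holder = stations[token_at]
    if holder in wants_to_send:   # holding the token permits transmission
        transmitted.append(holder)
    token_at = (token_at + 1) % len(stations)  # release token to next station

print(transmitted)  # only station C transmitted
```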
Advantages of Ring Topology
● The data transmission is high-speed.
● The possibility of collision is minimum in this type of topology.
● Cheap to install and expand.
● It is less costly than a star topology.
Disadvantages of Ring Topology
● The failure of a single node in the network can cause the entire network to fail.
● Troubleshooting is difficult in this topology.
● The addition of stations in between or the removal of stations can disturb the whole
topology.
● Less secure.
Tree Topology
This topology is the variation of the Star topology. This topology has a hierarchical flow of data.
In Tree Topology, protocols like DHCP and SAC (Standard Automatic Configuration) are used.

Tree Topology
Figure 5: In this topology, the various secondary hubs are connected to the central hub, which
contains the repeater. Data flows from top to bottom, i.e., from the central hub to the secondary
hubs and then to the devices, or from bottom to top, i.e., from the devices to the secondary hub
and then to the central hub. It is a multi-point connection and a non-robust topology because if
the backbone fails the topology crashes.
Advantages of Tree Topology
● It allows more devices to be attached to a single central hub, thus decreasing the
distance traveled by the signal to reach the devices.
● It allows parts of the network to be isolated and prioritized.
● We can add new devices to the existing network.
● Error detection and error correction are very easy in a tree topology.
Disadvantages of Tree Topology
● If the central hub fails, the entire system fails.
● The cost is high because of the cabling.
● If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At the top of the
tree is the CEO, who is connected to the different departments or divisions (child nodes) of the
company. Each department has its own hierarchy, with managers overseeing different teams
(grandchild nodes). The team members (leaf nodes) are at the bottom of the hierarchy, connected
to their respective managers and departments.
Hybrid Topology
Hybrid topology is a combination of the various types of topologies studied above. It is used
when the nodes are free to take any form: the network can consist of individual topologies such
as Ring or Star, or a combination of several of the types seen above. Each individual topology
uses the protocols discussed earlier.
Hybrid Topology
The figure above shows the structure of a hybrid topology. As seen, it contains a combination
of different types of networks.
Advantages of Hybrid Topology
● This topology is very flexible.
● The size of the network can be easily expanded by adding new devices.
Disadvantages of Hybrid Topology
● It is challenging to design the architecture of the Hybrid Network.
● Hubs used in this topology are very expensive.
● The infrastructure cost is very high as a hybrid network requires a lot of cabling and
network devices.
A common example of a hybrid topology is a university campus network. The network may have
a backbone of a star topology, with each building connected to the backbone through a switch or
router. Within each building, there may be a bus or ring topology connecting the different rooms
and offices. The wireless access points also create a mesh topology for wireless devices. This
hybrid topology allows for efficient communication between different buildings while providing
flexibility and redundancy within each building.

~Protocols
In computer networks, communication occurs between entities in different systems. An entity is
anything capable of sending or receiving information. However, two entities cannot simply send
bit streams to each other and expect to be understood. For communication to occur, the entities
must agree on a protocol. A protocol is a set of rules that govern data communications. A
protocol defines what is communicated, how it is communicated, and when it is communicated.
The key elements of a protocol are syntax, semantics, and timing.
o Syntax. The term syntax refers to the structure or format of the data, meaning the order in
which they are presented. For example, a simple protocol might expect the first 8 bits of data to
be the address of the sender, the second 8 bits to be the address of the receiver, and the rest of the
stream to be the message itself.
o Semantics. The word semantics refers to the meaning of each section of bits. How is a
particular pattern to be interpreted, and what action is to be taken based on that interpretation?
For example, does an address identify the route to be taken or the final destination of the
message?
o Timing. The term timing refers to two characteristics: when data should be sent and how fast
they can be sent. For example, if a sender produces data at 100 Mbps but the receiver can process
data at only 1 Mbps, the transmission will overload the receiver and some data will be lost.
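The toy syntax described earlier (first 8 bits sender address, next 8 bits receiver address, the rest the message) can be parsed with a short sketch (the frame contents are hypothetical):

```python
# Sketch of the toy protocol syntax: 8-bit sender address, 8-bit receiver
# address, then the message itself (here the ASCII text "Hi").
frame = "00000001" + "00000010" + "0100100001101001"  # hypothetical frame

sender   = int(frame[:8], 2)     # first 8 bits  -> address 1
receiver = int(frame[8:16], 2)   # next 8 bits   -> address 2
message  = bytes(int(frame[i:i+8], 2) for i in range(16, len(frame), 8))
print(sender, receiver, message)  # 1 2 b'Hi'
```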
Types of Network Protocols
In most cases, communication across a network like the Internet uses the OSI model. The OSI
model has a total of seven layers. Secured connections, network management, and network
communication are the three main tasks that the network protocol performs. The purpose of
protocols is to link different devices.
The protocols can be broadly classified into three major categories:
● Network Communication
● Network Management
● Network Security
1. Network Communication
Communication protocols are really important for the functioning of a network. They are so
crucial that it is not possible to have computer networks without them. These protocols formally
set out the rules and formats through which data is transferred. These protocols handle syntax,
semantics, error detection, synchronization, and authentication. Some common network
communication protocols are described below:
Hypertext Transfer Protocol(HTTP)
It is a layer 7 protocol that is designed for transferring hypertext between two or more systems.
HTTP works on a client-server model; most of the data sharing over the web is done using
HTTP.
Transmission Control Protocol(TCP)
TCP provides reliable stream delivery by using sequenced acknowledgments. It is a
connection-oriented protocol, i.e., it establishes a connection between applications before sending
any data. It is used for communicating over a network. It has many applications such as email,
FTP, streaming media, etc.
User Datagram Protocol(UDP)
It is a connectionless protocol that provides a basic but unreliable message service. It adds no flow
control, reliability, or error-recovery functions. UDP is useful in cases where reliability is not
required. It is used when we want faster transmission, for multicasting and broadcasting
connections, etc.
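UDP's fire-and-forget delivery can be sketched with a single-process loopback demo (no handshake, no acknowledgments; the payload is arbitrary and the OS picks the port):

```python
import socket

# Sketch: UDP's connectionless, best-effort delivery over the loopback
# interface, all in one process for illustration.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # OS picks a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)          # fire-and-forget datagram: no
                                       # connection setup, no delivery ACK
data, _ = receiver.recvfrom(1024)
print(data)  # b'hello'
sender.close()
receiver.close()
```

Note the absence of any `connect`/`accept` handshake, in contrast with TCP.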
Border Gateway Protocol(BGP)
BGP is a routing protocol that controls how packets are routed between autonomous systems (an
autonomous system is one or more networks run by a single organization that connects to
different networks). It connects the endpoints of a LAN with other LANs, and it also connects
endpoints in different LANs to one another.
Address Resolution Protocol(ARP)
ARP is a protocol that helps in mapping logical addresses to the physical addresses recognized
in a local network. To map and maintain a correlation between these logical and physical
addresses, a table known as the ARP cache is used.
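The ARP cache can be sketched as a simple lookup table (the addresses below are hypothetical placeholders; a real stack broadcasts an ARP request when the cache misses):

```python
# Sketch of an ARP cache: a table correlating logical (IP) addresses
# with physical (MAC) addresses on the local network.
arp_cache = {}  # IP address -> MAC address

def resolve(ip):
    if ip not in arp_cache:
        # Placeholder for "broadcast an ARP request and record the reply"
        arp_cache[ip] = "aa:bb:cc:dd:ee:ff"
    return arp_cache[ip]

mac = resolve("192.168.1.10")       # first lookup: cache miss, then cached
print(mac)
print("192.168.1.10" in arp_cache)  # subsequent lookups answered from cache
```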
Internet Protocol(IP)
It is a protocol through which data is sent from one host to another over the internet. It is used for
addressing and routing data packets so that they can reach their destination.
Dynamic Host Configuration Protocol(DHCP)
It is a protocol for network management used to automate the process of configuring devices on
IP networks. A DHCP server automatically assigns an IP address and various other configuration
parameters to devices on a network so they can communicate with other IP networks. It also
allows devices to use various services such as NTP, DNS, or any other protocol based on TCP or
UDP.
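The leasing idea behind DHCP can be sketched with a tiny address pool (the pool range and client MAC addresses are hypothetical; real DHCP adds lease times, renewals, and many more options):

```python
# Sketch of the DHCP leasing idea: a server hands out unique addresses
# from a pool, and a returning client keeps its existing lease.
pool = [f"192.168.1.{n}" for n in range(100, 103)]  # tiny address pool
leases = {}                                          # client MAC -> leased IP

def lease(mac):
    if mac not in leases:
        leases[mac] = pool.pop(0)   # hand out the next free address
    return leases[mac]

print(lease("aa:aa:aa:aa:aa:01"))  # first client gets 192.168.1.100
print(lease("aa:aa:aa:aa:aa:02"))  # second client gets 192.168.1.101
print(lease("aa:aa:aa:aa:aa:01"))  # same client, same lease again
```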
2. Network Management
These protocols assist in describing the procedures and policies that are used in monitoring,
maintaining, and managing the computer network. These protocols also help in communicating
these requirements across the network to ensure stable communication. Network management
protocols can also be used for troubleshooting connections between a host and a client.
Internet Control Message Protocol(ICMP)
It is a layer 3 protocol that is used by network devices to forward operational information and
error messages. ICMP is used for reporting congestions, network errors, diagnostic purposes, and
timeouts.
Simple Network Management Protocol(SNMP)
It is a layer 7 protocol that is used for managing nodes on an IP network. There are three main
components in the SNMP protocol i.e., SNMP agent, SNMP manager, and managed device.
SNMP agent has the local knowledge of management details, it translates those details into a
form that is compatible with the SNMP manager. The manager presents data acquired from
SNMP agents, helping to monitor network faults and network performance and to troubleshoot
them.
Gopher
It is a type of file retrieval protocol that provides downloadable files with some description for
easy management, retrieval, and searching of files. All the files are arranged on a remote
computer in a hierarchical manner. Gopher is an old protocol and is not much used nowadays.
File Transfer Protocol(FTP)
FTP is a client/server protocol used for moving files to or from a host computer. It allows
users to download files, programs, web pages, and other things that are available on other
servers.
Post Office Protocol(POP3)
It is a protocol that a local mail client uses to get email messages from a remote email server
over a TCP/IP connection. Email servers hosted by ISPs also use the POP3 protocol to hold and
receive emails intended for their users. Eventually, these users will use email client software to
look at their mailbox on the remote server and to download their emails. After the email client
downloads the emails, they are generally deleted from the servers.
Telnet
It is a protocol that allows a user to connect to a remote computer and use it, i.e., it is designed
for remote connectivity. Telnet creates a connection between a host machine and a remote
endpoint to enable a remote session.
3. Network Security
These protocols secure the data in passage over a network. These protocols also determine how
the network secures data from any unauthorized attempts to extract or review data. These
protocols make sure that no unauthorized devices, users, or services can access the network data.
Primarily, these protocols depend on encryption to secure data.
Secure Socket Layer(SSL)
It is a network security protocol mainly used for protecting sensitive data and securing internet
connections. SSL allows both server-to-server and client-to-server communication. All the data
transferred through SSL is encrypted thus stopping any unauthorized person from accessing it.
Hypertext Transfer Protocol Secure (HTTPS)
It is the secured version of HTTP. This protocol ensures secure communication between two
computers, where one sends the request through the browser and the other fetches the data from
the web server.
Transport Layer Security(TLS)
It is a security protocol designed for data security and privacy over the internet, its functionality
is encryption, checking the integrity of data i.e., whether it has been tampered with or not, and
authentication. It is generally used for encrypted communication between servers and web apps,
like a web browser loading a website, it can also be used for encryption of messages, emails, and
VoIP.
Some Other Protocols
Internet Message Access Protocol (IMAP)
● IMAP is used to retrieve messages from the mail server. Using IMAP, a mail user
can view and manage mails on his system.
Session Initiation Protocol (SIP)
● SIP is used in video, voice, and messaging applications. This protocol is used for
initiating, managing, and terminating the session between two users while they are
communicating.
Real-Time Transport Protocol (RTP)
● This protocol is used to forward audio and video over an IP network. It is used
with the SIP protocol to send audio and video in real time.
Route Access Protocol (RAP)
● RAP is used in network management. It helps the user access the nearest router
for communication. RAP is less efficient as compared to SNMP.
Point To Point Tunnelling Protocol (PPTP)
● It is used to implement a VPN (Virtual Private Network). PPTP encapsulates PPP
frames in IP datagrams for transmission through an IP-based network.
Trivial File Transfer Protocol (TFTP)
● TFTP is a simplified version of FTP. TFTP is also used to transfer files over the internet.

~OSI Model
The OSI model, created in 1984 by ISO, is a reference framework that explains the process of
transmitting data between computers. It is divided into seven layers that work together to carry
out specialised network functions, allowing for a more systematic approach to networking.

OSI Model
Data Flow In OSI Model
When we transfer information from one device to another, it travels through the 7 layers of the
OSI model. Data first travels down through the 7 layers at the sender’s end and then climbs back
up through the 7 layers at the receiver’s end.
Data flows through the OSI model in a step-by-step process:
● Application Layer: Applications create the data.
● Presentation Layer: Data is formatted and encrypted.
● Session Layer: Connections are established and managed.
● Transport Layer: Data is broken into segments for reliable delivery.
● Network Layer: Segments are packaged into packets and routed.
● Data Link Layer: Packets are framed and sent to the next device.
● Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination correctly, and
these steps are reversed upon arrival.

Let’s look at it with an Example:


Luffy sends an e-mail to his friend Zoro.
Step 1: Luffy interacts with e-mail application like Gmail, outlook, etc. Writes his email to
send. (This happens in Layer 7: Application layer)
Step 2: Mail application prepares for data transmission like encrypting data and formatting it for
transmission. (This happens in Layer 6: Presentation Layer)
Step 3: There is a connection established between the sender and receiver on the internet. (This
happens in Layer 5: Session Layer)
Step 4: Email data is broken into smaller segments. It adds sequence number and error-checking
information to maintain the reliability of the information. (This happens in Layer 4: Transport
Layer)
Step 5: Addressing of packets is done in order to find the best route for transfer. (This happens in
Layer 3: Network Layer)
Step 6: Data packets are encapsulated into frames, then MAC address is added for local devices
and then it checks for error using error detection. (This happens in Layer 2: Data Link Layer)
Step 7: Lastly, frames are transmitted in the form of electrical/optical signals over a physical
network medium like an Ethernet cable or WiFi. (This happens in Layer 1: Physical Layer)
After the email reaches the receiver i.e. Zoro, the process will reverse and decrypt the e-mail
content. At last, the email will be shown on Zoro’s email client.
What Are The 7 Layers of The OSI Model?
The OSI model consists of seven abstraction layers, listed here from bottom to top:
1. Physical Layer
2. Data Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
Physical Layer – Layer 1
The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual
physical connection between the devices. The physical layer contains information in the form of
bits. It is responsible for transmitting individual bits from one node to the next. When receiving
data, this layer will get the signal received and convert it into 0s and 1s and send them to the
Data Link layer, which will put the frame back together.

Functions of the Physical Layer


● Bit Synchronization: The physical layer provides the synchronization of the bits by
providing a clock. This clock controls both sender and receiver thus providing
synchronization at the bit level.
● Bit Rate Control: The Physical layer also defines the transmission rate i.e. the
number of bits sent per second.
● Physical Topologies: Physical layer specifies how the different devices/nodes are
arranged in a network, i.e., bus, star, or mesh topology.
● Transmission Mode: Physical layer also defines how the data flows between the two
connected devices. The various transmission modes possible are Simplex, half-duplex
and full-duplex.
Note:
● Hub, Repeater, Modem, and Cables are Physical Layer devices.
● Network Layer, Data Link Layer, and Physical Layer are also known as Lower Layers
or Hardware Layers.
Data Link Layer (DLL) – Layer 2
The data link layer is responsible for the node-to-node delivery of the message. The main
function of this layer is to make sure data transfer is error-free from one node to another, over the
physical layer. When a packet arrives in a network, it is the responsibility of the DLL to transmit
it to the Host using its MAC address.
The Data Link Layer is divided into two sublayers:
● Logical Link Control (LLC)
● Media Access Control (MAC)
The packet received from the Network layer is further divided into frames depending on the
frame size of the NIC(Network Interface Card). DLL also encapsulates Sender and Receiver’s
MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution Protocol)
request onto the wire asking “Who has that IP address?” and the destination host will reply with
its MAC address.
Functions of the Data Link Layer
● Framing: Framing is a function of the data link layer. It provides a way for a sender
to transmit a set of bits that are meaningful to the receiver. This can be accomplished
by attaching special bit patterns to the beginning and end of the frame.
● Physical Addressing: After creating frames, the Data link layer adds physical
addresses (MAC addresses) of the sender and/or receiver in the header of each
frame.
● Error Control: The data link layer provides the mechanism of error control in which
it detects and retransmits damaged or lost frames.
● Flow Control: The data rate must be constant on both sides else the data may get
corrupted thus, flow control coordinates the amount of data that can be sent before
receiving an acknowledgment.
● Access Control: When a single communication channel is shared by multiple
devices, the MAC sub-layer of the data link layer helps to determine which device
has control over the channel at a given time.
Note:
● Packet in the Data Link layer is referred to as Frame.
● Data Link layer is handled by the NIC (Network Interface Card) and device drivers of
host machines.
● Switch & Bridge are Data Link Layer devices.
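The error-control function described above can be illustrated with a short Python sketch: the sender appends a CRC-32 trailer to the payload (CRC-32 is what the Ethernet frame check sequence uses, though the framing below is deliberately simplified) and the receiver recomputes it to detect corruption.

```python
import zlib

# Minimal sketch of data-link error detection: append a CRC-32 trailer to the
# frame payload on send, and verify it on receive. Real Ethernet frames also
# carry addresses and delimiters; only the check is shown here.
def make_frame(payload: bytes) -> bytes:
    fcs = zlib.crc32(payload).to_bytes(4, "big")   # frame check sequence
    return payload + fcs

def check_frame(frame: bytes) -> bool:
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

frame = make_frame(b"node-to-node data")
print(check_frame(frame))            # True: frame intact
corrupted = b"X" + frame[1:]         # corrupt the first byte in transit
print(check_frame(corrupted))        # False: error detected
```

On a mismatch a real data link layer would discard the frame and, with error control, trigger a retransmission.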
Network Layer – Layer 3
The network layer works for the transmission of data from one host to the other located in
different networks. It also takes care of packet routing i.e. selection of the shortest path to
transmit the packet, from the number of routes available. The sender & receiver’s IP addresses
are placed in the header by the network layer.
Functions of the Network Layer
● Routing: The network layer protocols determine which route is suitable from source
to destination. This function of the network layer is known as routing.
● Logical Addressing: To identify each device inter-network uniquely, the network
layer defines an addressing scheme. The sender & receiver’s IP addresses are placed
in the header by the network layer. Such an address distinguishes each device
uniquely and universally.
Note:
● Segment in the Network layer is referred to as Packet.
● Network layer is implemented by networking devices such as routers.
Transport Layer – Layer 4
The transport layer provides services to the application layer and takes services from the network
layer. The data in the transport layer is referred to as Segments. It is responsible for the
end-to-end delivery of the complete message. The transport layer also provides the
acknowledgment of the successful data transmission and re-transmits the data if an error is
found.
At the sender’s side: The transport layer receives the formatted data from the upper layers,
performs Segmentation, and also implements Flow and error control to ensure proper data
transmission. It also adds Source and Destination port numbers in its header and forwards the
segmented data to the Network Layer.
Note: The sender needs to know the port number associated with the receiver’s application.
Generally, this destination port number is configured, either by default or manually. For
example, when a web application requests a web server, it typically uses port number 80,
because this is the default port assigned to web applications. Many applications have default
ports assigned.
At the receiver’s side: Transport Layer reads the port number from its header and forwards the
Data which it has received to the respective application. It also performs sequencing and
reassembling of the segmented data.
Functions of the Transport Layer
● Segmentation and Reassembly: This layer accepts the message from the (session)
layer, and breaks the message into smaller units. Each of the segments produced has a
header associated with it. The transport layer at the destination station reassembles
the message.
● Service Point Addressing: To deliver the message to the correct process, the
transport layer header includes a type of address called service point address or port
address. Thus by specifying this address, the transport layer makes sure that the
message is delivered to the correct process.
Services Provided by Transport Layer
● Connection-Oriented Service
● Connectionless Service
1. Connection-Oriented Service: It is a three-phase process that includes:
● Connection Establishment
● Data Transfer
● Termination/disconnection
In this type of transmission, the receiving device sends an acknowledgment, back to the source
after a packet or group of packets is received. This type of transmission is reliable and secure.
2. Connectionless service: It is a one-phase process and includes Data Transfer. In this type of
transmission, the receiver does not acknowledge receipt of a packet. This approach allows for
much faster communication between devices. Connection-oriented service is more reliable than
connectionless Service.
Note:
● Data in the Transport Layer is called Segments.
● Transport layer is operated by the Operating System. It is a part of the OS and
communicates with the Application Layer by making system calls.
● The transport layer is called the heart of the OSI model.
● Device or Protocol Use: TCP, UDP, NetBIOS, PPTP
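Segmentation and reassembly can be sketched as follows, with a deliberately tiny segment size so the split is visible; the field names and sizes are illustrative, not TCP's actual segment format.

```python
# Sketch of transport-layer segmentation and reassembly: split the message
# into fixed-size segments, tag each with a sequence number, and let the
# receiver sort and rejoin them even if they arrive out of order.
MSS = 4  # maximum segment size, deliberately tiny for the demo

def segment(message: bytes):
    return [(seq, message[i:i + MSS])
            for seq, i in enumerate(range(0, len(message), MSS))]

def reassemble(segments):
    # Segments may arrive out of order; sort by sequence number first.
    return b"".join(data for _, data in sorted(segments))

segs = segment(b"end-to-end delivery")
segs.reverse()                       # simulate out-of-order arrival
print(reassemble(segs))              # b'end-to-end delivery'
```

The sequence numbers are what let the receiving transport layer restore the original byte order, as described for the receiver's side above.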
Session Layer – Layer 5
This layer is responsible for the establishment of connection, maintenance of sessions, and
authentication, and also ensures security.
Functions of the Session Layer
● Session Establishment, Maintenance, and Termination: The layer allows the two
processes to establish, use, and terminate a connection.
● Synchronization: This layer allows a process to add checkpoints that are considered
synchronization points in the data. These synchronization points help to identify the
error so that the data is re-synchronized properly, and ends of the messages are not cut
prematurely and data loss is avoided.
● Dialog Controller: The session layer allows two systems to start communication
with each other in half-duplex or full-duplex.
Note:
● The top three layers (Session, Presentation, and Application) are integrated as a
single layer in the TCP/IP model, called the “Application Layer”.
● Implementation of these 3 layers is done by the network application itself. These are
also known as Upper Layers or Software Layers.
● Device or Protocol Use : NetBIOS, PPTP.
Example
Let us consider a scenario where a user wants to send a message through some Messenger
application running in their browser. The “Messenger” here acts as the application layer which
provides the user with an interface to create the data. This message or so-called Data is
compressed, optionally encrypted (if the data is sensitive), and converted into bits (0’s and 1’s)
so that it can be transmitted.

Communication in Session Layer


Presentation Layer – Layer 6
The presentation layer is also called the Translation layer. The data from the application layer is
extracted here and manipulated as per the required format to transmit over the network.
Functions of the Presentation Layer
● Translation: For example, ASCII to EBCDIC.
● Encryption/ Decryption: Data encryption translates the data into another form or
code. The encrypted data is known as the ciphertext and the decrypted data is known
as plain text. A key value is used for encrypting as well as decrypting data.
● Compression: Reduces the number of bits that need to be transmitted on the network.
Note: Device or Protocol Use: JPEG, MPEG, GIF.
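Two of these functions can be demonstrated with the Python standard library: translation (ASCII to EBCDIC, via Python's built-in cp500 EBCDIC codec) and compression (zlib). This is only an illustration of the layer's role, not a real presentation-layer protocol.

```python
import zlib

# Presentation-layer jobs sketched with standard-library tools:
# 1) translation between character encodings, 2) compression.
text = "HELLO"

ebcdic = text.encode("cp500")        # translate: ASCII text -> EBCDIC bytes
print(ebcdic.hex())                  # c8c5d3d3d6 (EBCDIC codes for HELLO)
print(ebcdic.decode("cp500"))        # HELLO (translated back)

payload = b"network " * 100          # 800 bytes of highly redundant data
compressed = zlib.compress(payload)
print(len(compressed) < len(payload))        # True: fewer bits on the wire
print(zlib.decompress(compressed) == payload)  # True: lossless round trip
```

Encryption would slot into the same position: transform on the way down, inverse-transform on the way up.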
Application Layer – Layer 7
At the very top of the OSI Reference Model stack of layers, we find the Application layer which
is implemented by the network applications. These applications produce the data to be
transferred over the network. This layer also serves as a window for the application services to
access the network and for displaying the received information to the user.
Example: Application – Browsers, Skype Messenger, etc.
Note: The application Layer is also called Desktop Layer.
Device or Protocol Use : SMTP.
Functions of the Application Layer
The main functions of the application layer are given below.
● Network Virtual Terminal(NVT): It allows a user to log on to a remote host.
● File Transfer Access and Management(FTAM): This application allows a user to
access files in a remote host, retrieve files in a remote host, and manage or
control files from a remote computer.
● Mail Services: Provide email service.
● Directory Services: This application provides distributed database sources
and access for global information about various objects and services.
Note: The OSI model acts as a reference model and is not implemented on the Internet because
of its late invention. The current model being used is the TCP/IP model.
OSI Model – Layer Architecture

7. Application Layer: Helps in identifying the client and synchronizing communication. Data Unit: Message. Device or Protocol: SMTP.
6. Presentation Layer: Data from the application layer is extracted and manipulated in the required format for transmission. Data Unit: Message. Device or Protocol: JPEG, MPEG, GIF.
5. Session Layer: Establishes connection and maintenance; ensures authentication and security. Data Unit: Message (or encrypted message). Device or Protocol: Gateway.
4. Transport Layer: Takes service from the Network Layer and provides it to the Application Layer. Data Unit: Segment. Device or Protocol: Firewall.
3. Network Layer: Transmission of data from one host to another, located in different networks. Data Unit: Packet. Device or Protocol: Router.
2. Data Link Layer: Node-to-node delivery of the message. Data Unit: Frame. Device or Protocol: Switch, Bridge.
1. Physical Layer: Establishing physical connections between devices. Data Unit: Bits. Device or Protocol: Hub, Repeater, Modem, Cables.

~TCP/IP
The TCP/IP model is a fundamental framework for computer networking. It stands for
Transmission Control Protocol/Internet Protocol, which are the core protocols of the Internet.
This model defines how data is transmitted over networks, ensuring reliable communication
between devices. It consists of four layers: the Link Layer, the Internet Layer, the Transport
Layer, and the Application Layer. Each layer has specific functions that help manage different
aspects of network communication, making it essential for understanding and working with
modern networks.
TCP/IP was designed and developed by the Department of Defense (DoD) in the 1960s and is
based on standard protocols. The TCP/IP model is a concise version of the OSI model. It
contains four layers, unlike the seven layers in the OSI model. In this article, we are going to
discuss the TCP/IP model in detail.
What Does TCP/IP Do?
The main work of TCP/IP is to transfer the data of a computer from one device to another. The
main condition of this process is to make data reliable and accurate so that the receiver will
receive the same information which is sent by the sender. To ensure that, each message reaches
its final destination accurately, the TCP/IP model divides its data into packets and combines
them at the other end, which helps in maintaining the accuracy of the data while transferring
from one end to another end.
Difference Between TCP and IP

● Purpose: TCP ensures reliable, ordered, and error-checked delivery of data between applications. IP provides addressing and routing of packets across networks.
● Type: TCP is connection-oriented; IP is connectionless.
● Function: TCP manages data transmission between devices, ensuring data integrity and order. IP routes packets of data from the source to the destination based on IP addresses.
● Error Handling: TCP includes error checking and recovery mechanisms. IP itself does not handle errors; it relies on upper-layer protocols like TCP.
● Flow Control: TCP includes flow control mechanisms; IP does not.
● Congestion Control: TCP manages network congestion; IP does not.
● Data Segmentation: TCP breaks data into smaller packets and reassembles them at the destination. IP breaks data into packets but does not handle reassembly.
● Header Size: TCP has a larger header, 20–60 bytes. IP has a smaller header, typically 20 bytes.
● Reliability: TCP provides reliable data transfer. IP does not guarantee delivery, reliability, or order.
● Transmission Acknowledgment: TCP acknowledges receipt of data packets; IP does not.

How Does the TCP/IP Model Work?


Whenever we want to send something over the internet using the TCP/IP model, the model
divides the data into packets at the sender’s end, and the same packets are recombined at the
receiver’s end to form the original data; this is done to maintain the accuracy of the data. The
TCP/IP model passes the data down through its four layers in order at the sender’s end and back
up in reverse order at the receiver’s end.
Layers of TCP/IP Model
● Application Layer
● Transport Layer(TCP/UDP)
● Network/Internet Layer(IP)
● Network Access Layer
The diagrammatic comparison of the TCP/IP and OSI model is as follows:

TCP/IP and OSI


1. Network Access Layer
This layer corresponds to the combination of the Data Link Layer and the Physical Layer of the
OSI model. It is responsible for the physical transmission of data between devices on the same
network.
The packet’s network protocol type, in this case, TCP/IP, is identified by the network access layer.
Error prevention and “framing” are also provided by this layer. Point-to-Point Protocol (PPP)
framing and Ethernet IEEE 802.2 framing are two examples of data-link layer protocols.
2. Internet Layer
This layer parallels the functions of OSI’s Network layer. It defines the protocols which are
responsible for the logical transmission of data over the entire network. The main protocols
residing at this layer are as follows:
● IP: IP stands for Internet Protocol and it is responsible for delivering packets from the
source host to the destination host by looking at the IP addresses in the packet
headers. IP has 2 versions: IPv4 and IPv6. IPv4 is the one that most websites are
using currently. But IPv6 is growing as the number of IPv4 addresses is limited in
number when compared to the number of users.
● ICMP: ICMP stands for Internet Control Message Protocol. It is encapsulated within
IP datagrams and is responsible for providing hosts with information about network
problems.
● ARP: ARP stands for Address Resolution Protocol. Its job is to find the hardware
address of a host from a known IP address. ARP has several types: Reverse ARP,
Proxy ARP, Gratuitous ARP, and Inverse ARP.
The Internet Layer is a layer in the Internet Protocol (IP) suite, which is the set of protocols that
define the Internet. The Internet Layer is responsible for routing packets of data from one device
to another across a network. It does this by assigning each device a unique IP address, which is
used to identify the device and determine the route that packets should take to reach it.
Example: Imagine that you are using a computer to send an email to a friend. When you click
“send,” the email is broken down into smaller packets of data, which are then sent to the Internet
Layer for routing. The Internet Layer assigns an IP address to each packet and uses routing tables
to determine the best route for the packet to take to reach its destination. The packet is then
forwarded to the next hop on its route until it reaches its destination. When all of the packets
have been delivered, your friend’s computer can reassemble them into the original email
message.
In this example, the Internet Layer plays a crucial role in delivering the email from your
computer to your friend’s computer. It uses IP addresses and routing tables to determine the best
route for the packets to take, and it ensures that the packets are delivered to the correct
destination. Without the Internet Layer, it would not be possible to send data across the Internet.
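As a concrete illustration of per-packet processing at this layer, the sketch below computes the Internet checksum (RFC 1071) that every IPv4 header carries; the sample header bytes are illustrative values, not from a captured packet.

```python
import struct

# Internet checksum (RFC 1071), as used in the IPv4 header: sum the header as
# 16-bit words, fold the carries back in, then take the one's complement.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:                        # fold carries into low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed.
header = bytes.fromhex("4500003c1c4640004006" + "0000" + "ac100a63ac100a0c")
csum = internet_checksum(header)
patched = header[:10] + csum.to_bytes(2, "big") + header[12:]

# A receiver checksumming the full header (checksum included) must get 0.
print(internet_checksum(patched) == 0)        # True: header verifies
```

This "recompute and expect zero" property is how routers and hosts validate each IPv4 header hop by hop.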
3. Transport Layer
The TCP/IP transport layer protocols exchange data receipt acknowledgments and retransmit
missing packets to ensure that packets arrive in order and without error; this is referred to as
end-to-end communication. Transmission Control Protocol (TCP) and User Datagram Protocol
(UDP) are the transport layer protocols at this level.
● TCP: Applications can interact with one another using TCP as though they were
physically connected by a circuit. TCP transmits data in a way that resembles
character-by-character transmission rather than separate packets. A starting point that
establishes the connection, the whole transmission in byte order, and an ending point
that closes the connection make up this transmission.
● UDP: The datagram delivery service is provided by UDP, the other transport layer
protocol. UDP does not verify connections between receiving and sending hosts.
Applications that transport small amounts of data use UDP rather than TCP because it
eliminates the processes of establishing and validating connections.
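UDP's connectionless behaviour can be seen directly with Python sockets on the loopback interface: no connection is established or verified, a datagram is simply addressed to the receiver's IP and port. The address and message below are arbitrary demo values.

```python
import socket

# UDP's datagram service on the loopback interface: no handshake, the sender
# just addresses each datagram to the receiver's IP and port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
rx.settimeout(2)                   # fail fast instead of blocking forever
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"no handshake needed", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)
print(data)                        # b'no handshake needed'
tx.close()
rx.close()
```

A TCP version of the same exchange would need connect() and accept() calls first, which is exactly the connection-establishment overhead UDP avoids.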
4. Application Layer
This layer performs the functions of the top three layers of the OSI model: the Application,
Presentation, and Session layers. It is the layer closest to the user and shields the upper-layer
applications from the complexities of data. The three main protocols present in this layer are:
● HTTP and HTTPS: HTTP stands for Hypertext transfer protocol. It is used by the
World Wide Web to manage communications between web browsers and servers.
HTTPS stands for HTTP-Secure. It is a combination of HTTP with SSL(Secure
Socket Layer). It is efficient in cases where the browser needs to fill out forms, sign
in, authenticate, and carry out bank transactions.
● SSH: SSH stands for Secure Shell. It is a terminal emulations software similar to
Telnet. The reason SSH is preferred is because of its ability to maintain the encrypted
connection. It sets up a secure session over a TCP/IP connection.
● NTP: NTP stands for Network Time Protocol. It is used to synchronize the clocks on
our computer to one standard time source. It is very useful in situations like bank
transactions. Assume the following situation without the presence of NTP. Suppose
you carry out a transaction, where your computer reads the time at 2:30 PM while the
server records it at 2:28 PM. Without synchronized clocks, such records become inconsistent
and the transactions cannot be ordered reliably.
The host-to-host layer is the layer in the TCP/IP model that is responsible for providing
communication between hosts (computers or other devices) on a network. It corresponds to the
transport layer of the OSI model and is also known as the transport layer.
Some common use cases for the host-to-host layer include:
● Reliable Data Transfer: The host-to-host layer ensures that data is transferred
reliably between hosts by using techniques like error correction and flow control. For
example, if a packet of data is lost during transmission, the host-to-host layer can
request that the packet be retransmitted to ensure that all data is received correctly.
● Segmentation and Reassembly: The host-to-host layer is responsible for breaking
up large blocks of data into smaller segments that can be transmitted over the
network, and then reassembling the data at the destination. This allows data to be
transmitted more efficiently and helps to avoid overloading the network.
● Multiplexing and Demultiplexing: The host-to-host layer is responsible for
multiplexing data from multiple sources onto a single network connection, and then
demultiplexing the data at the destination. This allows multiple devices to share the
same network connection and helps to improve the utilization of the network.
● End-to-End Communication: The host-to-host layer provides a connection-oriented
service that allows hosts to communicate with each other end-to-end, without the
need for intermediate devices to be involved in the communication.
Example: Consider a network with two hosts, A and B. Host A wants to send a file to host B.
The host-to-host layer in host A will break the file into smaller segments, add error correction
and flow control information, and then transmit the segments over the network to host B. The
host-to-host layer in host B will receive the segments, check for errors, and reassemble the file.
Once the file has been transferred successfully, the host-to-host layer in host B will acknowledge
receipt of the file to host A.
In this example, the host-to-host layer is responsible for providing a reliable connection between
host A and host B, breaking the file into smaller segments, and reassembling the segments at the
destination. It is also responsible for multiplexing and demultiplexing the data and providing
end-to-end communication between the two hosts.
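The multiplexing/demultiplexing described above can be sketched as a simple dispatch on the destination port number carried in each segment. The segments and payloads here are made-up demo data; the port numbers are the real well-known ones.

```python
# Sketch of port-based demultiplexing at the host-to-host layer: the
# destination port in each segment's header tells the receiving host which
# application gets the payload.
inbox = {80: [], 25: [], 53: []}     # web server, mail server, DNS resolver

segments = [
    (80, b"GET /"),                      # (destination port, payload)
    (53, b"query geeksforgeeks.org"),
    (80, b"GET /img"),
]

for dst_port, payload in segments:       # demultiplex by destination port
    inbox[dst_port].append(payload)

print(inbox[80])   # [b'GET /', b'GET /img']
print(inbox[53])   # [b'query geeksforgeeks.org']
```

In a real host the "inbox" per port is a socket buffer, but the dispatch rule is the same: one shared network connection, many applications, separated by port number.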
Transmission media refer to the physical pathways through which data is transmitted from one
device to another within a network. These pathways can be wired or wireless. The choice of
medium depends on factors like distance, speed, and interference. In this article, we will discuss
the transmission media.

Transmission Media – Guided and Unguided Transmission Media.

What is Transmission Media?


A transmission medium is a physical path between the transmitter and the receiver i.e. it is the
channel through which data is sent from one place to another. Transmission Media is broadly
classified into the following types:

1. Guided Media
Guided Media is also referred to as Wired or Bounded transmission media. Signals being
transmitted are directed and confined in a narrow pathway by using physical links.
Features:
● High Speed
● Secure
● Used for comparatively shorter distances
There are 3 major types of Guided Media:
Twisted Pair Cable
It consists of 2 separately insulated conductor wires wound about each other. Generally, several
such pairs are bundled together in a protective sheath. They are the most widely used
Transmission Media. Twisted Pair is of two types:
● Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires twisted
around one another. This type of cable has the ability to block interference and does
not depend on a physical shield for this purpose. It is used for telephonic applications.

Unshielded Twisted Pair


Advantages of Unshielded Twisted Pair
● Least expensive
● Easy to install
● High-speed capacity
Disadvantages of Unshielded Twisted Pair
● Susceptible to external interference
● Lower capacity and performance in comparison to STP
● Short distance transmission due to attenuation

Shielded Twisted Pair


● Shielded Twisted Pair (STP): This type of cable consists of a special jacket (a copper
braid covering or a foil shield) to block external interference. It is used in
fast-data-rate Ethernet and in voice and data channels of telephone lines.
Advantages of Shielded Twisted Pair
● Better performance at a higher data rate in comparison to UTP
● Eliminates crosstalk
● Comparatively faster
Disadvantages of Shielded Twisted Pair
● Comparatively difficult to install and manufacture
● More expensive
● Bulky
Coaxial Cable
It has an outer plastic covering containing an insulation layer made of PVC or Teflon and 2
parallel conductors each having a separate insulated protection cover. The coaxial cable transmits
information in two modes: Baseband mode(dedicated cable bandwidth) and Broadband
mode(cable bandwidth is split into separate ranges). Cable TVs and analog television networks
widely use Coaxial cables.

Advantages of Coaxial Cable


● Coaxial cables support high bandwidth.
● It is easy to install coaxial cables.
● Coaxial cables have better cut-through resistance so they are more reliable and
durable.
● Less affected by noise, cross-talk, or electromagnetic interference.
● Coaxial cables support multiple channels
Disadvantages of Coaxial Cable
● Coaxial cables are expensive.
● The coaxial cable must be grounded in order to prevent any crosstalk.
● As a Coaxial cable has multiple layers it is very bulky.
● There is a chance of breaking the coaxial cable and attaching a “t-joint” by hackers,
this compromises the security of the data.
Optical Fiber Cable
Optical Fibre Cable uses the concept of total internal reflection of light through a core made up
of glass or plastic. The core is surrounded by a less dense glass or plastic covering called the cladding. It is
used for the transmission of large volumes of data. The cable can be unidirectional or
bidirectional. The WDM (Wavelength Division Multiplexer) supports two modes, namely
unidirectional and bidirectional mode.

Advantages of Optical Fibre Cable


● Increased capacity and bandwidth
● Lightweight
● Less signal attenuation
● Immunity to electromagnetic interference
● Resistance to corrosive materials
Disadvantages of Optical Fibre Cable
● Difficult to install and maintain
● High cost
● Fragile
Applications of Optical Fibre Cable
● Medical Purpose: Used in several types of medical instruments.
● Defence Purpose: Used in transmission of data in aerospace.
● For Communication: This is largely used in formation of internet cables.
● Industrial Purpose: Used for lighting purposes and safety measures in designing the
interior and exterior of automobiles.
Stripline
Stripline is a transverse electromagnetic (TEM) transmission line medium invented by Robert M.
Barrett of the Air Force Cambridge Research Centre in the 1950s. Stripline is the earliest form of
the planar transmission line. It uses a conducting material to transmit high-frequency waves and
is also called a waveguide. This conducting material is sandwiched between two layers of the
ground plane, which are usually shorted to provide EMI immunity.
Microstripline
In this, the conducting material is separated from the ground plane by a layer of dielectric.
2. Unguided Media
It is also referred to as Wireless or Unbounded transmission media. No physical medium is
required for the transmission of electromagnetic signals.
Features of Unguided Media
● The signal is broadcasted through air
● Less Secure
● Used for larger distances
There are 3 types of Signals transmitted through unguided media:
Radio Waves
Radio waves are easy to generate and can penetrate through buildings. The sending and receiving
antennas need not be aligned. Frequency Range: 3 kHz – 1 GHz. AM and FM radios and cordless
phones use radio waves for transmission.

Further Categorized as Terrestrial and Satellite.


Microwaves
It is a line-of-sight transmission, i.e., the sending and receiving antennas need to be properly
aligned with each other. The distance covered by the signal is directly proportional to the height
of the antenna. Frequency Range: 1 GHz – 300 GHz. Microwaves are majorly used for mobile
phone communication and television distribution.

Microwave Transmission
Infrared
Infrared waves are used for very short distance communication. They cannot penetrate through
obstacles, which prevents interference between systems. Frequency Range: 300 GHz – 400 THz.
Infrared is used in TV remotes, wireless mice, keyboards, printers, etc.
Difference Between Radio Waves, Microwaves and Infrared Waves

● Direction: Radio waves are omni-directional in nature, while microwaves and infrared waves are unidirectional in nature.
● Penetration: At low frequency, radio waves can penetrate through solid objects and walls, but at high frequency they bounce off the obstacle. At low frequency, microwaves can penetrate through solid objects and walls, but at high frequency they cannot penetrate. Infrared waves cannot penetrate through any solid object or wall.
● Frequency Range: Radio waves: 3 kHz to 1 GHz. Microwaves: 1 GHz to 300 GHz. Infrared waves: 300 GHz to 400 THz.
● Security: Radio waves offer poor security, microwaves offer medium security, and infrared waves offer high security.
● Attenuation: Attenuation is high for radio waves, variable for microwaves, and low for infrared waves.
● Government License: Some frequencies in the radio-wave and microwave bands require a government license to use; no government license is needed to use infrared waves.
● Usage Cost: Setup and usage cost is moderate for radio waves, high for microwaves, and very low for infrared waves.
● Communication: Radio waves and microwaves are used in long-distance communication; infrared waves are not used in long-distance communication.
Factors Considered for Designing the Transmission Media


● Bandwidth: Assuming all other conditions remain constant, the greater a medium’s
bandwidth, the faster a signal’s data transmission rate.
● Transmission Impairment: Transmission Impairment occurs when the received
signal differs from the transmitted signal. Signal quality will be impacted as a result
of transmission impairment.
● Interference: Interference is defined as the process of disturbing a signal as it travels
over a communication medium with the addition of an undesired signal.
Causes of Transmission Impairment

Transmission Impairment
● Attenuation – It means loss of energy. The strength of signal decreases with
increasing distance which causes loss of energy in overcoming resistance of medium.
This is also known as attenuated signal. Amplifiers are used to amplify the attenuated
signal which gives the original signal back and compensate for this loss.
● Distortion – It means changes in the form or shape of the signal. This is generally
seen in composite signals made up of different frequencies. Each frequency
component has its own propagation speed through the medium, so the components
arrive at the final destination at different times. They therefore have different phases
at the receiver's end from what they had at the sender's end, which distorts the signal.
● Noise – The random or unwanted signal that mixes up with the original signal is
called noise. There are several types of noise such as induced noise, crosstalk noise,
thermal noise and impulse noise which may corrupt the signal.

UNIT II
LAN: Wired LAN, Wireless LANs, Techniques for Bandwidth utilization: Multiplexing -
Frequency division, Time division and Wave division. Data Link Layer: Services, Framing,
Error Control: Parity bit method, Block coding, CRC, Hamming code, and Flow Control.

LAN:
What is a Local Area Network?
The full form of LAN is Local Area Network. It is a computer network that covers a small area
such as a building or campus, up to a few kilometers in size. LANs are commonly used to
connect personal computers and workstations in company offices to share common resources,
like printers, and to exchange information. A real-life analogy for a LAN is a family: just as each
family member is connected to the others, each device is connected to the network. Several
experimental and early commercial LAN technologies were developed in the 1970s; the
Cambridge Ring, developed at Cambridge University in 1974, is one example.
Local Area Network
How do LANs Work?
A router serves as the hub where the majority of LANs connect to the Internet. Home LANs
often utilise a single router, but bigger LANs may also use network switches to transmit packets
more effectively.
LANs nearly always connect devices to the network via Ethernet, WiFi, or both of these
technologies. Ethernet is a way to connect devices to the local area network; it defines the
physical and data link layers of the OSI model. WiFi is a protocol that is used to connect devices
to the local area network wirelessly.
Many kinds of devices can be connected to a LAN, for example servers, desktop computers,
laptops, printers, Internet of Things (IoT) devices, and even game consoles. LANs are usually
used in offices to give internal staff members shared access to servers or printers that are linked
to the network.
Wireless Local Area Network (WLAN):
A Wireless Local Area Network (WLAN) is a type of network that uses wireless technology,
such as Wi-Fi, to connect devices in the same area. WLANs use wireless access points to
transmit data between devices, allowing for greater mobility and flexibility.
Advantages of WLAN:
● Mobility: WLANs provide greater device mobility and flexibility, as devices can
connect wirelessly from anywhere within the network range.
● Easy Installation: WLANs are easier to install than LANs, as they do not require
physical cabling and switches.
● Range: WLANs can cover a larger area than LANs, allowing for greater device
connectivity and flexibility.
Disadvantages of WLAN:
● Security: WLANs are less secure than LANs, as wireless signals can be intercepted
by unauthorized users and devices.
● Speed: WLANs generally provide slower data transfer rates than wired LANs (for
example, around 54 Mbps for older Wi-Fi standards such as 802.11g), which can
result in slower data transfer between devices.
● Interference: WLANs are susceptible to interference from other wireless devices,
which can cause connectivity issues.
Similarities between LAN and WLAN:
● Both provide connectivity: The primary purpose of both LAN and WLAN is to
provide connectivity between devices, allowing them to share data and resources.
● Both use the same protocols: LANs and WLANs use the same protocols for data
transfer, such as TCP/IP and Ethernet, which ensures compatibility between devices.
● Both can support multiple devices: Both LANs and WLANs can support multiple
devices simultaneously, allowing multiple users to share data and resources.
● Both can be secured: Both LANs and WLANs can be secured using encryption and
authentication methods, ensuring that only authorized users have access to the
network.
● Both require network hardware: Both LANs and WLANs require network hardware,
such as routers, switches, and access points, to function properly.
● Both can be used for internet connectivity: Both LANs and WLANs can be used to
connect to the internet, providing access to online resources and services.
Let’s discuss the differences between LAN and WLAN:
● LAN stands for Local Area Network; WLAN stands for Wireless Local Area
Network.
● LAN connections include both wired and wireless connections; WLAN connections
are completely wireless.
● A LAN is a collection of computers or other network devices in a particular location
that are connected together by communication or network elements; a WLAN is the
same, except the devices are connected together wirelessly.
● A LAN is relatively free from external attacks like interruption of signals and
cyber-criminal attacks; a WLAN is vulnerable to external attacks.
● LAN is secure; WLAN is not secure.
● The wired LAN has lost popularity due to the arrival of the latest wireless networks;
WLAN is popular.
● A wired LAN needs physical access, like connecting the wires to the switches or
routers; in a WLAN, the work of connecting wires to switches and routers is not
needed.
● In a LAN, devices are connected locally with Ethernet cable; for a WLAN an
Ethernet cable is not necessary.
● A LAN offers limited mobility; a WLAN offers outstanding mobility.
● LAN performance may or may not vary with external factors like environment and
quality of cables; WLAN performance varies with external factors, since most of
them affect signal transmission.
● LAN is less expensive; WLAN is more expensive.
● Example of a LAN: computers connected in a college. Example of a WLAN: laptops,
cellphones, and tablets connected to a wireless router or hotspot.
Conclusion:
Both LANs and WLANs have their advantages and disadvantages, depending on the specific
requirements. LANs are generally faster and more secure, while WLANs provide greater
mobility and flexibility. Choosing the right network for your needs depends on your specific
requirements, such as speed, security, and device mobility.

~Techniques for Bandwidth utilization


Multiplexing in data communications is a method that combines multiple signals or data streams
into one signal over a shared medium. This process allows for efficient use of resources and can
significantly increase the amount of data that can be sent over a network.
Multiplexing
Multiplexing is the sharing of a medium or bandwidth. It is the process in which multiple
signals coming from multiple sources are combined and transmitted over a single
communication/physical line.

Uses of Multiplexing
Multiplexing is used for a variety of purposes in data communications to enhance the efficiency
and capacity of networks. Here are some of the main uses:
● Efficient Utilization of Resources: Multiplexing allows multiple signals to share the
same communication channel, making the most of the available bandwidth. This is
especially important in environments where bandwidth is limited.
● Telecommunications: In telephone networks, multiplexing enables the simultaneous
transmission of multiple telephone calls over a single line, enhancing the capacity of
the network.
● Internet and Data Networks: Multiplexing is used in internet communications to
transmit data from multiple users over a single network line, improving the efficiency
and speed of data transfer.
● Satellite Communications: Multiplexing helps in efficiently utilizing the available
bandwidth on satellite transponders, allowing multiple signals to be transmitted and
received simultaneously.
Types of Multiplexing
There are five different types of multiplexing techniques, each designed to handle various types
of data and communication needs. These techniques include:
● Frequency Division Multiplexing (FDM)
● Time-Division Multiplexing (TDM)
● Wavelength Division Multiplexing (WDM)
● Code-division multiplexing (CDM)
● Space-division multiplexing (SDM)
1. Frequency Division Multiplexing
Frequency division multiplexing is defined as a type of multiplexing where the bandwidth of a
single physical medium is divided into a number of smaller, independent frequency channels.
Frequency Division Multiplexing is used in radio and television transmission.
In FDM, we can observe a lot of inter-channel cross-talk, due to the fact that in this type of
multiplexing the bandwidth is divided into frequency channels. In order to prevent the
inter-channel cross talk, unused strips of bandwidth must be placed between each channel. These
unused strips between each channel are known as guard bands.
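The channel-plus-guard-band split described above can be sketched in a few lines of Python. This is a minimal illustration; the 1000 kHz link width, channel count, and 10 kHz guard width are made-up numbers, not values from any standard.

```python
def allocate_fdm(total_bw, channels, guard):
    """Split total_bw (kHz) into equal-width channels separated by guard bands.

    Returns a list of (start, end) frequency offsets for each channel.
    """
    # Bandwidth left for data after reserving the guard strips between channels.
    usable = total_bw - guard * (channels - 1)
    width = usable / channels
    bands, start = [], 0.0
    for _ in range(channels):
        bands.append((start, start + width))
        start += width + guard   # skip over the guard band to the next channel
    return bands

# A 1000 kHz link carrying 4 channels with 10 kHz guard bands between them.
print(allocate_fdm(1000, 4, 10))
# [(0.0, 242.5), (252.5, 495.0), (505.0, 747.5), (757.5, 1000.0)]
```

Note how each channel's start is offset from the previous channel's end by exactly one guard band, which is what prevents inter-channel cross-talk.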

2. Time Division Multiplexing


Time-division multiplexing is defined as a type of multiplexing in which, instead of sharing a
portion of the bandwidth in the form of channels as in FDM, time is shared. Each connection
occupies a portion of time on the link.
In Time Division Multiplexing, all signals operate with the same frequency (bandwidth) at
different times.
There are two types of Time Division Multiplexing :
● Synchronous Time Division Multiplexing
● Statistical (or Asynchronous) Time Division Multiplexing
Synchronous TDM : Synchronous TDM is a type of Time Division Multiplexing where the
input frame already has a slot in the output frame. Time slots are grouped into frames. One frame
consists of one cycle of time slots. Synchronous TDM is not efficient because if the input frame
has no data to send, a slot remains empty in the output frame. In synchronous TDM, we need to
include a synchronization bit at the beginning of each frame.
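The fixed-slot behaviour of synchronous TDM, including the wasted empty slot when a source has nothing to send, can be sketched as a toy model (the source names A1, B1, etc. are invented for illustration):

```python
def synchronous_tdm(sources, frame_count):
    """Build output frames by giving each source one fixed slot per frame.

    sources: list of per-source data queues. An exhausted source still
    occupies its slot, which goes out empty (None) -- the inefficiency
    of synchronous TDM noted above.
    """
    queues = [list(s) for s in sources]
    frames = []
    for _ in range(frame_count):
        # One cycle of time slots = one frame; slot i always belongs to source i.
        frame = [q.pop(0) if q else None for q in queues]
        frames.append(frame)
    return frames

# Three inputs share one link; source B runs out after one data unit.
frames = synchronous_tdm([["A1", "A2"], ["B1"], ["C1", "C2"]], 2)
print(frames)  # [['A1', 'B1', 'C1'], ['A2', None, 'C2']]
```

The `None` in the second frame is the empty slot that statistical TDM avoids by filling slots from whichever inputs have data.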

Statistical TDM: Statistical TDM is a type of Time Division Multiplexing where the output
frame collects data from the input frame till it is full, not leaving an empty slot like in
Synchronous TDM. In statistical TDM, we need to include the address of each particular data in
the slot that is being sent to the output frame.
Statistical TDM is a more efficient type of time-division multiplexing as the channel capacity is
fully utilized and improves the bandwidth efficiency.

3. Wavelength Division Multiplexing


Wavelength Division Multiplexing (WDM) is a multiplexing technology used to increase the
capacity of optical fiber by transmitting multiple optical signals simultaneously over a single
optical fiber, each with a different wavelength. Each signal is carried on a different wavelength
of light, and the resulting signals are combined onto a single optical fiber for transmission. At the
receiving end, the signals are separated by their wavelengths, demultiplexed and routed to their
respective destinations.
WDM can be divided into two categories: Dense Wavelength Division Multiplexing (DWDM)
and Coarse Wavelength Division Multiplexing (CWDM).
DWDM is used to multiplex a large number of optical signals onto a single fiber, typically up to
80 channels with a spacing of 0.8 nm or less between the channels.
CWDM is used for lower-capacity applications, typically up to 18 channels with a spacing of 20
nm between the channels.
WDM has several advantages over other multiplexing technologies such as Time Division
Multiplexing (TDM). WDM allows for higher data rates and capacity, lower power consumption,
and reduced equipment complexity. WDM is also flexible, allowing for easy upgrades and
expansions to existing networks.
WDM is used in a wide range of applications, including telecommunications, cable TV, internet
service providers, and data centers. It enables the transmission of large amounts of data over long
distances with high speed and efficiency.
Wavelength Division Multiplexing is used on fiber optics to increase the capacity of a single
fiber. It is an analog multiplexing technique. Optical signals from the different sources are
combined to form a wider band of light with the help of multiplexers. At the receiving end, the
De-multiplexer separates the signals to transmit them to their respective destinations.
4. Space-Division Multiplexing (SDM)
Space Division Multiplexing (SDM) is a technique used in wireless communication systems to
increase the capacity of the system by exploiting the physical separation of users.
In SDM, multiple antennas are used at both the transmitter and receiver ends to create parallel
communication channels. These channels are independent of each other, which allows for
multiple users to transmit data simultaneously in the same frequency band without interference.
The capacity of the system can be increased by adding more antennas, which creates more
independent channels.
SDM is commonly used in wireless communication systems such as cellular networks, Wi-Fi,
and satellite communication systems. In cellular networks, SDM is used in the form of Multiple
Input Multiple Output (MIMO) technology, which uses multiple antennas at both the transmitter
and receiver ends to improve the quality and capacity of the communication link.
5. Code-Division Multiplexing (CDM)
Code division multiplexing (CDM) is a technique used in telecommunications to allow multiple
users to transmit data simultaneously over a single communication channel. In CDM, each user
is assigned a unique code that is used to modulate their signal. The modulated signals are then
combined and transmitted over the same channel. At the receiving end, each user’s signal is
demodulated using their unique code to retrieve their original data.
In CDM, each user is assigned a unique spreading code that is used to spread the data signal.
This spreading code is typically a binary sequence that is much longer than the original data
signal. The spreading code is multiplied with the data signal to generate a spread spectrum signal
that has a much wider bandwidth than the original data signal. The spread spectrum signals of all
users are then combined and transmitted over the same channel.
At the receiving end, the received signal is multiplied with the same spreading code used at the
transmitting end to dispread the signal. The resulting dispread signal is then demodulated to
retrieve the original data signal. Because each user’s data signal is spread using a unique code, it
is possible to separate the signals of different users even though they are transmitted over the
same channel.
CDM is commonly used in wireless communication systems such as cellular networks and
satellite communication systems. It allows multiple users to share the same frequency band and
increases the capacity of the communication channel. CDM also provides some level of security
as the signals of different users are difficult to intercept or jam.
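The spread/despread cycle can be illustrated with a toy two-user example using length-2 orthogonal (Walsh) codes. The code assignments and user names here are invented for illustration; real CDMA systems use much longer spreading sequences:

```python
# Two users with orthogonal spreading codes (Walsh codes of length 2).
CODE = {"u1": [1, 1], "u2": [1, -1]}

def spread(bits, code):
    # Map each data bit to +1/-1 and multiply it by every chip of the code.
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(channel, code):
    # Correlate the combined signal with one user's code, one symbol at a time.
    n = len(code)
    out = []
    for i in range(0, len(channel), n):
        corr = sum(channel[i + j] * code[j] for j in range(n))
        out.append(1 if corr > 0 else 0)
    return out

# Both users transmit simultaneously; the channel carries the sum of signals.
tx1 = spread([1, 0], CODE["u1"])
tx2 = spread([0, 1], CODE["u2"])
channel = [a + b for a, b in zip(tx1, tx2)]
print(despread(channel, CODE["u1"]))  # [1, 0]
print(despread(channel, CODE["u2"]))  # [0, 1]
```

Because the two codes are orthogonal, correlating the summed channel with one user's code cancels the other user's contribution, recovering each data stream intact.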
Advantages of Multiplexing
● Efficient Use of Bandwidth: You can send more than one signal over a single
channel. This way, you can use the channel’s capacity more efficiently.
● Increased Data Transmission: Multiplexing can significantly boost the amount of
data that can be sent over a network simultaneously, enhancing overall transmission
capacity.
● Scalability: Multiplexing allows networks to easily expand and accommodate more
data streams without requiring significant changes to the existing infrastructure.
● Flexibility: Different types of multiplexing (TDM, FDM, WDM, CDM) can be used
based on the specific needs and characteristics of the communication system,
providing flexibility in network design.
Disadvantages of Multiplexing
● Synchronization Issues: Ensuring that multiple data streams remain properly
synchronized can be challenging, leading to potential data loss or errors if not
managed correctly.
● Latency: Combining multiple signals into one can introduce delays, as each data
stream needs to be processed, synchronized, and demultiplexed at the receiving end.
● Signal Degradation: Over long distances, multiplexed signals can experience
degradation and interference, requiring additional measures such as signal boosters or
repeaters to maintain quality.
● Resource Management: Allocating and managing resources for multiplexing can be
complicated, requiring careful planning and real-time adjustments to avoid congestion
and ensure efficient operation.
Conclusion
Multiplexing is a key technology in data communications that helps to make the most out of
available bandwidth by combining multiple data streams into one. This process allows for more
efficient use of resources, reduces costs, and increases data transmission rates. The main types of
multiplexing include Time Division Multiplexing (TDM), Frequency Division Multiplexing
(FDM), Wavelength Division Multiplexing (WDM), and Code Division Multiplexing (CDM).
Each of these techniques has its own unique way of managing and transmitting data, making
them suitable for different types of communication needs. Overall, multiplexing plays a crucial
role in ensuring that our communication networks are fast, efficient, and capable of handling the
growing demand for data transmission.

~Data Link layer:


The data link layer is the second layer from the bottom in the OSI (Open System
Interconnection) network architecture model. It is responsible for the node-to-node delivery of
data. Its major role is to ensure error-free transmission of information. DLL is also responsible
for encoding, decoding, and organizing the outgoing and incoming data.
This is considered the most complex layer of the OSI model as it hides all the underlying
complexities of the hardware from the other above layers. In this article, we will discuss Data
Link Layer in Detail along with its functions, and sub-layers.
OSI Model: Data Link Layer
Sub-Layers of The Data Link Layer
The data link layer is further divided into two sub-layers, which are as follows:
Logical Link Control (LLC)
This sublayer of the data link layer deals with multiplexing and the flow of data among
applications and other services; LLC is also responsible for providing error messages and
acknowledgments.
Media Access Control (MAC)
The MAC sublayer manages device interaction, is responsible for addressing frames, and also
controls physical media access.
The data link layer receives the information in the form of packets from the Network layer, it
divides packets into frames and sends those frames bit-by-bit to the underlying physical layer.
Functions of The Data-link Layer
The data link layer performs several functions; let's look into them.
Framing
The packet received from the Network layer is known as a frame in the Data link layer. At the
sender’s side, DLL receives packets from the Network layer and divides them into small frames,
then, sends each frame bit-by-bit to the physical layer. It also attaches some special bits (for error
control and addressing) at the header and end of the frame. At the receiver's end, DLL takes bits
from the physical layer, organizes them into frames, and sends them to the network layer.
Addressing
The data link layer encapsulates the source and destination’s MAC address/ physical address in
the header of each frame to ensure node-to-node delivery. MAC address is the unique hardware
address that is assigned to the device while manufacturing.
Error Control
Data can get corrupted due to various reasons like noise, attenuation, etc. So, it is the
responsibility of the data link layer, to detect the error in the transmitted data and correct it using
error detection and correction techniques respectively. DLL adds error detection bits into the
frame's header, so that the receiver can check whether the received data is correct or not. It adds
reliability to the physical layer by adding mechanisms to detect and retransmit damaged or lost
frames.
Flow Control
If the receiver’s receiving speed is lower than the sender’s sending speed, then this can lead to an
overflow in the receiver’s buffer and some frames may get lost. So, it’s the responsibility of DLL
to synchronize the sender’s and receiver’s speeds and establish flow control between them.
Access Control
When multiple devices share the same communication channel there is a high probability of
collision, so it’s the responsibility of DLL to check which device has control over the channel
and CSMA/CD and CSMA/CA can be used to avoid collisions and loss of frames in the channel.
Protocols in Data link layer
There are various protocols in the data link layer, which are as follows:
● Synchronous Data Link Control (SDLC)
● High-Level Data Link Control (HDLC)
● Serial Line Internet Protocol (SLIP)
● Point-to-Point Protocol (PPP)
● Link Access Procedure (LAP)
● Link Control Protocol (LCP)
● Network Control Protocol (NCP)
Conclusion
In conclusion, the Data Link Layer is essential for ensuring that data is transferred reliably and
accurately across a network. It handles error detection and correction, manages data frame
sequencing, and provides access to the physical network. By organizing data into frames and
controlling how devices on the network communicate, the Data Link Layer plays a crucial role in
maintaining smooth and efficient network operations.

Frames are the units of digital transmission, particularly in computer networks and
telecommunications; they are comparable to the packets of energy called photons in the case of
light energy. Frames are also used in the time-division multiplexing process.
On a point-to-point connection between two computers or devices, data is transmitted over a
wire as a stream of bits. However, these bits must be framed into discernible blocks of
information. Framing, a function of the data link layer, provides a way for a sender to transmit a
set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and other data
link layer technologies have their own frame structures. Frames have headers that contain
information such as error-checking codes.

The data link layer encapsulates the sender's message together with the sender's and receiver's
addresses and delivers it to the receiver. The advantage of using frames is that data is broken up
into recoverable chunks that can easily be checked for corruption.
The process of dividing the data into frames and reassembling it is transparent to the user and is
handled by the data link layer.
Framing
Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered accurately
and efficiently.
Problems in Framing
● Detecting start of the frame: When a frame is transmitted, every station must be
able to detect it. Station detects frames by looking out for a special sequence of bits
that marks the beginning of the frame i.e. SFD (Starting Frame Delimiter).
● How does the station detect a frame: Every station listens to link for SFD pattern
through a sequential circuit. If SFD is detected, sequential circuit alerts station.
Station checks destination address to accept or reject frame.
● Detecting end of frame: When to stop reading the frame.
● Handling errors: Framing errors may occur due to noise or other transmission
errors, which can cause a station to misinterpret the frame. Therefore, error detection
and correction mechanisms, such as cyclic redundancy check (CRC), are used to
ensure the integrity of the frame.
● Framing overhead: Every frame has a header and a trailer that contains control
information such as source and destination address, error detection code, and other
protocol-related information. This overhead reduces the available bandwidth for data
transmission, especially for small-sized frames.
● Framing incompatibility: Different networking devices and protocols may use
different framing methods, which can lead to framing incompatibility issues. For
example, if a device using one framing method sends data to a device using a
different framing method, the receiving device may not be able to correctly interpret
the frame.
● Framing synchronization: Stations must be synchronized with each other to avoid
collisions and ensure reliable communication. Synchronization requires that all
stations agree on the frame boundaries and timing, which can be challenging in
complex networks with many devices and varying traffic loads.
● Framing efficiency: Framing should be designed to minimize the amount of data
overhead while maximizing the available bandwidth for data transmission. Inefficient
framing methods can lead to lower network performance and higher latency.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the frame,
the length of the frame itself acts as a delimiter.
● Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
● Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the beginning of
the next frame to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of
the frame. Used in Ethernet(802.3). The problem with this is that sometimes the
length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the
frame. Used in Token Ring. The problem with this is that ED can occur in the data.
This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If data contains
ED then, a byte is stuffed into data to differentiate it from ED.
Let ED = “$” –> if data contains ‘$’ anywhere, it can be escaped using ‘\O’ character.
–> if data contains ‘\O$’ then, use ‘\O\O\O$'($ is escaped using \O and \O is escaped
using \O).

Disadvantage – It is a costly and largely obsolete method.
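A minimal Python sketch of character (byte) stuffing, using "$" as the end delimiter as in the example above, with a single backslash standing in for the escape character (the text writes it as '\O'):

```python
FLAG, ESC = b"$", b"\\"  # delimiter and escape bytes, chosen for illustration

def byte_stuff(data: bytes) -> bytes:
    """Escape any data byte that could be mistaken for the delimiter or escape."""
    out = bytearray()
    for b in data:
        if bytes([b]) in (FLAG, ESC):
            out += ESC          # stuff an escape byte before the conflicting byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Reverse the stuffing: drop each escape and keep the following byte literally."""
    out, i = bytearray(), 0
    while i < len(stuffed):
        if stuffed[i:i + 1] == ESC:
            i += 1              # skip the escape; next byte is taken as data
        out.append(stuffed[i])
        i += 1
    return bytes(out)

msg = b"pay$me\\now"
assert byte_unstuff(byte_stuff(msg)) == msg   # round-trip recovers the data
```

The round-trip assertion is the key property: whatever bytes the payload contains, the receiver can always tell stuffed data apart from the real frame delimiter.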


2. Bit Stuffing: Let ED = 01111 and if data = 01111
–> Sender stuffs a bit to break the pattern i.e. here appends a 0 in data = 011101.
–> Receiver receives the frame.
–> If data contains 011101, receiver removes the 0 and reads the data.
Examples:
● If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing.
--> 011010001101100
● If Data –> 110001001 and ED –> 1000 then, find data after bit stuffing?
--> 11001010011
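For a concrete, standard variant of the idea: HDLC uses the flag 01111110 and stuffs a 0 after every five consecutive 1s in the data. A minimal Python sketch of that rule (note this is the HDLC convention, not the ad-hoc ED patterns used in the exercises above):

```python
FLAG = "01111110"  # HDLC flag; data must never contain six 1s in a row

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # break the run so the flag cannot appear in data
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0; discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True       # the next bit must be a stuffed 0
    return "".join(out)

data = "0111110111111"
stuffed = bit_stuff(data)
print(stuffed)                # '011111001111101'
assert bit_unstuff(stuffed) == data
```

The receiver needs no length information: the stuffing rule itself tells it exactly which 0s to remove.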
Framing in the data link layer also presents some challenges, which include:
Variable frame length: The length of frames can vary depending on the data being transmitted,
which can lead to inefficiencies in transmission. To address this issue, protocols such as HDLC
and PPP use a flag sequence to mark the start and end of each frame.
Bit stuffing: Bit stuffing is a technique used to prevent data from being interpreted as control
characters by inserting extra bits into the data stream. However, bit stuffing can lead to issues
with synchronization and increase the overhead of the transmission.
Synchronization: Synchronization is critical for ensuring that data frames are transmitted and
received correctly. However, synchronization can be challenging, particularly in high-speed
networks where frames are transmitted rapidly.
Error detection: Data Link Layer protocols use various techniques to detect errors in the
transmitted data, such as checksums and CRCs. However, these techniques are not foolproof and
can miss some types of errors.
Efficiency: Efficient use of available bandwidth is critical for ensuring that data is transmitted
quickly and reliably. However, the overhead associated with framing and error detection can
reduce the overall efficiency of the transmission.

Parity

Error Detection Codes : Binary information is transferred from one location to another through
some communication medium. External noise can change bits from 1 to 0 or 0 to 1; such a
change in value alters the meaning of the actual message and is called an error. For efficient data
transfer, there should be error detection and correction codes. An error detection code is a binary
code that detects digital errors during transmission. To detect errors in the received message, we
add some extra bits to the actual data.
Without addition of redundant bits, it is not possible to detect errors in the received message.
There are 3 ways in which we can detect errors in the received message :
1. Parity Bit
2. CheckSum
3. Cyclic Redundancy Check (CRC)
We’ll be understanding the parity bit method in this article in depth :-
Parity Bit Method : A parity bit is an extra bit included in a binary message to make the total
number of 1's either odd or even. The parity of a word denotes the number of 1's in the binary
string. There are two parity systems – even and odd parity checks.
1. Even Parity Check: The total number of 1's in the transmitted data should be even. So if the
total number of 1's in the data bits is odd, then a single 1 is appended to make the total number of
1's even; otherwise (if the total number of 1's is already even) a 0 is appended. Hence, if any
error occurs, the parity check circuit will detect it at the receiver's end. Let's understand this with
an example; see the below diagram:

Even Parity Check (fig – 1.1)


In the above image, as we can see the data bits are ‘1011000’ and since this is even parity check
that we’re talking about, 1 will be appended as the parity bit (highlighted in red) to make total
count of 1’s even in the data sent. So here, our parity bit is 1. If the total count of 1 in the given
data bits were already even, then 0 would’ve been appended.
2. Odd Parity Check: In an odd parity system, if the total number of 1's in the given binary string
(or data bits) is even, then 1 is appended to make the total count of 1's odd; otherwise 0 is
appended. The receiver knows whether the sender is an odd or even parity generator. Suppose
the sender is an odd parity generator; then there must be an odd number of 1's in the received
binary string. If an error changes a single bit, either from 1 to 0 or from 0 to 1, the received
binary string will have an even number of 1's, which indicates an error.
Taking fig (1.1) as reference, rather than appending 1 as the parity bit, 0 would be appended
because the total number of 1's is already odd.
Some more examples :-
Message (XYZ) P(Odd) P(Even)

000 1 0

001 0 1

010 0 1

011 1 0

100 0 1
101 1 0

110 1 0

111 0 1

Figure – Error Detection with Odd Parity Bit
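The parity rules above can be captured in a few lines of Python; a minimal sketch that reproduces the fig 1.1 example and one row of the table:

```python
def parity_bit(bits: str, even: bool = True) -> str:
    """Return the parity bit for a binary string under even or odd parity."""
    ones = bits.count("1")
    if even:
        # Even parity: append 1 only if the count of 1s is currently odd.
        return "0" if ones % 2 == 0 else "1"
    # Odd parity: append 1 only if the count of 1s is currently even.
    return "1" if ones % 2 == 0 else "0"

# Table row XYZ = 011 (two 1s): P(Odd) = 1, P(Even) = 0.
assert parity_bit("011", even=False) == "1"
assert parity_bit("011", even=True) == "0"

# Fig 1.1: data 1011000 has three 1s, so even parity appends a 1.
print("1011000" + parity_bit("1011000"))  # '10110001'
```

The receiver recomputes the same function over the received data bits and compares it with the received parity bit; a mismatch signals a single-bit error.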


Limitations :
1. This method can only identify an error in a single bit, and it cannot determine the exact location of the error in the data.
2. If the data changes but the total number of 1’s remains even, the even parity check cannot detect the error, since the count of 1’s is still even; the same holds for the odd parity check when the count of 1’s remains odd.
See the below image for more details :
Can’t detect error (Even Parity Check)
In the above example, the data bits have been changed, but since the total count of 1’s remains even, the error cannot be detected even though the message’s meaning has changed. You can visualize the same for the odd parity check: the number of 1’s remains odd even though the data bits have changed, so the odd parity check will not be able to detect the error.
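The rules above can be condensed into a few lines of Python (a minimal illustrative sketch, not part of the original text; the function names are our own). The last two lines also demonstrate the limitation: a two-bit error leaves the count of 1’s even, so it slips past the check.

```python
def parity_bit(data: str, even: bool = True) -> str:
    """Return the parity bit to append to a binary string.

    Even parity: total number of 1s (data + parity) must be even.
    Odd parity:  total number of 1s must be odd.
    """
    ones = data.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

def check(received: str, even: bool = True) -> bool:
    """Verify a received codeword (data bits + parity bit)."""
    return received.count("1") % 2 == (0 if even else 1)

# Example from the text: data bits 1011000 contain three 1s,
# so even parity appends 1 and odd parity appends 0.
print(parity_bit("1011000", even=True))    # -> 1
print(parity_bit("1011000", even=False))   # -> 0

# Limitation: a two-bit error keeps the count of 1s even and passes the check.
codeword = "1011000" + parity_bit("1011000")   # "10110001"
corrupted = "0111000" + "1"                    # two data bits flipped
print(check(codeword), check(corrupted))       # -> True True
```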

In computer networks, Hamming code refers to a set of error-correction codes used to handle errors that
may occur when data is moved from the sender to the receiver. The Hamming method
corrects an error by finding the position at which the error has occurred.
Redundant Bits
Redundant bits are extra binary bits that are generated and added to the information-carrying bits
of a data transfer to ensure that no bits were lost during the transfer. The redundancy bits are
placed at certain calculated positions so that errors can be detected and corrected. The number of
bit positions in which two codewords differ is called the "Hamming distance".
Error Correction Code − This is the relationship between data bits and redundancy bits to
correct a single-bit error. A-frame consists of M data bits and R redundant bits. Suppose the total
length of the frame be N (N=M+R). An N-bit unit containing data and the check bit is often
referred to as an N-bit codeword.
The following formula is used to find the number of redundant bits.
Number of single-bit errors = M + R
Number of states for no error = 1
So, the number of redundant bits (R) needed to represent all M + R + 1 states must satisfy −
2^R ≥ M + R + 1
where R = number of redundant bits, and M = number of data bits.
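The inequality can be checked with a short loop (an illustrative sketch; the function name is our own):

```python
def redundant_bits(m: int) -> int:
    """Smallest R with 2**R >= m + r + 1, i.e. enough check-bit
    states to name every single-bit error position plus 'no error'."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # -> 3  (the classic Hamming(7,4) code)
print(redundant_bits(7))   # -> 4
```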
Steps to find the Hamming Code −
The hamming method uses the extra parity bits to allow the identification of a single-bit error.
​ Step 1 − First write the bit positions starting from 1 in a binary form (1, 10, 11,100, etc.)
​ Step 2 − Mark all the bit positions that are powers of two as parity bits (1, 2, 4, 8, 16, 32,
64, etc.)
​ Step 3 − All other bit positions are for the data to be encoded using (3, 5, 6, 7, 9, 10 and
11, etc.)
Each parity bit calculates the parity for some of the bits in the code word. The position of the
parity bit determines the sequence of bits that it alternately checks and skips.
​ Position 1 − Check 1 bit, then skip 1 bit, check 1 bit and then skip 1 bit and so on (Ex −
1,3,5,7,11, etc.)
​ Position 2 − Check 2 bit, then skip 2 bit, check 2 bit, then skip 2 bit (Ex −
2,3,6,7,10,11,14,15, etc.)
​ Position 4 − Check 4 bit, then skip 4 bit, check 4 bit, then skip 4 bit (Ex − 4, 5, 6, 7, 12,
13, 14, 15, etc.)
​ Position 8 − Check 8 bit, then skip 8 bit, check 8 bit, then skip 8 bit (Ex − 8, 9, 10, 11,
12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31).
Note − Set the parity bit to 1 if the total number of 1s in the positions it checks is odd, or set the parity
bit to 0 if the total number of 1s in the positions it checks is even.
Example −
Construct the even-parity Hamming code word for the data byte 10011010.
The number of data bits (M) is 8.
The value of R is calculated as −
2^R ≥ M + R + 1
⇒ 2^4 = 16 ≥ 8 + 4 + 1 = 13
Therefore, the number of redundancy bits is 4.
The total number of bits in the codeword is 8 + 4 = 12. The parity bits occupy positions 1, 2, 4 and 8, and the data bits fill the remaining positions:
_ _ 1 _ 0 0 1 _ 1 0 1 0
Position 1 checks bits 1, 3, 5, 7, 9 and 11:
? _ 1 _ 0 0 1 _ 1 0 1 0. The checked positions hold an even number of 1’s, so set position 1 to 0: 0 _ 1 _ 0 0 1 _ 1 0 1 0
Position 2 checks bits 2, 3, 6, 7, 10 and 11:
0 ? 1 _ 0 0 1 _ 1 0 1 0. The checked positions hold an odd number of 1’s, so set position 2 to 1: 0 1 1 _ 0 0 1 _ 1 0 1 0
Position 4 checks bits 4, 5, 6, 7 and 12:
0 1 1 ? 0 0 1 _ 1 0 1 0. The checked positions hold an odd number of 1’s, so set position 4 to 1: 0 1 1 1 0 0 1 _ 1 0 1 0
Position 8 checks bits 8, 9, 10, 11 and 12:
0 1 1 1 0 0 1 ? 1 0 1 0. The checked positions hold an even number of 1’s, so set position 8 to 0: 0 1 1 1 0 0 1 0 1 0 1 0
Code Word = 011100101010
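The whole procedure can be written as a short encoder (an illustrative sketch following the steps above; names are our own). It reproduces the worked codeword:

```python
def hamming_encode(data: str) -> str:
    """Even-parity Hamming encoder.

    Parity bits sit at positions 1, 2, 4, 8, ... (1-indexed);
    data bits fill the remaining positions in order.
    """
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:     # number of parity bits
        r += 1
    n = m + r
    code = [0] * (n + 1)          # index 0 unused; positions 1..n
    bits = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1) != 0:  # not a power of two -> data position
            code[pos] = int(next(bits))
    for p in range(r):
        pb = 2 ** p
        # parity over every position whose binary form has bit p set
        ones = sum(code[pos] for pos in range(1, n + 1)
                   if pos & pb and pos != pb)
        code[pb] = ones % 2       # even parity
    return "".join(str(b) for b in code[1:])

print(hamming_encode("10011010"))   # -> 011100101010
```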

CRC
The Cyclic Redundancy Check (CRC) is one of the most powerful methods of error detection.
Given a k-bit message, the transmitter creates an (n – k)-bit sequence called the frame check
sequence (FCS). The outgoing frame, consisting of n bits, is exactly divisible by some fixed
number. Modulo-2 arithmetic is used: binary addition with no carries, which is just the XOR
operation.
Redundancy means duplication. The redundancy bits used by CRC are derived by dividing the
data unit by a fixed divisor. The remainder is the CRC.
Qualities of CRC
● It should have exactly one bit less than the divisor.
● Appending it to the end of the data unit should make the resulting bit sequence exactly divisible by the divisor.
CRC generator and checker
Process
● A string of n 0s is appended to the data unit, where n is one less than the number of bits in the fixed divisor.
● The new data unit is divided by the divisor using binary (modulo-2) division; the remainder of this division is the CRC.
● The n-bit CRC obtained in step 2 replaces the appended 0s at the end of the data unit.
Example
Message D = 1010001101 (10 bits)
Predetermined P = 110101 (6 bits)
FCS R = to be calculated (5 bits)
Hence, n = 15, k = 10 and (n – k) = 5.
The message is multiplied by 2^5, giving 101000110100000.
This product is divided by P; the remainder is R = 01110.
The remainder is added to 2^5 D to give T = 101000110101110, which is sent.
Suppose that there are no errors, and the receiver gets T intact. The received frame is divided
by P.

Because there is no remainder, there are no errors.
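The division above can be reproduced with integer XOR arithmetic (an illustrative sketch; function names are our own):

```python
def mod2div(dividend: int, divisor: int) -> int:
    """Binary long division with XOR (modulo-2 arithmetic, no carries)."""
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift
    return dividend

def crc(data: str, divisor: str) -> str:
    """Frame check sequence: remainder of data * 2^n divided by the
    divisor, where n is one less than the divisor's length."""
    n = len(divisor) - 1
    rem = mod2div(int(data, 2) << n, int(divisor, 2))
    return format(rem, f"0{n}b")

D, P = "1010001101", "110101"        # the example above
R = crc(D, P)
T = D + R                            # transmitted frame
print(R, T)                          # -> 01110 101000110101110
# The receiver divides the whole frame by P; zero remainder = no error.
print(mod2div(int(T, 2), int(P, 2)))  # -> 0
```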


Block Coding
In digital electronics, block coding is a technique of encoding data into a specific format. It is
mainly used to detect and correct errors that occur in the information during transmission and
storage. This is done by adding a block code of redundant information to the main data.
Block coding is mainly employed to create a robust method of data transmission and storage. In
block coding, data is encoded by splitting it into multiple blocks of a fixed size and applying an
encoding technique to each of these blocks separately.

In block coding, the input data is transformed into a longer block of encoded data by adding
some redundant data to it. This additional redundant data helps to detect and correct errors that
occur during transmission and storage.
Block coding generally works on binary data, represented in the form of 0s and 1s. Various
techniques are available to perform block coding, such as parity check codes, Hamming codes,
Reed-Solomon codes, BCH codes, etc. Parity check codes are the simplest block coding
technique; however, they have limitations, such as detecting only single-bit errors. The other
block coding techniques are more advanced and can detect as well as correct errors.
Block coding is extensively used in various fields of digital electronics, such as wireless
communication, satellite data communication, optical fiber communication, digital data storage
devices, and more.
Types of Block Codes used in Digital Electronics
In digital electronics, there are several different types of block codes used to perform block
coding of data. Some common types of block codes are described below:
Parity Check Codes
Parity check codes are the simplest block codes used for error detection in digital electronics. In
this block coding technique, an extra parity bit is included with each block of data. The parity bit
is calculated from the number of 1s in the block of data. However, parity check codes can detect
only single-bit errors, and they cannot correct them.
Hamming Codes
Hamming codes are more advanced than parity check codes for block coding in digital
electronics. These codes are able to detect as well as correct single-bit errors. This method adds
redundant bits to each data block to create a specific code-word. The positions of the redundant
bits in the code-word allow for detection and correction of errors in the data.
Reed-Solomon Codes
Reed-Solomon codes are highly advanced codes used for block coding in digital electronic
systems where robust error detection and correction is desired. These codes have the ability to
detect and correct multi-bit errors in a data block. The operation of Reed-Solomon codes is based
on a combination of parity checks and polynomial mathematics: parity checks detect errors in
the data block, while error locator polynomials correct them. Reed-Solomon codes are
extensively used in the fields of digital communication, satellite communication and data storage
devices.
Bose-Chaudhuri-Hocquenghem (BCH) Codes
BCH codes are another type of block codes used for error detection and correction in data blocks.
These codes provide greater flexibility than Reed-Solomon codes in terms of the number of errors
that they can correct. BCH codes are mainly used where error correction is required repeatedly,
as in magnetic storage devices.
Convolution Codes
Convolution codes are another type of block codes used for error correction. These are also
known as turbo codes. These codes involve the use of parallel concatenated convolution codes
for error correction in data block. Convolution codes use an iterative decoding process to provide
excellent error correction capabilities. These codes are primarily used in wireless and deep-space
communications, where noise levels are very high.
Low-Density Parity-Check (LDPC) Codes
LDPC codes are types of error correction codes known for their high performance and low
complexity. These codes are mainly employed in modern digital communication systems like
4G, 5G, Wi-Fi, etc. for error correction.
Advantages of Block Coding in Digital Electronics
Block coding offers several benefits in the field of digital electronics. Some key advantages of
block coding in digital electronics are listed below:
● Block coding improves the integrity of the received data through detection and correction of errors that occur during transmission and storage.
● Block coding improves the overall reliability of data transmission.
● Block coding increases the immunity of the communication channel against noise and interference.
● Block coding allows for efficient utilization of storage space and channel bandwidth through error correction.
Disadvantages of Block Coding in Digital Electronics
Apart from various advantages, block coding also has some disadvantages, which are given
below:
● Block coding increases redundancy in the data due to the addition of extra bits for error correction.
● Block coding increases the overall size of the data block, which consumes extra storage space or channel bandwidth.
● Block coding can reduce the overall performance of the system due to the additional encoding and decoding processes.
● Block coding can cause delays in data transmission.
● Block coding involves complex algorithms and hardware resources, which increase the complexity of its implementation.
Conclusion
Block coding is a method of error detection and correction used in data communication and
storage to ensure the integrity of the data. It involves the addition of redundancy to the original
data, which allows for detection and correction of errors that occur during transmission and
storage of the data. Overall, block coding is an essential process in data transmission and storage
to ensure the accuracy and reliability of digital information.
Flow control is a design issue at the Data Link Layer. It is a technique that ensures the proper
flow of data from sender to receiver. It is essential because the sender may transmit data at a
faster rate than the receiver can receive and process it. This happens if the receiver has a very
high load of traffic compared to the sender, or if the receiver has less processing power than the
sender. Flow control is basically a technique that allows two stations working and processing at
different speeds to communicate with one another. Flow control in the Data Link Layer restricts
and coordinates the number of frames or amount of data the sender can send before it must wait
for an acknowledgement from the receiver. Flow control is a set of procedures that tells the
sender how much data or how many frames it can transmit before the data overwhelms the
receiver. The receiving device has only a limited speed and a limited amount of memory to store
data. The receiving device must therefore be able to inform the sender to stop the transmission
temporarily before its limit is reached. It also needs a buffer, a large block of memory, for
storing data or frames until they are processed.
Flow control can also be understood as a speed-matching mechanism between two stations.

Approaches to Flow Control : Flow Control is classified into two categories:


● Feedback – based Flow Control : In this control technique, the sender transmits
data or frames to the receiver, and the receiver sends feedback back to the sender,
allowing the sender to transmit more data and telling the sender how the receiver
is doing. This means that the sender transmits further data or frames only after it
has received acknowledgements from the receiver.
● Rate – based Flow Control : In this control technique, when the sender transmits
data at a speed the receiver cannot keep up with, a built-in mechanism in the
protocol limits the overall rate at which data is transmitted by the sender, without
any feedback or acknowledgement from the receiver.
Techniques of Flow Control in Data Link Layer : There are basically two types of techniques
being developed to control the flow of data
1. Stop-and-Wait Flow Control : This method is the simplest form of flow control.
In this method, the message is broken down into multiple frames, and the receiver indicates its
readiness to receive each frame of data. Only when an acknowledgement is received does the
sender transmit the next frame. This process continues until the sender transmits an EOT (End of
Transmission) frame. In this method, only one frame can be in transmission at a time. It leads to
inefficiency, i.e. low utilization, if the propagation delay is much longer than the transmission
delay. Ultimately, in this method, the sender sends a single frame, and the receiver takes one
frame at a time and sends an acknowledgement (which carries the number of the next expected
frame) for each frame.
Advantages –
● This method is simple, and each frame is checked and acknowledged
individually.
● This method is also very accurate.
Disadvantages –
● This method is fairly slow.
● In this, only one packet or frame can be sent at a time.
● It is very inefficient and makes the transmission process very slow.
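The inefficiency can be quantified with the standard link-utilisation formula Tt / (Tt + 2Tp), where Tt is the transmission delay and Tp the one-way propagation delay. This formula is a standard textbook result rather than one derived in the text above; the sketch below is illustrative:

```python
def stop_and_wait_efficiency(frame_bits: int, bandwidth_bps: float,
                             prop_delay_s: float) -> float:
    """Stop-and-wait link utilisation: the sender transmits for Tt
    seconds, then sits idle for a full round trip (2 * Tp) while
    waiting for the acknowledgement."""
    tt = frame_bits / bandwidth_bps      # transmission delay Tt
    return tt / (tt + 2 * prop_delay_s)

# 1000-bit frames on a 1 Mbps link with 10 ms one-way propagation
# delay: Tt = 1 ms, so utilisation = 1 / 21, under 5 %.
print(round(stop_and_wait_efficiency(1000, 1_000_000, 0.010), 3))  # -> 0.048
```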
2. Sliding Window Flow Control : This method is used where reliable in-order delivery of
packets or frames is needed, as in the data link layer. It is a point-to-point protocol that assumes
no other entity tries to communicate until the current data or frame transfer is completed. In this
method, the sender transmits several frames or packets before receiving any acknowledgement.
Both the sender and receiver agree upon the total number of data frames after which an
acknowledgement must be transmitted. The Data Link Layer uses this method because it allows
the sender to have more than one unacknowledged packet “in flight” at a time, which increases
and improves network throughput. Ultimately, in this method, the sender sends multiple frames,
and the receiver processes them one by one, acknowledging each completed frame (with the
number of the next expected frame).
Advantages –
● It performs much better than stop-and-wait flow control.
● This method increases efficiency.
● Multiples frames can be sent one after another.
Disadvantages –
● The main issue is complexity at the sender and receiver due to the transferring of
multiple frames.
● The receiver might receive data frames or packets out of sequence.

UNIT III
Medium Access Control Sublayer: Protocols - Stop and Wait, Go back n, Selective
Repeat, Sliding Window Protocols, Multiple access protocols: ALOHA, CSMA,
Collision free protocols, IEEE 802.3 standards, and HDLC. Network Layer:
Switching Techniques, Tunneling, Fragmentation, Logical addressing – IPV4, IPV6,
Address Mapping

The medium access control (MAC) sublayer is part of the data link layer of the Open Systems
Interconnection (OSI) reference model for data transmission. It is responsible for flow control
and multiplexing of the transmission medium. It controls the transmission of data packets via
remotely shared channels and sends data through the network interface card.
MAC Layer in the OSI Model
The Open Systems Interconnection (OSI) model is a layered networking framework that
conceptualizes how communication should be done between heterogeneous systems. The data
link layer is the second lowest layer. It is divided into two sublayers −
​ The logical link control (LLC) sublayer
​ The medium access control (MAC) sublayer
The following diagram depicts the position of the MAC layer −

Functions of MAC Layer


​ It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
​ It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
​ It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
​ It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
​ It also performs collision resolution and initiating retransmission in case of collisions.
​ It generates the frame check sequences and thus contributes to protection against
transmission errors.
MAC Addresses
MAC address or media access control address is a unique identifier allotted to a network
interface controller (NIC) of a device. It is used as a network address for data transmission
within a network segment like Ethernet, Wi-Fi, and Bluetooth.
A MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired or
hard-coded into the network interface card (NIC). A MAC address consists of six groups of two
hexadecimal digits, separated by hyphens, colons, or no separator. An example of a MAC
address is 00:0A:89:5B:F0:11.
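As a small illustration (not part of the original text; the names and the regular expression are our own), the three notations mentioned above can be validated and normalized in Python:

```python
import re

# Accepts six hyphen- or colon-separated octets, or 12 bare hex digits.
MAC_RE = re.compile(
    r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$|^[0-9A-Fa-f]{12}$"
)

def is_mac(addr: str) -> bool:
    """Check whether a string looks like a MAC address."""
    return MAC_RE.match(addr) is not None

def normalize(addr: str) -> str:
    """Return the address as six upper-case colon-separated octets."""
    digits = re.sub(r"[:-]", "", addr).upper()
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

print(is_mac("00:0A:89:5B:F0:11"))      # -> True
print(normalize("00-0a-89-5b-f0-11"))   # -> 00:0A:89:5B:F0:11
```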
Stop and Wait Protocol
Before discussing the Stop-and-Wait protocol, let us first look at the error control mechanism.
Error control ensures that the received data is exactly the same as the data the sender has sent.
The error control mechanism is divided into two categories: Stop-and-Wait ARQ and sliding
window. The sliding window is further divided into two categories: Go-Back-N and Selective
Repeat. Based on the usage, one selects the appropriate error control mechanism, whether
stop-and-wait or sliding window.
What is Stop and Wait protocol?
Here "stop and wait" means that the sender sends its data to the receiver, then stops and waits
until it receives the acknowledgment from the receiver. The Stop-and-Wait protocol is a flow
control protocol, where flow control is one of the services of the data link layer.
It is a data-link layer protocol used for transmitting data over noiseless channels. It provides
unidirectional data transmission, which means that either sending or receiving of data takes
place at a time. It provides a flow-control mechanism but does not provide any error control
mechanism.
The idea is that when the sender sends a frame, it waits for the acknowledgment before sending
the next frame.

Primitives of Stop and Wait Protocol


The primitives of stop and wait protocol are:
Sender side
Rule 1: Sender sends one data packet at a time.
Rule 2: Sender sends the next packet only when it receives the acknowledgment of the previous
packet.
Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send one
packet at a time, and do not send another packet before receiving the acknowledgment.
Receiver side
Rule 1: Receive and then consume the data packet.
Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the sender.
Therefore, the idea of stop and wait protocol in the receiver's side is also very simple, i.e.,
consume the packet, and once the packet is consumed, the acknowledgment is sent. This is
known as a flow control mechanism.
Working of Stop and Wait protocol

The above figure shows the working of the stop-and-wait protocol. The sender sends a packet,
known as a data packet, and will not send the second packet without receiving the
acknowledgment of the first one. The receiver sends an acknowledgment for each data packet it
receives. Once the acknowledgment is received, the sender sends the next packet. This process
continues until all the packets are sent. The main advantage of this protocol is its simplicity, but
it also has some disadvantages. For example, if there are 1000 data packets to be sent, they
cannot all be sent at once, since in the Stop-and-Wait protocol only one packet is sent at a time.
Disadvantages of Stop and Wait protocol
The following are the problems associated with a stop and wait protocol:
1. Problems occur due to lost data
Suppose the sender sends the data and the data is lost. The receiver waits for the data for a
long time. Since the data is not received, the receiver does not send any acknowledgment.
Since the sender does not receive any acknowledgment, it will not send the next packet. This
problem occurs due to the lost data.
In this case, two problems occur:
○ Sender waits for an infinite amount of time for an acknowledgment.
○ Receiver waits for an infinite amount of time for a data.
2. Problems occur due to lost acknowledgment
Suppose the sender sends the data and it has also been received by the receiver. On receiving the
packet, the receiver sends the acknowledgment. In this case, the acknowledgment is lost in a
network, so there is no chance for the sender to receive the acknowledgment. There is also no
chance for the sender to send the next packet as in stop and wait protocol, the next packet cannot
be sent until the acknowledgment of the previous packet is received.
In this case, one problem occurs:
○ Sender waits for an infinite amount of time for an acknowledgment.
3. Problem due to the delayed data or acknowledgment
Suppose the sender sends the data and it is received by the receiver. The receiver then sends the
acknowledgment, but the acknowledgment arrives after the timeout period on the sender's side.
Because the acknowledgment arrives late, it can be wrongly taken as the acknowledgment of
some other data packet.
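The usual remedy for these problems is Stop-and-Wait ARQ, which adds a retransmission timer and alternating 1-bit sequence numbers. The sketch below is an illustrative simplified model (our own names; a lost frame and a lost ACK look identical to the sender, each costing one timeout and one retransmission):

```python
def stop_and_wait_arq(n_frames: int, lost: set) -> int:
    """Count transmissions needed to deliver n_frames frames.

    `lost` holds the 1-based indices of transmissions the channel
    drops.  Whether the frame or its ACK is lost, the sender sees
    the same thing: no ACK arrives, the timer fires, and the frame
    (with the same sequence number) is resent.
    """
    t = 0          # transmission counter
    seq = 0        # 1-bit sequence number alternates 0/1
    delivered = 0
    while delivered < n_frames:
        t += 1
        if t not in lost:     # frame and its ACK both got through
            delivered += 1
            seq ^= 1          # toggle the sequence number
        # otherwise: timeout fires, the same frame is resent
    return t

# 5 frames; transmissions 2 and 3 are lost, so frame 2 is sent
# three times before its ACK arrives.
print(stop_and_wait_arq(5, lost={2, 3}))   # -> 7
```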
Go-Back-N ARQ
Before understanding the working of Go-Back-N ARQ, we first look at the sliding window
protocol. As we know that the sliding window protocol is different from the stop-and-wait
protocol. In the stop-and-wait protocol, the sender can send only one frame at a time and cannot
send the next frame without receiving the acknowledgment of the previously sent frame,
whereas, in the case of sliding window protocol, the multiple frames can be sent at a time. The
variations of sliding window protocol are Go-Back-N ARQ and Selective Repeat ARQ. Let's
understand 'what is Go-Back-N ARQ'.
What is Go-Back-N ARQ?
In Go-Back-N ARQ, N is the sender's window size. For example, Go-Back-3 means that three
frames can be sent at a time before expecting an acknowledgment from the receiver.
It uses the principle of protocol pipelining in which the multiple frames can be sent before
receiving the acknowledgment of the first frame. If we have five frames and the concept is
Go-Back-3, which means that the three frames can be sent, i.e., frame no 1, frame no 2, frame no
3 can be sent before expecting the acknowledgment of frame no 1.
In Go-Back-N ARQ, the frames are numbered sequentially, since Go-Back-N ARQ sends
multiple frames at a time and needs a numbering approach to distinguish one frame from
another; these numbers are known as sequence numbers.
The number of frames that can be sent at a time totally depends on the size of the sender's
window. So, we can say that 'N' is the number of frames that can be sent at a time before
receiving the acknowledgment from the receiver.
If the acknowledgment of a frame is not received within an agreed-upon time period, then all the
frames available in the current window will be retransmitted. Suppose we have sent the frame no
5, but we didn't receive the acknowledgment of frame no 5, and the current window is holding
three frames, then these three frames will be retransmitted.
The sequence number of the outbound frames depends upon the size of the sender's window.
Suppose the sender's window size is 2, and we have ten frames to send, then the sequence
numbers will not be 1,2,3,4,5,6,7,8,9,10. Let's understand through an example.
○ N is the sender's window size.
○ If the size of the sender's window is 4 then the sequence number will be
0,1,2,3,0,1,2,3,0,1,2, and so on.
The number of bits in the sequence number is 2 to generate the binary sequence 00,01,10,11.
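The wrap-around is simply counting modulo 2^bits, as a one-line check shows (illustrative only):

```python
bits = 2                   # width of the sequence-number field
modulus = 2 ** bits        # 4 -> sequence numbers 0,1,2,3,0,1,2,3,...
frames = list(range(10))   # ten frames to send
print([f % modulus for f in frames])   # -> [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```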
Working of Go-Back-N ARQ
Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be sent.
These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of the
frames. Mainly, the sequence number is decided by the sender's window size. But, for the better
understanding, we took the running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's consider
the window size as 4, which means that the four frames can be sent at a time before expecting the
acknowledgment of the first frame.
Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0,1,2,3, and now the
sender is expected to receive the acknowledgment of the 0th frame.

Let's assume that the receiver has sent the acknowledgment for the 0th frame, and the sender has
successfully received it.
The sender will then send the next frame, i.e., 4, and the window slides containing four frames
(1,2,3,4).

The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will slide
having four frames (2,3,4,5).
Now, let's assume that the receiver is not acknowledging frame no 2: either the frame is lost, or
the acknowledgment is lost. Instead of sending frame no 6, the sender goes back to 2, which is
the first frame of the current window, and retransmits all the frames in the current window, i.e.,
2, 3, 4, 5.
Important points related to Go-Back-N ARQ:
○ In Go-Back-N, N determines the sender's window size, and the size of the receiver's
window is always 1.
○ It does not consider the corrupted frames and simply discards them.
○ It does not accept the frames which are out of order and discards them.
○ If the sender does not receive the acknowledgment, it leads to the retransmission of all
the current window frames.
Let's understand the Go-Back-N ARQ through an example.
Example 1: In GB4, if every 6th transmission is lost and we have to send 10 packets, how many
transmissions are required?
Solution: Here, GB4 means that N is equal to 4. The size of the sender's window is 4.

Step 1: As the window size is 4, so four packets are transferred at a time, i.e., packet no 1, packet
no 2, packet no 3, and packet no 4.
Step 2: Once the transfer of window size is completed, the sender receives the acknowledgment
of the first frame, i.e., packet no1. As the acknowledgment receives, the sender sends the next
packet, i.e., packet no 5. In this case, the window slides having four packets, i.e., 2,3,4,5 and
excluded the packet 1 as the acknowledgment of the packet 1 has been received successfully.

Step 3: Now, the sender receives the acknowledgment of packet 2. After receiving it, the sender
sends the next packet, i.e., packet no 6. As mentioned in the question, every 6th transmission is
lost, so this 6th packet is lost, but the sender does not know that the 6th packet has been lost.
Step 4: The sender receives the acknowledgment for the packet no 3. After receiving the
acknowledgment of 3rd packet, the sender sends the next packet, i.e., 7th packet. The window will
slide having four packets, i.e., 4, 5, 6, 7.

Step 5: When the packet 7 has been sent, then the sender receives the acknowledgment for the
packet no 4. When the sender has received the acknowledgment, then the sender sends the next
packet, i.e., the 8th packet. The window will slide having four packets, i.e., 5, 6, 7, 8.
Step 6: When the packet 8 is sent, then the sender receives the acknowledgment of packet 5. On
receiving the acknowledgment of packet 5, the sender sends the next packet, i.e., 9th packet. The
window will slide having four packets, i.e., 6, 7, 8, 9.
Step 7: The current window is holding four packets, i.e., 6, 7, 8, 9, where the 6th packet is the
first packet in the window. As we know, the 6th packet has been lost, so the sender receives the
negative acknowledgment NAK(6). Since every 6th transmission is lost, the transmission counter
restarts from 1 after each loss. So the counter values 1, 2, 3 are given to the 7th, 8th and 9th
packets respectively.

Step 8: As it is Go-BACK, so it retransmits all the packets of the current window. It will resend
6, 7, 8, 9. The counter values of 6, 7, 8, 9 are 4, 5, 6, 1, respectively. In this case, the 8th packet is
lost as it has a 6-counter value, so the counter variable will again be restarted from 1.
Step 9: After the retransmission, the sender receives the acknowledgment of packet 6. On
receiving the acknowledgment of packet 6, the sender sends the 10th packet. Now, the current
window is holding four packets, i.e., 7, 8, 9, 10.
Step 10: When the 10th packet is sent, the sender receives the acknowledgment of packet 7. Now
the current window is holding three packets, 8, 9 and 10. The counter values of 8, 9, 10 are 6, 1,
2.
Step 11: As the 8th packet has counter value 6, the 8th packet has been lost, and the sender
receives NAK(8).
Step 12: Since the sender has received the negative acknowledgment for the 8th packet, it resends
all the packets of the current window, i.e., 8, 9, 10.
Step 13: The counter values of 8, 9, 10 are 3, 4, 5, respectively, so their acknowledgments have
been received successfully.
We conclude from the above figure that total 17 transmissions are required.
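The count can be checked with a small simulation (an illustrative model of the trace above, with our own names; it assumes a FIFO channel and a receiver that discards out-of-order frames, and goes back to the failed frame when its turn for acknowledgment arrives):

```python
from collections import deque

def gbn_transmissions(n=10, window=4, loss_every=6):
    """Total transmissions for Go-Back-N when every `loss_every`-th
    transmission on the channel is lost.  A lost frame eventually
    forces the sender to go back and resend the whole window."""
    t = 0                 # transmission counter
    base = 1              # oldest unacknowledged frame
    next_seq = 1          # next new frame to send
    expected = 1          # receiver's next in-order frame
    in_flight = deque()   # (seq, delivered-in-order?) in send order

    while base <= n:
        # keep the pipe full: send while the window has room
        while next_seq < base + window and next_seq <= n:
            t += 1
            arrived = (t % loss_every != 0)
            ok = arrived and next_seq == expected
            if ok:
                expected += 1   # accepted; out-of-order frames are discarded
            in_flight.append((next_seq, ok))
            next_seq += 1
        seq, ok = in_flight.popleft()
        if ok:
            base = seq + 1      # cumulative ACK slides the window
        else:
            # timeout: go back to the failed frame, resend from there
            base = next_seq = seq
            in_flight.clear()
    return t

print(gbn_transmissions())   # -> 17, matching the worked example
```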

Selective Repeat Protocol (SRP) is a type of error control protocol used in computer networks
to ensure the reliable delivery of data packets. A similar mechanism is used by the
Transmission Control Protocol (TCP) to ensure that the receiver receives the data transmitted
over the network without errors.
In the SRP, the sender divides the data into packets and sends them to the receiver. Furthermore,
the receiver sends an acknowledgment (ACK) for each packet received successfully. If the
sender doesn’t receive an ACK for a particular packet, it retransmits only that packet instead of
the entire set of packets.
The SRP uses a window-based flow control mechanism to ensure the sender doesn’t overwhelm
the receiver with too many packets. Additionally, the sender and receiver maintain a window
of packets. Based on the window size, the sender sends packets and waits for a specific amount
of time for acknowledgment from the receiver.
The receiver, in turn, maintains a window of packets that tracks the frame numbers it is
receiving from the sender. If a frame is lost during transmission, the receiver sends the sender a
negative acknowledgment attaching the frame number.
3. Steps
Now let’s discuss the steps involved in the SRP.
The first step is to divide data into packets. The sender divides the data into packets of a fixed
size. When the sender divides the data into packets, it assigns a unique sequence number to each
packet. The numbering of packets plays a crucial role in the SRP.
The next step is to send the packets to the receiver. The receiver receives the packets and sends
an acknowledgment (ACK) for each packet received successfully.
The sender and receiver maintain a window of packets indicating the number of frames we can
transmit or receive at a given time. Additionally, we determine the size of the window based on
the network conditions. As the sender sends packets, it updates its window to reflect the packets
that have been transmitted, and the ACKs received.
However, if the sender doesn’t receive an ACK for a particular packet within a certain timeout
period, it retransmits only that packet instead of the entire set of packets. The receiver only
accepts packets that are within its window. If the receiver receives a packet outside the window,
it discards the packet.
The receiver sends selective acknowledgments (SACKs) for packets received out of order
or lost. The sender processes the SACKs to determine which packets need to be retransmitted.
Finally, we continue this process until we successfully send the data packets or the number of
retransmissions exceeds a predetermined threshold.
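The steps above can be sketched in a few lines of Python. This is a simplified model written for illustration (not part of any standard): packets listed in `lost` are dropped on their first transmission, and only those packets are resent:

```python
def selective_repeat(n_packets, lost):
    """Count transmissions when only lost packets are resent."""
    transmissions = 0
    delivered = set()
    pending = set(range(n_packets))
    while pending:
        for p in sorted(pending):
            transmissions += 1
            if p in lost:
                lost.discard(p)     # assume the retransmission succeeds
            else:
                delivered.add(p)
        pending -= delivered
    return transmissions

# 6 packets, packet 2 lost once: 6 + 1 = 7 transmissions in total,
# whereas Go-Back-N would also have resent packets 3, 4 and 5
print(selective_repeat(6, {2}))  # 7
```

The contrast with Go-Back-N is the point: the one lost packet costs exactly one extra transmission.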
4. Example
Let’s see how we can transmit data using the SRP. We divide our sample data into 6 data packets
or frames:

Additionally, we’re assuming the window size for the receiver and sender is 2. Hence, we
transmit two frames and wait for the receiver to acknowledge the frames transmitted before
sending the next frames. In case of a missing or unacknowledged frame, we need to resend it
before proceeding with the next set of frames.
4.1. No Error in Transmission
Let’s start sending the packets using the SRP. We send the first two frames to the receiver and
wait for the acknowledgment:
As we can see, the receiver successfully received and acknowledged the first two data frames.
One crucial point is that when the sender sends a frame, it waits for a specific time to get a
response. In this case, we receive responses from the receiver within the waiting time of each
frame. Hence, we move on to the next 2 data frames.
4.2. Frame Is Lost
Let’s discuss a scenario when a frame is lost during the transmission:

Here, frame 2 is lost during the transmission. Hence, the sender waits for a specific amount of
time to get a response from the receiver. In this case, we received a negative acknowledgment for
frame 2. Therefore, we need to resend frame 2 before we proceed further:
4.3. Acknowledgment Is Lost
Let’s take a look at another situation when the acknowledgment of a frame is lost during
transmission:
In this case, the receiver successfully receives frames 4 and 5, but the acknowledgment of
frame 5 is lost. Hence, the sender waits for a specific amount of time in order to receive an
acknowledgment for frame 5. After the waiting time is over, the sender sends frame 5 again:
5. Advantages and Disadvantages
The SRP offers several advantages over other error control protocols, including efficient
retransmission, selective acknowledgments, reduced delay, and higher throughput.
The main difference with other error control protocols is that it only retransmits lost packets
rather than retransmitting the entire set of packets. As a result, the SRP reduces unnecessary
network traffic and improves efficiency.
In the SRP, the receiver sends selective acknowledgments (SACKs) for packets received out of
order or lost. This allows the sender to know exactly which packets need to be retransmitted.
Furthermore, the SRP can reduce delay since the receiver can immediately start processing the
received packets, even if some packets are still missing.
Finally, the SRP can achieve higher throughput compared to other protocols like Go-Back-N,
especially when the network has a high error rate or high bandwidth-delay product.
Despite its advantages, the SRP also has some limitations and disadvantages.
It’s more complex compared to other error control protocols and therefore requires more
processing power and memory resources.
Additionally, the SRP requires more overhead since it uses selective acknowledgments (SACKs)
to notify the sender about lost or out-of-order packets. As a result, it can increase network traffic.
Furthermore, it requires more buffering on both the sender and receiver sides to store the
packets that are not yet acknowledged. This can be a problem if the network has limited
buffering capacity.
Finally, the SRP can increase delay at the receiver when a packet is lost, since in-order delivery
to the upper layer must wait until the missing packet is retransmitted and received.
Let’s take a look at the summary:

Sliding window protocols are data link layer protocols for reliable and sequential delivery of
data frames. The sliding window mechanism is also used in the Transmission Control Protocol.
In these protocols, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver. The term sliding window refers to imaginary boxes that
hold the frames. The sliding window method is also known as windowing.
Working Principle
In these protocols, the sender has a buffer called the sending window and the receiver has a
buffer called the receiving window.
The size of the sending window determines the sequence numbers of the outbound frames. If the
sequence number of the frames is an n-bit field, then the range of sequence numbers that can be
assigned is 0 to 2^n - 1. Consequently, the size of the sending window is at most 2^n - 1. Thus,
in order to accommodate a sending window of size 2^n - 1, an n-bit sequence number is chosen.
The sequence numbers are numbered modulo 2^n. For example, with a 2-bit sequence number
field (n = 2, giving the binary sequences 00, 01, 10, 11), the sequence numbers cycle through
0, 1, 2, 3, 0, 1, 2, 3, and so on.
The size of the receiving window is the maximum number of frames that the receiver can accept
at a time. It determines the maximum number of frames that the sender can send before receiving
acknowledgment.
Example
Suppose that we have sender window and receiver window each of size 4. So the sequence
numbering of both the windows will be 0,1,2,3,0,1,2 and so on. The following diagram shows
the positions of the windows after sending the frames and receiving acknowledgments.
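The modulo numbering can be demonstrated with a one-liner. This is a small illustration of the rule above, assuming a 2-bit sequence number field:

```python
n = 2                                   # bits in the sequence number field
seq = [i % 2 ** n for i in range(10)]   # frame i carries i mod 2^n
print(seq)                              # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]

max_gbn_window = 2 ** n - 1             # Go-Back-N sender window limit: 3
```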
Types of Sliding Window Protocols
The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two categories −

​ Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame. It uses the concept of sliding window, and so is also
called sliding window protocol. The frames are sequentially numbered and a finite
number of frames are sent. If the acknowledgment of a frame is not received within the
time period, all frames starting from that frame are retransmitted.
​ Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgment for the first frame. However, here only the erroneous or lost frames are
retransmitted, while the good frames are received and buffered.
Multiple access protocol- ALOHA, CSMA, CSMA/CA and CSMA/CD
Data Link Layer
The data link layer is used in a computer network to transmit data between two devices or
nodes. It is divided into two sub-layers: data link control and multiple access
resolution/protocol. The upper sub-layer is responsible for flow control and error control
in the data link layer, and hence it is termed logical link control. The lower sub-layer is used
to handle and reduce collisions from multiple access on a channel, and hence it is termed
media access control or multiple access resolution.
Data Link Control
A data link control is a reliable channel for transmitting data over a dedicated link using various
techniques such as framing, error control and flow control of data packets in the computer
network.
What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit data packets, the data link control is
enough to handle the channel. Suppose there is no dedicated path to communicate or transfer the
data between two devices. In that case, multiple stations access the channel and simultaneously
transmits the data over the channel. It may create collision and cross talk. Hence, the multiple
access protocol is required to reduce the collision and avoid crosstalk between the channels.
For example, suppose that there is a classroom full of students. When a teacher asks a question,
all the students (small channels) in the class start answering at the same time (transferring the
data simultaneously). Because all the students respond at once, the answers overlap or are lost.
Therefore it is the responsibility of the teacher (the multiple access protocol) to manage the
students and make them answer one at a time.
Following are the types of multiple access protocols, subdivided into different processes:
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In random
access protocol, no station depends on another station, nor does any station control another.
Depending on the channel's state (idle or busy), each station transmits its data frame. However,
if more than one station sends data over the channel at the same time, there may be a collision
or data conflict. Due to the collision, the data frame packets may be lost or changed, and hence
are not received correctly by the receiver.
Following are the different methods of random-access protocols for broadcasting frames on the
channel.

○ Aloha
○ CSMA
○ CSMA/CD
○ CSMA/CA
ALOHA Random Access Protocol
It was designed for wireless LANs (Local Area Networks) but can also be used on any shared
medium to transmit data. Using this method, any station can transmit data across the network
at any time, whenever a data frame is available for transmission.
Aloha Rules
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost when multiple stations transmit at the
same time.
4. Aloha relies on acknowledgments of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.
Pure Aloha
Pure Aloha is used whenever data is available for sending over a channel at a station. In pure
Aloha, each station transmits data to the channel without checking whether the channel is idle
or not, so collisions may occur and data frames can be lost. After a station transmits a data
frame on the channel, it waits for the receiver's acknowledgment. If the acknowledgment does
not arrive within the specified time, the station assumes the frame has been lost or destroyed
and waits for a random amount of time, called the backoff time (Tb). It then retransmits the
frame until all the data are successfully delivered to the receiver.
1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2, that is, 18.4%.
3. The probability of successful transmission of a data frame is S = G * e ^ (-2G).
As we can see in the figure above, there are four stations for accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver
end. At the same time, other frames are lost or destroyed. Whenever two frames fall on a shared
channel simultaneously, collisions can occur, and both will suffer damage. If the new frame's
first bit enters the channel before finishing the last bit of the second frame. Both frames are
completely finished, and both stations must retransmit the data frame.
Slotted Aloha
Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha has a
very high probability of frame collision. In slotted Aloha, the shared channel is divided into
fixed time intervals called slots. If a station wants to send a frame on the shared channel, the
frame can only be sent at the beginning of a slot, and only one frame is allowed to be sent in
each slot. If a station is unable to send its frame at the beginning of a slot, it has to wait until
the beginning of the next slot. However, the possibility of a collision remains if two or more
stations try to send a frame at the beginning of the same time slot.
1. Maximum throughput occurs in slotted Aloha when G = 1, that is, 36.8%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G *
e ^ (-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
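Both throughput formulas are easy to evaluate numerically. The snippet below is a quick check of the figures quoted above:

```python
import math

def pure_aloha(G):
    return G * math.exp(-2 * G)     # S = G e^(-2G)

def slotted_aloha(G):
    return G * math.exp(-G)         # S = G e^(-G)

# maxima: pure Aloha peaks at G = 1/2, slotted Aloha at G = 1
print(round(pure_aloha(0.5), 3))    # 0.184 -> about 18.4%
print(round(slotted_aloha(1.0), 3)) # 0.368 -> about 36.8%
```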

CSMA (Carrier Sense Multiple Access)


Carrier sense multiple access is a media access protocol that senses the traffic on a channel
(idle or busy) before transmitting data. If the channel is idle, the station can send data on the
channel. Otherwise, it must wait until the channel becomes idle. Hence, it reduces the chances
of a collision on the transmission medium.
CSMA Access Modes
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel, and
if the channel is idle, it immediately sends the data. Otherwise it keeps track of the status of
the channel and broadcasts the frame unconditionally (with probability 1) as soon as the
channel becomes idle.
Non-Persistent: It is the access mode of CSMA that defines before transmitting the data, each
node must sense the channel, and if the channel is inactive, it immediately sends the data.
Otherwise, the station must wait for a random time (not continuously), and when the channel is
found to be idle, it transmits the frames.
P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In
P-persistent mode, each node senses the channel, and if the channel is inactive, it sends a
frame with probability p. If the frame is not transmitted, the node waits a random time (with
probability q = 1 - p) and tries again in the next time slot.
O-Persistent: The O-persistent method defines a transmission order (priority) for the stations
before frames are transmitted on the shared channel. If the channel is found to be inactive,
each station waits for its turn to transmit its data.
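The p-persistent rule can be sketched as follows. Everything here is a simplified illustration: `channel_idle` is an assumed callback standing in for real carrier sensing, and time is discretized into slots:

```python
import random

def p_persistent_send(channel_idle, p=0.25, max_slots=1000):
    """Sense the channel each slot; transmit with probability p when idle."""
    for _ in range(max_slots):
        if channel_idle():
            if random.random() < p:
                return "transmit"   # send the frame in this slot
            # with probability q = 1 - p, defer to the next slot
        # channel busy (or we deferred): sense again next slot
    return "gave up"

# with p = 1 this degenerates to 1-persistent CSMA
print(p_persistent_send(lambda: True, p=1.0))  # transmit
```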

CSMA/ CD
It is a carrier sense multiple access / collision detection network protocol for transmitting data
frames. The CSMA/CD protocol works within the medium access control layer. A station first
senses the shared channel before broadcasting a frame, and if the channel is idle, it transmits
the frame while checking whether the transmission was successful. If the frame is successfully
received, the station sends the next frame. If any collision is detected, the station sends a
jam/stop signal on the shared channel to terminate the data transmission. After that, it waits
for a random time before sending the frame again.
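Ethernet-style CSMA/CD implementations usually choose that random wait with truncated binary exponential backoff. The sketch below assumes the classic 10 Mbps Ethernet slot time of 51.2 µs; it is an illustration, not the only possible scheme:

```python
import random

def backoff_time(collisions, slot_time=51.2e-6):
    """After the nth collision, wait k slot times, k random in [0, 2^n - 1]."""
    n = min(collisions, 10)            # the exponent is capped (truncated)
    k = random.randint(0, 2 ** n - 1)  # random slot count for this attempt
    return k * slot_time
```

In real adapters, a station typically gives up and reports an error after 16 consecutive collisions.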
CSMA/ CA
It is a carrier sense multiple access / collision avoidance network protocol for carrier
transmission of data frames. It works within the medium access control layer. When a station
sends a data frame on the channel, it listens to the channel to check whether the transmission
is clear. If the station hears only a single signal (its own), the data frame has been successfully
transmitted to the receiver. But if it hears two signals (its own and another station's), a
collision of frames has occurred on the shared channel. The sender detects the collision of a
frame when it fails to receive an acknowledgment signal.
Following are the methods used in the CSMA/ CA to avoid the collision:
Interframe space: In this method, the station waits for the channel to become idle, and if it
finds the channel idle, it does not immediately send the data. Instead, it waits for some time,
and this time period is called the interframe space or IFS. The IFS time is often used to define
the priority of a station.
Contention window: In the contention window method, the total time is divided into slots.
When the station/sender is ready to transmit the data frame, it chooses a random number of
slots as its wait time. If the channel becomes busy again, it does not restart the entire process;
it only pauses the timer and resumes it, sending the data packets once the channel is inactive.
Acknowledgment: In the acknowledgment method, the sender station retransmits the data
frame on the shared channel if the acknowledgment is not received in time.
Almost all collisions can be avoided in CSMA/CD, but they can still occur during the
contention period. Collisions during the contention period adversely affect system
performance; this happens when the cable is long and the packets are short. The problem
became more serious as fiber-optic networks came into use. Here we shall discuss some
protocols that eliminate collisions during the contention period.
● Bit-map Protocol
● Binary Countdown
● Limited Contention Protocols
● The Adaptive Tree Walk Protocol
Pure and slotted Aloha, CSMA and CSMA/CD are contention-based protocols:
● Try; if frames collide, retry
● No guarantee of performance
● What happens if the network load is high?

Collision Free Protocols:
● Pay constant overhead to achieve performance guarantee
● Good when network load is high
1. Bit-map Protocol:
The bit-map protocol is a collision-free protocol. In the bit-map method, each contention
period consists of exactly N slots. If a station has a frame to send, it transmits a 1 bit in the
corresponding slot. For example, if station 2 has a frame to send, it transmits a 1 bit in the 2nd
slot.
In general, station i announces the fact that it has a frame to send by inserting a 1 bit into slot
i. In this way, each station has complete knowledge of which stations wish to transmit. There
will never be any collisions because everyone agrees on who goes next. Protocols like this, in
which the desire to transmit is broadcast before the actual transmission, are called Reservation
Protocols.

Bit Map Protocol fig (1.1)


For analyzing the performance of this protocol, we will measure time in units of the contention
bit slot, with a data frame consisting of d time units. Under low-load conditions, the bitmap
will simply be repeated over and over, for lack of data frames. Under high load, all the stations
have something to send all the time; the N-bit contention period is prorated over N frames,
yielding an overhead of only 1 bit per frame.
Under low load, high-numbered stations have to wait on average half a scan (N/2 bit slots)
before starting to transmit, while low-numbered stations have to wait on average 1.5N slots.
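The reservation idea can be sketched directly. This is a toy model of a single contention period, with stations indexed from 0:

```python
def bitmap_round(wants_to_send):
    """One contention period of N slots; station i sets bit i if it has a frame."""
    reservation = [1 if w else 0 for w in wants_to_send]
    # the reservation bits fix the transmission order, so no collisions occur
    return [i for i, bit in enumerate(reservation) if bit]

print(bitmap_round([False, True, False, True, True]))  # [1, 3, 4]
```

Stations 1, 3 and 4 reserved slots, so they transmit in exactly that order.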

2. Binary Countdown:
The binary countdown protocol is used to overcome the bit-map protocol's overhead of 1 bit
per station. In binary countdown, binary station addresses are used. A station wanting to use
the channel broadcasts its address as a binary bit string, starting with the high-order bit. All
addresses are assumed to be of the same length. Here is an example to illustrate the working of
binary countdown.
In this method, the address bits broadcast by the different stations are ORed together, which
decides the priority of transmitting. Suppose stations 0001, 1001, 1100 and 1011 are all trying
to seize the channel for transmission. All the stations first broadcast their most significant
address bit, that is 0, 1, 1, 1 respectively. The most significant bits are ORed together. Station
0001 sees the 1 in another station's address, knows that a higher-numbered station is competing
for the channel, and gives up for the current round.
The other three stations, 1001, 1100 and 1011, continue. Of these, only station 1100 has a 1 as
its next bit, so stations 1011 and 1001 give up because their 2nd bit is 0. Then station 1100
starts transmitting a frame, after which another bidding cycle starts.
Binary Countdown fig (1.2)
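The bidding rounds above can be reproduced with a short function, written here for illustration; station addresses are assumed to be equal-length bit strings:

```python
def binary_countdown(addresses):
    """Return the address that wins the channel: bid bit by bit, MSB first."""
    contenders = list(addresses)
    for i in range(len(contenders[0])):
        # the channel ORs together the bits broadcast in this round
        channel_bit = max(a[i] for a in contenders)
        # a station that sent 0 while the channel carries a 1 gives up
        contenders = [a for a in contenders if a[i] == channel_bit]
    return contenders[0]

print(binary_countdown(["0001", "1001", "1100", "1011"]))  # 1100
```

Note that the winner is always the numerically highest address, which is exactly the priority rule binary countdown implements.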
3. Limited Contention Protocols:
● Collision based protocols (pure and slotted ALOHA, CSMA/CD) are good when the
network load is low.
● Collision free protocols (bitmap, binary Countdown) are good when load is high.
● How about combining their advantages:
1. Behave like the ALOHA scheme under light load
2. Behave like the bitmap scheme under heavy load.
4. Adaptive Tree Walk Protocol:
● Partition the group of stations and limit the contention for each slot.
● Under light load, every station can try for each slot, like ALOHA.
● Under heavy load, only a group can try for each slot.
● How do we do it:
1. Treat every station as a leaf of a binary tree.
2. In the first slot (after a successful transmission), all stations
can try to get the slot (under the root node).
3. If there is no conflict, fine.
4. Else, in case of conflict, only nodes under a subtree get to try for the next slot
(depth-first search).
Adaptive Tree Walk Protocol fig (1.3)
Slot-0 : C*, E*, F*, H* (all ready nodes under node 0 can try), conflict
Slot-1 : C* (all nodes under node 1 can try), C sends
Slot-2 : E*, F*, H* (all nodes under node 2 can try), conflict
Slot-3 : E*, F* (all nodes under node 5 can try to send), conflict
Slot-4 : E* (all nodes under E can try), E sends
Slot-5 : F* (all nodes under F can try), F sends
Slot-6 : H* (all nodes under node 6 can try to send), H sends.
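The slot sequence above can be generated by a depth-first probe of the tree. The sketch below assumes eight stations A through H as leaves of a binary tree, with C, E, F and H ready to send:

```python
def leaves(node):
    """All station names in the subtree rooted at node."""
    if isinstance(node, str):
        return [node]
    return leaves(node[0]) + leaves(node[1])

def tree_walk(node, ready, slots):
    """Probe a node in one slot; on a collision, probe both children (DFS)."""
    active = sorted(s for s in leaves(node) if s in ready)
    slots.append(active)               # stations answering this probe slot
    if len(active) > 1:                # conflict: split the group
        tree_walk(node[0], ready, slots)
        tree_walk(node[1], ready, slots)

# stations A..H as leaves; C, E, F, H have frames to send
tree = ((("A", "B"), ("C", "D")), (("E", "F"), ("G", "H")))
slots = []
tree_walk(tree, {"C", "E", "F", "H"}, slots)
print(len(slots))  # 7 slots, matching Slot-0 .. Slot-6 above
```

Each slot holding exactly one ready station corresponds to a successful transmission, so C, E, F and H send in that order.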

Ethernet is a set of technologies and protocols that are used primarily in LANs. It was first
standardized in the 1980s by the IEEE 802.3 standard. IEEE 802.3 defines the physical layer
and the medium access control (MAC) sub-layer of the data link layer for wired Ethernet
networks. Ethernet is classified into two categories: classic Ethernet and switched Ethernet.
Classic Ethernet is the original form of Ethernet and provides data rates between 3 and 10
Mbps. The varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum
throughput, i.e. 10 Mbps, BASE denotes the use of baseband transmission, and X is the type of
medium used. Most varieties of classic Ethernet have become obsolete in present-day
communication.
A switched Ethernet uses switches to connect to the stations in the LAN. It replaces the repeaters
used in classic Ethernet and allows full bandwidth utilization.
IEEE 802.3 Popular Versions
There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
​ IEEE 802.3: This was the original standard given for 10BASE-5. It used a thick single
coaxial cable into which a connection can be tapped by drilling into the cable to the core.
Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denoted use of baseband
transmission, and 5 refers to the maximum segment length of 500m.
​ IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner
variety where the segments of coaxial cables are connected by BNC connectors. The 2
refers to the maximum segment length of about 200m (185m to be precise).
​ IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses unshielded
twisted pair (UTP) copper wires as physical layer medium. The further variations were
given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and 100BASE-FX.
​ IEEE 802.3j: This gave the standard for Ethernet over Fiber (10BASE-F) that uses fiber
optic cables as the medium of transmission.

Frame Format of Classic Ethernet and IEEE 802.3


The main fields of a frame of classic Ethernet are -
​ Preamble: It is the starting field that provides alert and timing pulse for transmission. In
case of classic Ethernet it is an 8 byte field and in case of IEEE 802.3 it is of 7 bytes.
​ Start of Frame Delimiter: It is a 1 byte field in a IEEE 802.3 frame that contains an
alternating pattern of ones and zeros ending with two ones.
​ Destination Address: It is a 6 byte field containing physical address of destination
stations.
​ Source Address: It is a 6 byte field containing the physical address of the sending
station.
​ Length: It is a 2-byte field that stores the number of bytes in the data field.
​ Data: This is a variable-sized field that carries the data from the upper layers. The maximum
size of the data field is 1500 bytes.
​ Padding: This is added to the data to bring its length to the minimum requirement of 46
bytes.
​ CRC: CRC stands for cyclic redundancy check. It contains the error detection
information.
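Adding up the field sizes listed above gives the familiar Ethernet frame limits. (By convention the preamble and start-of-frame delimiter are not counted in the frame length.)

```python
DEST, SRC, LENGTH, CRC = 6, 6, 2, 4      # header and trailer bytes
MIN_DATA, MAX_DATA = 46, 1500            # data field limits (with padding)

min_frame = DEST + SRC + LENGTH + MIN_DATA + CRC
max_frame = DEST + SRC + LENGTH + MAX_DATA + CRC
print(min_frame, max_frame)              # 64 1518
```

The 46-byte minimum data requirement exists precisely so that every frame reaches the 64-byte minimum needed for collision detection on classic Ethernet.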
High-level Data Link Control (HDLC) is a group of communication protocols of the data link
layer for transmitting data between network points or nodes. Since it is a data link protocol,
data is organized into frames. A frame is transmitted via the network to the destination that
verifies its successful arrival. It is a bit - oriented protocol that is applicable for both point - to -
point and multipoint communications.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous balanced
mode.
​ Normal Response Mode (NRM) − Here, there are two types of stations: a primary
station that sends commands and secondary stations that respond to received
commands. It is used for both point - to - point and multipoint communications.
​ Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each
station can both send commands and respond to commands. It is used for only point - to -
point communications.

HDLC Frame
HDLC is a bit - oriented protocol where each frame contains up to six fields. The structure varies
according to the type of frame. The fields of a HDLC frame are −
​ Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
​ Address − It contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary station. The address field may be
from 1 byte to several bytes.
​ Control − It is 1 or 2 bytes containing flow and error control information.
​ Payload − This carries the data from the network layer. Its length may vary from one
network to another.
​ FCS − It is a 2 byte or 4 bytes frame check sequence for error detection. The standard
code used is CRC (cyclic redundancy code)

Types of HDLC Frames


There are three types of HDLC frames. The type of frame is determined by the control field of
the frame −
​ I-frame − I-frames or Information frames carry user data from the network layer. They
also include flow and error control information that is piggybacked on user data. The first
bit of control field of I-frame is 0.
​ S-frame − S-frames or Supervisory frames do not contain information field. They are
used for flow and error control when piggybacking is not required. The first two bits of
control field of S-frame is 10.
​ U-frame − U-frames or Un-numbered frames are used for myriad miscellaneous
functions, like link management. It may contain an information field, if required. The
first two bits of control field of U-frame is 11.
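The rule above translates directly into a small classifier. This is a sketch for illustration; the control field is given as a bit string, first transmitted bit first:

```python
def hdlc_frame_type(control_bits):
    """Classify an HDLC frame by the leading bits of its control field."""
    if control_bits[0] == "0":
        return "I-frame"        # first bit 0: information frame
    if control_bits[:2] == "10":
        return "S-frame"        # first two bits 10: supervisory frame
    return "U-frame"            # first two bits 11: unnumbered frame

print(hdlc_frame_type("01100000"))  # I-frame
print(hdlc_frame_type("10010000"))  # S-frame
print(hdlc_frame_type("11000001"))  # U-frame
```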
In computer networking, switching is the process of transferring data packets from one device
to another in a network, or from one network to another, using specific devices called switches.
A computer user experiences switching all the time. For example, when accessing the Internet
from your computer, whenever you request a webpage, the request is served through switching
of data packets.
Switching takes place at the Data Link layer of the OSI Model. This means that once frames
arrive over the Physical Layer, switching is the immediate next process in data communication.
In this article, we shall discuss the different processes involved in switching, what kind of
hardware is used in switching, etc.
What is a Switch?
A switch is a hardware device in a network that connects other devices, like computers and
servers. It helps multiple devices share a network without their data interfering with each other.
A switch works like a traffic cop at a busy intersection. When a data packet arrives, the switch
decides where it needs to go and sends it through the right port.
Some data packets come from devices directly connected to the switch, like computers or VoIP
phones. Other packets come from devices connected through hubs or routers.
The switch knows which devices are connected to it and can send data directly between them. If
the data needs to go to another network, the switch sends it to a router, which forwards it to the
correct destination.
What is Network Switching?
A switch is a dedicated piece of computer hardware that facilitates the process of switching,
i.e., receiving incoming data packets and transferring them to their destination. A switch works
at the Data Link layer of the OSI Model. A switch primarily handles incoming data packets
from a source computer or network and decides the appropriate port through which the data
packets will reach their target computer or network.
A switch decides the port through which a data packet shall pass with the help of its
destination MAC (Media Access Control) address. A switch does this effectively by
maintaining a switching table (also known as a forwarding table). A network switch is more
efficient than a network hub or repeater because it maintains a switching table, which
simplifies its task and reduces congestion on the network, effectively improving network
performance.

Process of Switching
The switching process involves the following steps:
● Frame Reception: The switch receives a data frame or packet from a computer
connected to its ports.
● MAC Address Extraction: The switch reads the header of the data frame and
collects the destination MAC Address from it.
● MAC Address Table Lookup: Once the switch has retrieved the MAC Address, it
performs a lookup in its Switching table to find a port that leads to the MAC Address
of the data frame.
● Forwarding Decision and Switching Table Update: If the switch matches the
destination MAC Address of the frame to the MAC address in its switching table, it
forwards the data frame to the respective port. However, if the destination MAC
Address does not exist in its forwarding table, it follows the flooding process, in
which it sends the data frame to all its ports except the one it came from and records
all the MAC Addresses to which the frame was delivered. This way, the switch finds
the new MAC Address and updates its forwarding table.
● Frame Transition: Once the destination port is found, the switch sends the data
frame to that port and forwards it to its target computer/network.
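The lookup-learn-flood cycle above can be modelled in a few lines. This is a toy switch written for illustration: MAC addresses are plain strings and ports are integers:

```python
class LearningSwitch:
    def __init__(self, n_ports):
        self.table = {}                  # MAC address -> port (switching table)
        self.ports = list(range(n_ports))

    def receive(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port    # learn where the source lives
        if dst_mac in self.table:
            return [self.table[dst_mac]]               # known: forward to one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = LearningSwitch(4)
print(sw.receive("AA", "BB", 0))   # BB unknown: flood -> [1, 2, 3]
print(sw.receive("BB", "AA", 1))   # AA was learned on port 0 -> [0]
```

After the two frames, the switch has learned both addresses, so all further traffic between them is forwarded directly without flooding.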
Types of Switching
There are three types of switching methods:
● Message Switching
● Circuit Switching
● Packet Switching
○ Datagram Packet Switching
○ Virtual Circuit Packet Switching

Let us now discuss them individually:


Message Switching: This is an older switching technique that has become obsolete. In the
message switching technique, the entire data block/message is forwarded hop by hop through
the network as a single unit, making it highly inefficient.
Circuit Switching: In this type of switching, a connection is established between the source and
destination beforehand. This connection receives the complete bandwidth of the network until
the data is transferred completely.
This approach is better than message switching because the data travels only along the reserved path to its destination rather than across the entire network.
Packet Switching: This technique requires the data to be broken down into smaller components,
data frames, or packets. These data frames are then transferred to their destinations according to
the available resources in the network at a particular time.
This switching type is used in modern computers and even the Internet. Here, each data frame
contains additional information about the destination and other information required for proper
transfer through network components.
Datagram Packet Switching: In Datagram Packet switching, each data frame is taken as an
individual entity and thus, they are processed separately. Here, no connection is established
before data transmission occurs. Although this approach provides flexibility in data transfer, it
may cause a loss of data frames or late delivery of the data frames.
Virtual-Circuit Packet Switching: In Virtual-Circuit Packet switching, a logical connection
between the source and destination is made before transmitting any data. These logical
connections are called virtual circuits. Each data frame follows these logical paths and provides a
reliable way of transmitting data with less chance of data loss.
Conclusion
In conclusion, switching is a fundamental networking process that enables the exchange of data
between devices within a network. By efficiently directing data packets to their correct
destinations, switches help maintain smooth and organized communication, ensuring that
multiple devices can share the same network without interference. Switching is crucial for the
seamless operation of local-area networks (LANs) and the overall performance of network
infrastructure.
A technique of inter-networking called Tunneling is used when source and destination networks
of the same type are to be connected through a network of different types. Tunneling uses a
layered protocol model such as those of the OSI or TCP/IP protocol suite.
In other words, as data moves from host A to host B it passes through the different levels of the specified protocol stack (OSI, TCP/IP, etc.); the data conversion (encapsulation) performed so that the data suits the interface of each particular layer is what is called tunneling.
For example, let us consider an Ethernet to be connected to another Ethernet through a WAN as:

Tunneling
The task is sent on an IP packet from host A of Ethernet-1 to host B of Ethernet-2 via a WAN.
Steps
● Host A constructs a packet that contains the IP address of Host B.
● It then inserts this IP packet into an Ethernet frame and this frame is addressed to the
multiprotocol router M1
● Host A then puts this frame on Ethernet.
● When M1 receives this frame, it removes the IP packet, inserts it in the payload
packet of the WAN network layer packet, and addresses the WAN packet to M2. The
multiprotocol router M2 removes the IP packet and sends it to host B in an Ethernet
frame.
How Does Encapsulation Work?
Data travels from one place to another in the form of packets, and a packet has two parts, the first
one is the header which consists of the destination address and the working protocol and the
second thing is its contents.
In simple terminology, encapsulation is the process of placing a packet inside another packet: the original packet, including its header, becomes the payload section of the surrounding packet.
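The encapsulation step from the tunneling example above can be illustrated with nested dictionaries standing in for packets. The names (Host-A, M1, M2, etc.) follow the example in the text; this is a toy model, not a real protocol implementation:

```python
def encapsulate(outer_header, inner_packet):
    """Wrap an existing packet, header and all, as the payload of a new packet."""
    return {"header": outer_header, "payload": inner_packet}

# Host A builds an IP packet addressed to host B ...
ip_packet = {"header": {"src": "Host-A", "dst": "Host-B"}, "payload": "data"}
# ... multiprotocol router M1 tunnels it by wrapping it in a WAN packet for M2.
wan_packet = encapsulate({"src": "M1", "dst": "M2"}, ip_packet)
# Router M2 decapsulates and recovers the original IP packet unchanged.
assert wan_packet["payload"] is ip_packet
```

The inner IP packet never has to "know" about the WAN: it simply rides through the tunnel as opaque payload.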
Why is this Technique Called Tunneling?
In this particular example, the IP packet does not have to deal with WAN, and the host’s A and B
also do not have to deal with the WAN. The multiprotocol routers M1 and M2 will have to
understand IP and WAN packets. Therefore, the WAN can be imagined to be equivalent to a big
tunnel extending between multiprotocol routers M1 and M2 and the technique is called
Tunneling.
Types of Tunneling Protocols
1. Generic Routing Encapsulation
2. Internet Protocol Security
3. Ip-in-IP
4. SSH
5. Point-to-Point Tunneling Protocol
6. Secure Socket Tunneling Protocol
7. Layer 2 Tunneling Protocol
8. Virtual Extensible Local Area Network
1. Generic Routing Encapsulation (GRE)
Generic Routing Encapsulation is a method of encapsulation of IP packets in a GRE header that
hides the original IP packet. Also, a new header named delivery header is added above the GRE
header which contains the new source and destination address.
The delivery header acts as a new IP header containing the new source and destination addresses. Only routers on which GRE is configured can add and remove the GRE header (GRE provides encapsulation, not encryption). The original IP packet enters one GRE-configured router, travels in encapsulated form, and emerges from another GRE-configured router as the original IP packet, as if it had traveled through a tunnel. Hence, this process is called GRE tunneling.
2. Internet Protocol Security (IPsec)
IP security (IPSec) is an Internet Engineering Task Force (IETF) standard suite of protocols
between 2 communication points across the IP network that provide data authentication,
integrity, and confidentiality. It also defines the encrypted, decrypted, and authenticated packets.
The protocols needed for secure key exchange and key management are defined in it.
3. IP-in-IP
IP-in-IP is a Tunneling Protocol for encapsulating IP packets inside another IP packet.
4. Secure Shell (SSH)
SSH (Secure Shell) is a cryptographic network protocol used for transferring encrypted data over a network and for logging in remotely from one system to another. It allows you to connect to a server, or multiple servers, without having to remember or enter your password for each system.
5. Point-to-Point Tunneling Protocol (PPTP)
PPTP or Point-to-Point Tunneling Protocol generates a tunnel and confines the data packet.
Point-to-Point Protocol (PPP) is used to encrypt the data between the connection. PPTP is one of
the most widely used VPN protocols and has been in use since the early release of Windows.
PPTP is also used on Mac and Linux apart from Windows.

Point-to-Point Tunneling Protocol (PPTP)


6. Secure Socket Tunneling Protocol (SSTP)
A VPN protocol developed by Microsoft that uses SSL to secure the connection; it is available only on Windows.
7. Layer 2 Tunneling Protocol (L2TP)
L2TP stands for Layer 2 Tunneling Protocol, published in 2000 as proposed standard RFC 2661.
It is a computer networking protocol that was designed to support VPN connections used by an
Internet service provider (ISP) to enable VPN operation over the Internet. L2TP combines the
best features of two other tunneling protocols- PPTP(Point-to-Point Tunneling Protocol) from
Microsoft and L2F(Layer 2 Forwarding) from Cisco Systems.
8. Virtual Extensible Local Area Network (VXLAN)
Virtual Extensible Local Area Network (VXLAN) is a network virtualization technology that stretches Layer 2 connections over Layer 3 networks by encapsulating Ethernet frames in a VXLAN packet, which includes IP addresses, to address the scalability problem in a more extensible manner.
What is SSL Tunneling?
SSL Tunneling involves a client that requires an SSL connection to a backend service or secures
a server via a proxy server. This proxy server opens the connection between the client and the
backend service and copies the data to both sides without any direct interference in the SSL
connection.
SSL Tunneling
Fragmentation is done by the network layer when the maximum size of datagram is greater than
maximum size of data that can be held in a frame i.e., its Maximum Transmission Unit (MTU).
The network layer divides the datagram received from the transport layer into fragments so that
data flow is not disrupted.

● Since there are 16 bits for total length in the IP header, the maximum size of an IP datagram = 2^16 – 1 = 65,535 bytes.

● Fragmentation is done by the network layer and is usually performed at routers along the path; reassembly is done only at the destination.
● The source side usually does not require fragmentation because of wise (good) segmentation by the transport layer: instead of segmenting at the transport layer and fragmenting again at the network layer, the transport layer looks at the datagram data limit and the frame data limit and segments the data so that each resulting datagram easily fits in a frame without the need for fragmentation.
● The receiver identifies fragments of the same datagram using the Identification (16-bit) field in the IP header; every fragment of a datagram carries the same identification number.
● The receiver determines the order of the fragments using the Fragment Offset (13-bit) field in the IP header.
● Overhead at the network layer is present due to the extra header introduced due to
fragmentation.
The Need for Fragmentation at the Network Layer:
Fragmentation at the Network Layer is a process of dividing a large data packet into smaller
pieces, known as fragments, to improve the efficiency of data transmission over a network. The
need for fragmentation at the network layer arises from several factors:
1.Maximum Transmission Unit (MTU): Different networks have different Maximum
Transmission Unit (MTU) sizes, which determine the maximum size of a data packet that can be
transmitted over that network. If the size of a data packet exceeds the MTU, it needs to be
fragmented into smaller fragments that can be transmitted over the network.
2.Network Performance: Large data packets can consume a significant amount of network
resources and can cause congestion in the network. Fragmentation helps to reduce the impact of
large data packets on network performance by breaking them down into smaller fragments that
can be transmitted more efficiently.
3.Bandwidth Utilization: Large data packets may consume a significant amount of network
bandwidth, causing other network traffic to be slowed down. Fragmentation helps to reduce the
impact of large data packets on network bandwidth utilization by breaking them down into
smaller fragments that can be transmitted more efficiently.
Fragmentation at the network layer is necessary in order to ensure efficient and reliable
transmission of data over communication networks.
1.Large Packet Size: In some cases, the size of the packet to be transmitted may be too large for
the underlying communication network to handle. Fragmentation at the network layer allows the
large packet to be divided into smaller fragments that can be transmitted over the network.
2.Path MTU: The Maximum Transmission Unit (MTU) of a network defines the largest packet
size that can be transmitted over the network. Fragmentation at the network layer allows the
packet to be divided into smaller fragments that can be transmitted over networks with different
MTU values.
3.Reliable Transmission: Fragmentation at the network layer increases the reliability of data
transmission, as smaller fragments are less likely to be lost or corrupted during transmission.
Fields in IP header for fragmentation –
● Identification (16 bits) – used to identify fragments of the same datagram.
● Fragment offset (13 bits) – used to identify the position of a fragment within the datagram. It indicates the number of data bytes preceding the fragment.
Maximum fragment offset possible = 65535 – 20 = 65515
{where 65535 is the maximum size of a datagram and 20 is the minimum size of the IP header}
So we would need ceil(log2(65515)) = 16 bits for the fragment offset, but the fragment offset field has only 13 bits. To represent offsets efficiently, the offset is therefore scaled down by 2^16 / 2^13 = 8, which acts as a scaling factor. Hence, every fragment except the last should carry data in multiples of 8 bytes so that the fragment offset is a whole number.
● More fragments (MF = 1 bit) – tells if more fragments are ahead of this fragment
i.e. if MF = 1, more fragments are ahead of this fragment and if MF = 0, it is the last
fragment.
● Don’t fragment (DF = 1 bit) – if we don’t want the packet to be fragmented then DF
is set i.e. DF = 1.
Reassembly of Fragments –
It takes place only at the destination and not at routers, since packets take independent paths (datagram packet switching), so all fragments may not meet at any one router, and a need for fragmentation may arise again downstream. The fragments may also arrive out of order.
Algorithm –
1. Destination should identify that datagram is fragmented from MF, Fragment offset
field.
2. Destination should identify all fragments belonging to same datagram from
Identification field.
3. Identify the 1st fragment(offset = 0).
4. Identify subsequent fragments using header length, fragment offset.
5. Repeat until MF = 0.
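The reassembly algorithm above can be sketched as follows. Each fragment here is modelled as a dictionary carrying the fragment offset (in 8-byte units) and the MF bit, as in the IP header; this is an illustrative sketch, not a full IP implementation:

```python
def reassemble(fragments):
    """Reassemble fragments of one datagram (same Identification field).

    Each fragment is {"offset": n (8-byte units), "mf": 0 or 1, "data": bytes}.
    Fragments may arrive out of order; only the destination runs this.
    """
    ordered = sorted(fragments, key=lambda f: f["offset"])
    assert ordered[0]["offset"] == 0, "first fragment (offset = 0) missing"
    assert ordered[-1]["mf"] == 0, "last fragment (MF = 0) missing"
    return b"".join(f["data"] for f in ordered)

frags = [
    {"offset": 22, "mf": 1, "data": b"B" * 176},   # arrives first, out of order
    {"offset": 0,  "mf": 1, "data": b"A" * 176},
    {"offset": 44, "mf": 0, "data": b"C" * 148},   # MF = 0 marks the last fragment
]
print(len(reassemble(frags)))   # 500 bytes of payload
```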
Efficiency –
Efficiency (e) = useful/total = (Data without header)/(Data with header)

Throughput = e * B { where B is bottleneck bandwidth }


Example – An IP router with a Maximum Transmission Unit (MTU) of 200 bytes has received an IP packet of size 520 bytes with an IP header of length 20 bytes. Find the values of the relevant fields in the IP headers of the fragments.
Explanation – Since the MTU is 200 bytes and the header is 20 bytes, the maximum data length per fragment is 180 bytes; but 180 cannot be represented in the fragment offset field because it is not divisible by 8, so the maximum feasible data length per fragment is 176 bytes.
Number of fragments = ceil(500/176) = 3.
Header length field = 5 (the field counts 4-byte words, so 20/4 = 5).
Efficiency, e = (Data without header)/(Data with header) = 500/560 ≈ 89.3%
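The numbers in this worked example can be reproduced with a short fragmentation routine. This is a sketch of the arithmetic only (payload sizes, offsets, and efficiency), not a full IP implementation:

```python
def fragment_sizes(total_len, mtu, header_len=20):
    """Split an IP packet's data into fragment payload sizes for a given MTU."""
    data_len = total_len - header_len              # 520 - 20 = 500 bytes of data
    max_payload = ((mtu - header_len) // 8) * 8    # round down to a multiple of 8
    sizes = []
    while data_len > 0:
        sizes.append(min(max_payload, data_len))
        data_len -= sizes[-1]
    return sizes

sizes = fragment_sizes(520, 200)
print(sizes)                                   # [176, 176, 148] -> 3 fragments
offsets = [sum(sizes[:i]) // 8 for i in range(len(sizes))]
print(offsets)                                 # [0, 22, 44] in 8-byte units
data = sum(sizes)
efficiency = data / (data + 20 * len(sizes))   # 500 / 560
print(round(efficiency * 100, 1))              # 89.3
```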
Logical Address
Logical address, also referred to as IP (Internet Protocol) address, is a universal addressing system. It is used in the network layer. This address facilitates universal communication that is not dependent on the underlying physical networks. There are two types of IP addresses – IPv4 and IPv6.
The size of an IPv4 address is 32 bits. For example,
192.180.210.1 where 1 octet = 8 bits.
The size of an IPv6 address is 128 bits. For example,
1C18 : 1B32 : C450 : 62A5 : 34DC : AE24 : 15BC : 6A5D where 1 group = 16 bits.
Below is a diagram representing the working mechanism of Logical address:
Mechanism of Logical Address
In the above diagram, we can see that there are two networks – Network 1 and Network 2. A1 is the sender and there are two receivers – D1 and D2. With logical addressing, both D1 and D2 receive the data, because logical addresses can be passed between different networks. The purpose of using logical addresses is to send data across networks.
Advantages
● Logical addresses can be used in different networks because they can traverse routers.
● They can handle a large number of devices and networks. Even as the number of devices and networks increases, logical addressing handles all of them easily. Thus, they are highly scalable.
Disadvantages
● Internet Protocol is vulnerable to attacks such as hacking, phishing etc. and there can
be data loss.
● It lacks privacy. The data which is moving through the packets can be intercepted,
traced and monitored by unauthorized entities.
The address through which any computer communicates with our computer is simply called an
Internet Protocol Address or IP address. For example, if we want to load a web page or
download something, we require the address to deliver that particular file or webpage. That
address is called an IP Address.
There are two versions of IP: IPv4 and IPv6. IPv4 is the older version, while IPv6 is the newer
one. Both have their own features and functions, but they differ in many ways. Understanding
these differences helps us see why we need IPv6 as the internet grows and evolves.
What is IP?
An IP, or Internet Protocol address, is a unique set of numbers assigned to each device connected
to a network, like the Internet. It’s like an address for your computer, phone, or any other device,
allowing them to communicate with each other. When you visit a website, your device uses the
IP address to find and connect to the website’s server.
Types of IP Addresses
● IPv4 (Internet Protocol Version 4)
● IPv6 (Internet Protocol Version 6)
What is IPv4?
IPv4 stands for Internet Protocol version 4. An IPv4 address consists of two parts: the network address and the host address. IPv4 was introduced in 1981 by DARPA and was first deployed for production on SATNET in 1982 and on the ARPANET in January 1983.
IPv4 addresses are 32-bit integers expressed in dotted-decimal notation: four numbers separated by dots, each in the range 0–255, which computers work with as 32 binary digits. For example, an IPv4 address can be written as 189.123.123.90.
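As a quick illustration, Python's standard ipaddress module can show the 32-bit value behind the dotted-decimal form used in the example above:

```python
import ipaddress

addr = ipaddress.IPv4Address("189.123.123.90")
print(int(addr))                   # the address as a single 32-bit integer
print(format(int(addr), "032b"))   # the same address as 32 binary digits
# Each of the four dotted-decimal numbers is one 8-bit octet (0-255):
print(addr.packed)                 # the 4 raw octets as bytes
```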
IPv4 Address Format
The IPv4 Address Format is a 32-bit address, written as four decimal numbers (octets) separated by dots (.).

IPv4 Address Format


Drawback of IPv4
● Limited Address Space: IPv4 has a limited number of addresses, which is not
enough for the growing number of devices connecting to the internet.
● Complex Configuration: IPv4 often requires manual configuration or DHCP to
assign addresses, which can be time-consuming and prone to errors.
● Less Efficient Routing: The IPv4 header is more complex, which can slow down
data processing and routing.
● Security Issues: IPv4 does not have built-in security features, making it more
vulnerable to attacks unless extra security measures are added.
● Limited Support for Quality of Service (QoS): IPv4 has limited capabilities for
prioritizing certain types of data, which can affect the performance of real-time
applications like video streaming and VoIP.
● Fragmentation: IPv4 allows routers to fragment packets, which can lead to
inefficiencies and increased chances of data being lost or corrupted.
● Broadcasting Overhead: IPv4 uses broadcasting to communicate with multiple
devices on a network, which can create unnecessary network traffic and reduce
performance.
What is IPv6?
IPv6 stands for Internet Protocol version 6 and is based on IPv4. It was first introduced in December 1995 by the Internet Engineering Task Force (IETF). IP version 6 is the newer version of the Internet Protocol and improves on IP version 4 in header complexity and efficiency. An IPv6 address is written as eight groups of hexadecimal digits separated by colons (:) and corresponds to 128 bits of 0s and 1s.
IPv6 Address Format
The IPv6 Address Format is a 128-bit IP address, written as eight groups of hexadecimal digits separated by colons (:).

IPv6 Address Format
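A short sketch with Python's standard ipaddress module illustrates the IPv6 notation rules (leading zeros within a group may be dropped, and the longest run of all-zero groups may be replaced by "::"):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")
print(addr.compressed)    # shorthand: leading zeros dropped, longest zero run -> ::
print(addr.exploded)      # the full eight groups of four hex digits
print(len(addr.packed))   # 128 bits = 16 bytes
```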


To switch from IPv4 to IPv6, there are several strategies:
● Dual Stacking: Devices can use both IPv4 and IPv6 at the same time. This way, they
can talk to networks and devices using either version.
● Tunneling: This method allows IPv6 users to send data through an IPv4 network to
reach other IPv6 users. Think of it as creating a “tunnel” for IPv6 traffic through the
older IPv4 system.
● Network Address Translation (NAT): NAT helps devices using different versions of
IP addresses (IPv4 and IPv6) to communicate with each other by translating the
addresses so they understand each other.
Difference Between IPv4 and IPv6
● Address length: IPv4 has a 32-bit address length; IPv6 has a 128-bit address length.
● Configuration: IPv4 supports manual and DHCP address configuration; IPv6 supports auto-configuration and address renumbering.
● End-to-end connection integrity: unachievable in IPv4; achievable in IPv6.
● Address space: IPv4 can generate about 4.29×10^9 addresses; IPv6 can produce about 3.4×10^38 addresses.
● Security: in IPv4, security depends on the application; in IPv6, IPSec is an inbuilt security feature.
● Address representation: IPv4 addresses are written in decimal; IPv6 addresses are written in hexadecimal.
● Fragmentation: in IPv4 it is performed by the sender and forwarding routers; in IPv6 it is performed only by the sender.
● Packet flow identification: not available in IPv4; available in IPv6, using the Flow Label field in the header.
● Checksum: the IPv4 header has a checksum field; the IPv6 header does not.
● Transmission scheme: IPv4 has a broadcast message transmission scheme; IPv6 offers multicast and anycast message transmission.
● Encryption and authentication: not provided in IPv4; provided in IPv6.
● Header size: IPv4 has a variable header of 20–60 bytes; IPv6 has a fixed header of 40 bytes.
● Conversion: IPv4 addresses can be converted to IPv6, but not all IPv6 addresses can be converted to IPv4.
● Fields: IPv4 consists of 4 fields separated by dots (.); IPv6 consists of 8 fields separated by colons (:).
● Classes: IPv4 addresses are divided into five classes – Class A, Class B, Class C, Class D, Class E; IPv6 does not have any classes of IP address.
● VLSM: IPv4 supports VLSM (Variable Length Subnet Mask); IPv6 does not.
● Example: IPv4 – 66.94.29.13; IPv6 – 2001:0000:3238:DFE1:0063:0000:0000:FEFB.

Benefits of IPv6 over IPv4


The recent version of IP, IPv6, has several advantages over IPv4. Here are some of the main benefits:
● Larger Address Space: IPv6 has a greater address space than IPv4, which is required for the expanding number of IP-connected devices. IPv6 has a 128-bit IP address, whereas IPv4 has a 32-bit address.
● Improved Security: IPv6 has some improved security which is built in with it. IPv6
offers security like Data Authentication, Data Encryption, etc. Here, an Internet
Connection is more Secure.
● Simplified Header Format: As compared to IPv4, IPv6 has a simpler and more
effective header Structure, which is more cost-effective and also increases the speed
of Internet Connection.
● Prioritize: IPv6 contains stronger and more reliable support for QoS features, which
helps in increasing traffic over websites and increases audio and video quality on
pages.
● Improved Support for Mobile Devices: IPv6 has increased and better support for
Mobile Devices. It helps in making quick connections over other Mobile Devices and
in a safer way than IPv4.
Conclusion
In simple terms, IPv4 and IPv6 are two versions of Internet Protocol addresses used to identify
devices on a network. IPv6 is the newer version and offers many improvements over IPv4, such
as a much larger address space, better security, and more efficient routing. However, IPv4 is still
widely used, and the transition to IPv6 is ongoing. The main difference is that IPv6 can handle
many more devices, which is crucial as the number of internet-connected devices continues to
grow.
Adaptive routing algorithms, also known as dynamic routing algorithms, make routing decisions dynamically while transferring data packets from the source to the destination. These algorithms construct routing tables depending on network conditions such as network traffic and topology. They try to compute the best path, i.e. the least-cost path, based on hop count, transit time, and distance.
Network Layer Protocols
TCP/IP supports the following protocols:
ARP
○ ARP stands for Address Resolution Protocol.
○ It is used to associate an IP address with the MAC address.
○ Each device on the network is recognized by the MAC address imprinted on its NIC. Therefore, we can say that devices need the MAC address for communication on a local area network. The MAC address can change easily: for example, if the NIC on a particular machine is replaced, the MAC address changes but the IP address does not. ARP is used to find the MAC address of a node when its internet address is known.

Note: MAC address: The MAC address is used to identify the actual device.

IP address: It is an address used to locate a device on the network.

How ARP works


If a host wants to know the physical address of another host on its network, it sends an ARP query packet that includes the IP address and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognizes its own IP address and sends back its physical address. The host holding the datagram adds this physical address to its cache memory and to the datagram header, and then sends the datagram on its way.
Steps taken by ARP protocol
If a device wants to communicate with another device, the following steps are taken by the
device:
○ The device will first look at its internal list, called the ARP cache, to check whether the IP address already has a matching MAC address. The ARP cache can be checked at the command prompt by using the command arp -a.

○ If the ARP cache has no matching entry, the device broadcasts a message to the entire network, asking each device for the matching MAC address.
○ The device that has the matching IP address will then respond back to the sender with its
MAC address
○ Once the MAC address is received by the device, then the communication can take place
between two devices.
○ When the device receives the MAC address, the address is stored in the ARP cache for future use.

Note: ARP cache is used to make a network more efficient.

There are two types of ARP entries:
○ Dynamic entry: It is an entry which is created automatically when the sender broadcast
its message to the entire network. Dynamic entries are not permanent, and they are
removed periodically.
○ Static entry: It is an entry where someone manually enters the IP to MAC address
association by using the ARP command utility.
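The lookup-then-broadcast behaviour, together with dynamic and static entries, can be modelled in a few lines of Python. This is a simulation for illustration only, not the real ARP wire protocol:

```python
class ArpCache:
    def __init__(self, network):
        self.network = network     # simulated LAN: IP address -> MAC address
        self.cache = {}            # holds dynamic (learned) and static entries

    def add_static(self, ip, mac):
        self.cache[ip] = mac       # manual entry, as with the arp command utility

    def resolve(self, ip):
        if ip in self.cache:               # step 1: check the ARP cache
            return self.cache[ip]
        mac = self.network.get(ip)         # step 2: broadcast an ARP request;
        if mac is not None:                # only the matching host replies
            self.cache[ip] = mac           # step 3: store the reply (dynamic entry)
        return mac

lan = {"192.168.1.7": "00:1A:2B:3C:4D:5E"}
arp = ArpCache(lan)
print(arp.resolve("192.168.1.7"))   # first lookup "broadcasts", then caches
print(arp.cache)                    # subsequent lookups hit the cache directly
```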

RARP
○ RARP stands for Reverse Address Resolution Protocol.
○ If a host wants to know its own IP address, it broadcasts a RARP query packet containing its physical address to the entire network. A RARP server on the network recognizes the RARP packet and responds with the host's IP address.
○ The protocol which is used to obtain the IP address from a server is known as Reverse
Address Resolution Protocol.
○ The message format of the RARP protocol is similar to the ARP protocol.
○ Like ARP frame, RARP frame is sent from one machine to another encapsulated in the
data portion of a frame.
ICMP
○ ICMP stands for Internet Control Message Protocol.
○ The ICMP is a network layer protocol used by hosts and routers to send the notifications
of IP datagram problems back to the sender.
○ ICMP uses echo test/reply to check whether the destination is reachable and responding.
○ ICMP handles both control and error messages, but its main function is to report errors, not to correct them.
○ An IP datagram contains the addresses of both source and destination, but it does not
know the address of the previous router through which it has been passed. Due to this
reason, ICMP can only send the messages to the source, but not to the immediate routers.
○ ICMP protocol communicates the error messages to the sender. ICMP messages cause the
errors to be returned back to the user processes.
○ ICMP messages are transmitted within IP datagram.
The Format of an ICMP message

○ The first field specifies the type of the message.


○ The second field specifies the reason for a particular message type.
○ The checksum field covers the entire ICMP message.
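The ICMP checksum is the standard 16-bit Internet checksum (a ones'-complement sum) computed over the whole message. A minimal sketch of the computation:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement Internet checksum, as used by ICMP (RFC 1071)."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# ICMP echo request: type=8, code=0, checksum=0 (while computing), id=1, seq=1
msg = b"\x08\x00\x00\x00\x00\x01\x00\x01"
csum = internet_checksum(msg)
print(hex(csum))
# Verification property: a message with the correct checksum inserted sums to 0.
assert internet_checksum(msg[:2] + csum.to_bytes(2, "big") + msg[4:]) == 0
```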
Error Reporting
ICMP protocol reports the error messages to the sender.
Five types of errors are handled by the ICMP protocol:
○ Destination unreachable
○ Source Quench
○ Time Exceeded
○ Parameter problems
○ Redirection

○ Destination unreachable: A "Destination Unreachable" message is sent back to the sender by a router or the receiving host when a packet is discarded because its destination cannot be reached.
○ Source Quench: The purpose of the source quench message is congestion control. The
message sent from the congested router to the source host to reduce the transmission rate.
ICMP will take the IP of the discarded packet and then add the source quench message to
the IP datagram to inform the source host to reduce its transmission rate. The source host
will reduce the transmission rate so that the router will be free from congestion.
○ Time Exceeded: The Time Exceeded message is tied to the "Time-To-Live" (TTL) parameter, which defines how long a packet may live before it is discarded.
There are two ways when Time Exceeded message can be generated:
Sometimes packet discarded due to some bad routing implementation, and this causes the
looping issue and network congestion. Due to the looping issue, the value of TTL keeps on
decrementing, and when it reaches zero, the router discards the datagram. However, when the
datagram is discarded by the router, the time exceeded message will be sent by the router to the
source host.
When destination host does not receive all the fragments in a certain time limit, then the received
fragments are also discarded, and the destination host sends time Exceeded message to the
source host.
○ Parameter problems: When a router or host discovers any missing value in the IP
datagram, the router discards the datagram, and the "parameter problem" message is sent
back to the source host.
○ Redirection: A redirection message is generated when a host has only a small routing table. Because the host has a limited number of entries, it may send a datagram to the wrong router. The router that receives the datagram forwards it to the correct router and also sends a "Redirection" message to the host so that it can update its routing table.

IGMP
○ IGMP stands for Internet Group Message Protocol.
○ The IP protocol supports two types of communication:
○ Unicasting: It is a communication between one sender and one receiver.
Therefore, we can say that it is one-to-one communication.
○ Multicasting: Sometimes the sender wants to send the same message to a large
number of receivers simultaneously. This process is known as multicasting which
has one-to-many communication.
○ The IGMP protocol is used by the hosts and router to support multicasting.
○ The IGMP protocol is used by the hosts and router to identify the hosts in a LAN that are
the members of a group.
○ IGMP is a part of the IP layer, and IGMP has a fixed-size message.
○ The IGMP message is encapsulated within an IP datagram.

The Format of IGMP message

Where,
Type: It determines the type of IGMP message. There are three types of IGMP message:
Membership Query, Membership Report and Leave Report.
Maximum Response Time: This field is used only by the Membership Query message. It specifies the maximum time within which a host may send its Membership Report message in response to the Membership Query message.
Checksum: The checksum is computed over the entire IGMP message, i.e., the entire payload of the IP datagram in which the IGMP message is encapsulated.
Group Address: The behavior of this field depends on the type of the message sent.
○ For Membership Query, the group address is set to zero for General Query and set to
multicast group address for a specific query.
○ For Membership Report, the group address is set to the multicast group address.
○ For Leave Group, it is set to the multicast group address.
IGMP Messages

○ Membership Query message


○ This message is sent by a router to all hosts on a local area network to determine
the set of all the multicast groups that have been joined by the host.
○ It also determines whether a specific multicast group has been joined by the hosts
on a attached interface.
○ The group address in the query is zero since the router expects one response from
a host for every group that contains one or more members on that host.
○ Membership Report message
○ The host responds to the membership query message with a membership report
message.
○ Membership report messages can also be generated by the host when a host wants
to join the multicast group without waiting for a membership query message from
the router.
○ Membership report messages are received by a router as well as all the hosts on
an attached interface.
○ Each membership report message includes the multicast address of a single group
that the host wants to join.
○ IGMP protocol does not care which host has joined the group or how many hosts
are present in a single group. It only cares whether one or more attached hosts
belong to a single multicast group.
○ The Membership Query message sent by a router also includes a "Maximum
Response Time". After receiving a Membership Query message and before
sending the Membership Report message, the host waits for a random amount of
time from 0 to the maximum response time. If a host observes that some other
attached host has already sent the "Membership Report message", then it discards
its own "Membership Report message", as it knows that the attached router
already knows that one or more hosts have joined the multicast group. This
process is known as feedback suppression. It provides a performance
optimization, thus avoiding the unnecessary transmission of "Membership Report
messages".
○ Leave Report
When a host leaves a group, it can send a Leave Report message. If no remaining
host on the network is a member, no "Membership Report message" is sent in
response to the next query, and the router concludes that there are no members
left in the group on that interface.
NAT:
To access the Internet, one public IP address is needed, but we can use a private IP address in our
private network. The idea of NAT is to allow multiple devices to access the Internet through a
single public address. To achieve this, the translation of a private IP address to a public IP
address is required. Network Address Translation (NAT) is a process in which one or more
local IP address is translated into one or more Global IP address and vice versa in order to
provide Internet access to the local hosts. Also, it does the translation of port numbers i.e. masks
the port number of the host with another port number, in the packet that will be routed to the
destination. It then makes the corresponding entries of IP address and port number in the NAT
table. NAT generally operates on a router or firewall.
Network Address Translation (NAT) working –
Generally, the border router is configured for NAT, i.e., the router which has one interface in the
local (inside) network and one interface in the global (outside) network. When a packet traverses
outside the local (inside) network, NAT converts that local (private) IP address to a global
(public) IP address. When a packet enters the local network, the global (public) IP address is
converted to a local (private) IP address.
If NAT runs out of addresses, i.e., no address is left in the configured pool, then the packets will
be dropped and an Internet Control Message Protocol (ICMP) host unreachable message is sent
back to the source.
Why mask port numbers ?
Suppose, in a network, two hosts A and B are connected. Now, both of them request for the same
destination, on the same port number, say 1000, on the host side, at the same time. If NAT does
only translation of IP addresses, then when their packets will arrive at the NAT, both of their IP
addresses would be masked by the public IP address of the network and sent to the destination.
Destination will send replies to the public IP address of the router. Thus, on receiving a reply, it
will be unclear to NAT as to which reply belongs to which host (because source port numbers for
both A and B are the same). Hence, to avoid such a problem, NAT masks the source port number
as well and makes an entry in the NAT table.
NAT inside and outside addresses –
Inside refers to the addresses which must be translated. Outside refers to the addresses which are
not in the control of the organization. These are the network addresses on which the translation of
addresses will be performed.
● Inside local address – An IP address that is assigned to a host on the Inside (local)
network. The address is probably not an IP address assigned by the service provider
i.e., these are private IP addresses. This is the inside host seen from the inside
network.

● Inside global address – IP address that represents one or more inside local IP
addresses to the outside world. This is the inside host as seen from the outside
network.

● Outside local address – This is the IP address of the outside (destination) host as it
appears to the inside network after translation.

● Outside global address – This is the outside host as seen from the outside network. It
is the IP address of the outside destination host before translation.

Network Address Translation (NAT) Types –


There are 3 ways to configure NAT:

1. Static NAT – In this, a single unregistered (private) IP address is mapped to a
legally registered (public) IP address, i.e., a one-to-one mapping between local and
global addresses. This is generally used for Web hosting. It is not used within
organizations, as there are many devices that need Internet access, and each would
require its own public IP address.
Suppose there are 3000 devices that need access to the Internet; the organization
would have to buy 3000 public addresses, which would be very costly.

2. Dynamic NAT – In this type of NAT, an unregistered (private) IP address is
translated into a registered (public) IP address from a pool of public IP addresses. If
no IP address in the pool is free, the packet is dropped, as only a fixed number of
private IP addresses can be translated to public addresses at a time.
Suppose there is a pool of 2 public IP addresses; then only 2 private IP addresses can
be translated at a given time. If a 3rd private host wants to access the Internet, its
packet is dropped. Dynamic NAT is used when the number of users who want to
access the Internet at a time is fixed. It is also costly, as the organization has to buy
many global IP addresses to make the pool.

3. Port Address Translation (PAT) – This is also known as NAT overload. In this,
many local (private) IP addresses can be translated to a single registered IP address.
Port numbers are used to distinguish the traffic i.e., which traffic belongs to which IP
address. This is most frequently used as it is cost-effective as thousands of users can
be connected to the Internet by using only one real global (public) IP address.
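The port-masking idea behind PAT can be sketched as a toy translation table (the class name, port range and methods are invented for illustration; a real NAT also tracks the transport protocol and entry timeouts):

```python
import itertools

class PAT:
    """Toy Port Address Translation table: many private (ip, port) pairs
    share one public IP and are distinguished by translated ports."""
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.ports = itertools.count(first_port)
        self.out = {}    # (private_ip, private_port) -> public_port
        self.back = {}   # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.out:          # allocate a fresh public port
            port = next(self.ports)
            self.out[key] = port
            self.back[port] = key
        return self.public_ip, self.out[key]

    def translate_in(self, public_port):
        return self.back[public_port]    # route the reply back inside

nat = PAT("203.0.113.5")
a = nat.translate_out("192.168.0.2", 1000)   # host A, source port 1000
b = nat.translate_out("192.168.0.3", 1000)   # host B, same source port
print(a, b)                   # same public IP, different public ports
print(nat.translate_in(a[1]))
```

Even though A and B use the same source port, the distinct translated ports let replies be demultiplexed back to the right host.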

Dynamic Host Configuration Protocol


Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to
dynamically assign an IP address to any device, or node, on a network so it can communicate
using IP (Internet Protocol). DHCP automates and centrally manages these configurations. There
is no need to manually assign IP addresses to new devices. Therefore, there is no requirement for
any user configuration to connect to a DHCP-based network.
DHCP can be implemented on local networks as well as large enterprise networks. DHCP is the
default protocol used by most routers and networking equipment. DHCP is defined in RFC
(Request for Comments) 2131.
DHCP does the following:
○ DHCP manages the provision of all the nodes or devices added or dropped from the
network.
○ DHCP maintains the unique IP address of the host using a DHCP server.
○ It sends a request to the DHCP server whenever a client/node/device, which is configured
to work with DHCP, connects to a network. The server acknowledges by providing an IP
address to the client/node/device.
DHCP is also used to configure the proper subnet mask, default gateway and DNS server
information on the node or device.
Versions of DHCP are available for use with IPv4 (Internet Protocol version 4) and IPv6
(Internet Protocol version 6).
How DHCP works
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign IP
addresses to DHCP clients/nodes and to allocate TCP/IP configuration information to the DHCP
clients. Information includes subnet mask information, default gateway, IP addresses and domain
name system addresses.
DHCP is based on client-server protocol in which servers manage a pool of unique IP addresses,
as well as information about client configuration parameters, and assign addresses out of those
address pools.
The DHCP lease process works as follows:
○ First of all, a client (network device) must be connected to the internet.
○ DHCP clients request an IP address. Typically, the client broadcasts a query for this
information.
○ The DHCP server responds to the client request by providing an IP address and other
configuration information. This configuration information also includes a time period,
called a lease, for which the allocation is valid.
○ When refreshing an assignment, a DHCP client requests the same parameters, but the
DHCP server may assign a new IP address, based on the policies set by the
administrator.
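The lease process above can be sketched as a minimal server-side address pool (the class name, sequential lowest-first allocation and expiry handling are simplifying assumptions for illustration, not a real DHCP implementation):

```python
import time

class DhcpPool:
    """Toy DHCP server-side pool: hand out addresses sequentially from
    lowest to highest and track lease expiry times."""
    def __init__(self, addresses, lease_seconds):
        self.free = sorted(addresses)
        self.leases = {}             # ip -> (client_id, expiry time)
        self.lease_seconds = lease_seconds

    def request(self, client_id, now=None):
        now = time.time() if now is None else now
        # Renewal: prefer the address the client already holds.
        for ip, (cid, _) in self.leases.items():
            if cid == client_id:
                self.leases[ip] = (cid, now + self.lease_seconds)
                return ip
        # Reclaim expired leases before allocating a new address.
        for ip in [ip for ip, (_, exp) in self.leases.items() if exp <= now]:
            del self.leases[ip]
            self.free.append(ip)
        self.free.sort()
        ip = self.free.pop(0)        # lowest free address first
        self.leases[ip] = (client_id, now + self.lease_seconds)
        return ip

pool = DhcpPool(["10.0.0.10", "10.0.0.11"], lease_seconds=3600)
ip1 = pool.request("aa:bb", now=0)     # first client gets the lowest address
ip2 = pool.request("cc:dd", now=0)     # second client gets the next one
ip3 = pool.request("aa:bb", now=100)   # renewal keeps the same address
print(ip1, ip2, ip3)
```

Once a lease expires without renewal, the address returns to the pool and can be handed to a different client.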
Components of DHCP
When working with DHCP, it is important to understand all of the components. Following are
the list of components:
○ DHCP Server: DHCP server is a networked device running the DHCP service that holds
IP addresses and related configuration information. This is typically a server or a router
but could be anything that acts as a host, such as an SD-WAN appliance.
○ DHCP client: DHCP client is the endpoint that receives configuration information from
a DHCP server. This can be any device like computer, laptop, IoT endpoint or anything
else that requires connectivity to the network. Most of the devices are configured to
receive DHCP information by default.
○ IP address pool: IP address pool is the range of addresses that are available to DHCP
clients. IP addresses are typically handed out sequentially from lowest to the highest.
○ Subnet: Subnet is the partitioned segments of the IP networks. Subnet is used to keep
networks manageable.
○ Lease: Lease is the length of time for which a DHCP client holds the IP address
information. When a lease expires, the client has to renew it.
○ DHCP relay: A host or router that listens for client messages being broadcast on that
network and then forwards them to a configured server. The server then sends responses
back to the relay agent that passes them along to the client. DHCP relay can be used to
centralize DHCP servers instead of having a server on each subnet.
Benefits of DHCP
There are following benefits of DHCP:
Centralized administration of IP configuration: DHCP IP configuration information can be
stored in a single location, enabling the administrator to centrally manage all IP address
configuration information.
Dynamic host configuration: DHCP automates the host configuration process and eliminates
the need to manually configure individual hosts when TCP/IP (Transmission Control
Protocol/Internet Protocol) is first deployed or when IP infrastructure changes are required.
Seamless IP host configuration: The use of DHCP ensures that DHCP clients get accurate and
timely IP configuration parameters, such as the IP address, subnet mask, default gateway and IP
address of the DNS server, without user intervention.
Flexibility and scalability: Using DHCP gives the administrator increased flexibility, allowing
the administrator to easily change the IP configuration when the infrastructure changes.
Types of Adaptive Routing Algorithms
The three popular types of adaptive routing algorithms are shown in the following diagram −

​ Centralized algorithm − In centralized routing, one centralized node has the total
network information and takes the routing decisions. It finds the least-cost path between
source and destination nodes by using global knowledge about the network. So, it is also
known as global routing algorithm. The advantage of this routing is that only the central
node is required to store network information and so the resource requirement of the
other nodes may be less. However, routing performance is too much dependent upon the
central node. An example of centralized routing is link state routing algorithm.
​ Isolated algorithm − In this algorithm, the nodes make the routing decisions based upon
local information available to them instead of gathering information from other nodes.
They do not have information regarding the link status. While this helps in fast decision
making, the nodes may transmit data packets along congested network resulting in delay.
The examples of isolated routing are hot potato routing and backward learning.
​ Distributed algorithm − This is a decentralized algorithm where each node receives
information from its neighbouring nodes and takes the decision based upon the received
information. The least-cost path between source and destination is computed iteratively in
a distributed manner. An advantage is that each node can dynamically change routing
decisions based upon the changes in the network. However, on the flip side, delays may
be introduced due to time required to gather information. Example of distributed
algorithm is distance vector routing algorithm.
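The distributed (distance vector) idea can be sketched with the Bellman-Ford relaxation on a tiny hypothetical topology:

```python
# Each node repeatedly applies the Bellman-Ford relaxation:
#   D(x, y) = min over neighbours v of  cost(x, v) + D(v, y)
INF = float("inf")

links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 7}
nodes = ["A", "B", "C"]
cost = {(x, y): INF for x in nodes for y in nodes}
for x in nodes:
    cost[(x, x)] = 0
for (u, v), c in links.items():
    cost[(u, v)] = cost[(v, u)] = c

# dist[x][y] = x's current estimate of the least cost to reach y
dist = {x: {y: cost[(x, y)] for y in nodes} for x in nodes}

changed = True
while changed:                     # iterate until no estimate improves
    changed = False
    for x in nodes:
        for y in nodes:
            best = min(cost[(x, v)] + dist[v][y] for v in nodes)
            if best < dist[x][y]:
                dist[x][y] = best
                changed = True

print(dist["A"]["C"])   # 3: A -> B -> C beats the direct cost-7 link
```

In a real network each node only exchanges its vector with direct neighbours; the loop here plays the role of those repeated exchanges until the estimates converge.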
Non-adaptive routing algorithms, also known as static routing algorithms, do not change the
selected routing decisions for transferring data packets from the source to the destination. They
construct a static routing table in advance to determine the path through which packets are to be
sent.
The static routing table is constructed based upon the routing information stored in the routers
when the network is booted up. Once the static paths are available to all the routers, they transmit
the data packets along these paths. The changing network topology and traffic conditions do not
affect the routing decisions.
Types of Non − adaptive Routing Algorithms
​ Flooding − In flooding, when a data packet arrives at a router, it is sent to all the
outgoing links except the one it has arrived on. Flooding may be of three types−
​ Uncontrolled flooding − Here, each router unconditionally transmits the
incoming data packets to all its neighbours.
​ Controlled flooding − They use some methods to control the transmission of
packets to the neighbouring nodes. The two popular algorithms for controlled
flooding are Sequence Number Controlled Flooding (SNCF) and Reverse Path
Forwarding (RPF).
​ Selective flooding − Here, the routers transmit the incoming packets only along
those paths which are heading approximately in the right direction, instead of
along every available path.
​ Random walks (RW) − This is a probabilistic algorithm where a data packet is sent by a
router to any one of its neighbours randomly. The transmission path thereby formed is a
random walk. RW can explore the alternative routes very efficiently. RW is very simple
to implement, requires a small memory footprint, does not require topology information of
the network, and has an inherent load balancing property. RW is suitable for very small devices
and for dynamic networks.
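Sequence Number Controlled Flooding (SNCF), mentioned above, can be sketched on a small hypothetical graph; each router drops any (source, sequence number) pair it has already seen instead of re-flooding it:

```python
from collections import deque

def sncf_flood(graph, source):
    """SNCF sketch: forward to every neighbour except the inbound link,
    but drop packets whose (source, seq) pair was already seen."""
    seq = 0
    seen = {node: set() for node in graph}
    forwards = 0
    queue = deque([(source, None)])          # (node, arrived_from)
    seen[source].add((source, seq))
    while queue:
        node, came_from = queue.popleft()
        for nbr in graph[node]:
            if nbr == came_from:
                continue                     # never send back out the inbound link
            forwards += 1                    # one transmission on this link
            if (source, seq) in seen[nbr]:
                continue                     # duplicate: drop, don't re-flood
            seen[nbr].add((source, seq))
            queue.append((nbr, node))
    return forwards, all((source, seq) in s for s in seen.values())

graph = {"A": ["B", "C"], "B": ["A", "C", "D"],
         "C": ["A", "B", "D"], "D": ["B", "C"]}
transmissions, delivered_everywhere = sncf_flood(graph, "A")
print(transmissions, delivered_everywhere)
```

Without the duplicate check, packets would circulate in the A-B-C-D cycles forever; the sequence-number memory is what makes the flooding "controlled".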

Unit-4
Transport Layer: Transport Services, Connection Management using three-way handshake
principle, User Datagram Protocol (UDP), Transmission Control Protocol (TCP), SCTP,
Congestion Control Policies, QoS Techniques: Leaky Bucket and Token Bucket algorithm.
1) Transport layer services(notes)
2) Three way handshake process(notes)

3)User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet
Protocol suite, referred to as UDP/IP suite. Unlike TCP, it is an unreliable and connectionless
protocol. So, there is no need to establish a connection before data transfer. The UDP helps to
establish low-latency and loss-tolerating connections over the network. The UDP enables
process-to-process communication.
What is User Datagram Protocol?
User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite. It
is a communication protocol used across the internet for time-sensitive transmissions such as
video playback or DNS lookups. Unlike Transmission Control Protocol (TCP), UDP is
connectionless and does not guarantee delivery, order, or error checking, making it a lightweight
and efficient option for certain types of data transmission.
UDP Header
UDP header is an 8-byte fixed and simple header, while for TCP it may vary from 20 bytes to 60
bytes. The first 8 Bytes contain all necessary header information and the remaining part consists
of data. UDP port number fields are each 16 bits long, therefore the range for port numbers is
defined from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different
user requests or processes.

UDP Header
● Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
● Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
● Length: Length is the length of UDP including the header and the data. It is a 16-bit
field.
● Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from the
IP header, and the data, padded with zero octets at the end (if necessary) to make a
multiple of two octets.
Notes – Unlike TCP, the Checksum calculation is not mandatory in UDP. No error control or
flow control is provided by UDP. Hence UDP depends on IP and ICMP for error reporting. Also,
UDP provides port numbers so that it can differentiate between user requests.
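The checksum rule described above (pseudo-header from the IP header, zero padding to a multiple of two octets, one's complement arithmetic) can be sketched as follows; the helper names are illustrative:

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                 # pad to a multiple of two octets
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload):
    """RFC 768 checksum: one's complement of the one's complement sum over
    pseudo-header + UDP header (checksum field zeroed) + data."""
    length = 8 + len(payload)           # UDP header plus data
    pseudo = struct.pack("!4s4sBBH", socket.inet_aton(src_ip),
                         socket.inet_aton(dst_ip), 0, 17, length)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    csum = ~ones_complement_sum(pseudo + header + payload) & 0xFFFF
    return csum or 0xFFFF               # a transmitted 0 means "no checksum"

print(hex(udp_checksum("10.0.0.1", "10.0.0.2", 1024, 53, b"hi")))
```

The pseudo-header (source and destination IPs, protocol 17, UDP length) is summed but never transmitted; it lets the receiver catch datagrams delivered to the wrong host.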
Applications of UDP
● Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
● It is a suitable protocol for multicasting, as UDP supports broadcast and multicast
transmission.
● UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
● Normally used for real-time applications which can not tolerate uneven delays
between sections of a received message.
● VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use UDP
for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.
● DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
● DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the delay
caused by packet loss or retransmission is generally not critical for this application.
● Following implementations uses UDP as a transport layer protocol:
○ NTP (Network Time Protocol)
○ DNS (Domain Name Service)
○ BOOTP, DHCP.
○ NNP (Network News Protocol)
○ Quote of the day protocol
○ TFTP, RTSP, RIP.
● The application layer can do some of the tasks through UDP-
○ Trace Route
○ Record Route
○ Timestamp
● UDP takes a message from the application, attaches its header, and hands the
resulting datagram to the network layer. Because it does so little, it works fast.
TCP vs UDP
● Type of Service: TCP is a connection-oriented protocol; the communicating devices
establish a connection before transmitting data and close it after the data is transmitted.
UDP is a datagram-oriented protocol with no overhead for opening, maintaining, or
terminating a connection, which makes it efficient for broadcast and multicast types of
network transmission.
● Reliability: TCP is reliable, as it guarantees the delivery of data to the destination. The
delivery of data to the destination cannot be guaranteed in UDP.
● Error checking mechanism: TCP provides extensive error-checking mechanisms,
because it provides flow control and acknowledgment of data. UDP has only the basic
error-checking mechanism using checksums.
● Acknowledgment: In TCP, an acknowledgment segment is present. UDP has no
acknowledgment segment.
● Sequence: Sequencing of data is a feature of TCP, meaning packets arrive in order at
the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be
managed by the application layer.
● Speed: TCP is comparatively slower than UDP. UDP is faster, simpler, and more
efficient than TCP.
● Retransmission: Retransmission of lost packets is possible in TCP, but not in UDP.
● Header Length: TCP has a variable-length header of 20-60 bytes. UDP has a
fixed-length 8-byte header.
● Weight: TCP is heavy-weight; UDP is lightweight.
● Handshaking Techniques: TCP uses handshakes such as SYN, ACK, SYN-ACK. UDP
is a connectionless protocol with no handshake.
● Broadcasting: TCP doesn't support broadcasting; UDP supports broadcasting.
● Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP and Telnet. UDP is used by
DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
● Stream Type: A TCP connection is a byte stream; a UDP "connection" is a message
stream.
● Overhead: TCP overhead is low but higher than that of UDP. UDP overhead is very
low.
● Applications: TCP is primarily utilized in situations where a safe and trustworthy
communication procedure is necessary, such as email, web surfing, and military
services. UDP is used in situations where quick communication is necessary but
dependability is not a concern, such as VoIP, game streaming, and video and music
streaming.
Advantages of UDP
● Speed: UDP is faster than TCP because it does not have the overhead of establishing
a connection and ensuring reliable data delivery.
● Lower latency: Since there is no connection establishment, there is lower latency and
faster response time.
● Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
● Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
● Smaller packet size: UDP uses smaller packet sizes than TCP, which can reduce
network congestion and improve overall network performance.
● User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.

4)TCP
TCP stands for Transmission Control Protocol. TCP protocol provides transport layer services to
applications. TCP protocol is a connection-oriented protocol. A reliable connection is established
between the sender and the receiver. To create this connection, a virtual circuit is set up between
the sender and the receiver. The data transmitted by TCP protocol is in the form of continuous
byte streams. A unique sequence number is assigned to each byte. With the help of this number, a
positive acknowledgment is received from the receiver. If the acknowledgment is not received
within a specific period, the data is retransmitted to the specified destination.

TCP Segment
A TCP segment’s header may be 20–60 bytes long. It is 20 bytes by default, and options can take
up to a further 40 bytes.
● Source Port Address: The port address of the programme sending the data segment
is stored in the 16-bit field known as the source port address.
● Destination Port Address: The port address of the application running on the host
receiving the data segment is stored in the destination port address, a 16-bit field.
● Sequence Number: The sequence number, or the byte number of the first byte sent in
that specific segment, is stored in a 32-bit field. At the receiving end, it is used to put
the message back together once it has been received out of sequence.
● Acknowledgement Number : The acknowledgement number, or the byte number
that the recipient anticipates receiving next, is stored in a 32-bit field called the
acknowledgement number. It serves as a confirmation that the earlier bytes were
successfully received.
● Header Length (HLEN): This 4-bit field stores the number of 4-byte words in the
TCP header, indicating how long the header is. For example, if the header is 20 bytes
(the minimum length of the TCP header), this field will store 5 because 5 x 4 = 20,
and if the header is 60 bytes (the maximum length), it will store 15 because 15 x 4 =
60. As a result, this field’s value is always between 5 and 15.
● Control flags: These are six 1-bit control bits that regulate connection establishment,
termination, abortion, flow control and the mode of transfer. They serve the
following purposes:
○ URG: The urgent pointer is valid.
○ ACK: The acknowledgement number (used in cumulative
acknowledgement cases) is valid.
○ PSH: Push request.
○ RST: Reset the connection.
○ SYN: Synchronise sequence numbers.
○ FIN: Terminate the connection.
● Window size: This field provides the sender TCP’s window size in bytes.
● Checksum: The checksum for error control is stored in this field. Unlike UDP, it is
required for TCP.
● Urgent pointer: This field is used to point to data that must urgently reach the
receiving process as soon as possible. It is only valid if the URG control flag is set.
To obtain the byte number of the final urgent byte, the value of this field is appended
to the sequence number.
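A minimal sketch of packing these header fields (a bare 20-byte header with no options; the flag bit values follow the standard TCP layout, while the function name and port numbers are illustrative):

```python
import struct

# Bit positions of the six control flags within the 16-bit field that
# also carries the 4-bit header length (data offset).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def build_tcp_header(src_port, dst_port, seq, ack, flags, window):
    """Pack a minimal 20-byte TCP header (no options, checksum left 0).
    HLEN is 5 because 5 x 4-byte words = 20 bytes."""
    hlen = 5
    offset_flags = (hlen << 12) | flags
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, 0, 0)

# The first segment of a three-way handshake carries only the SYN flag.
syn = build_tcp_header(40000, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(len(syn))                                   # 20-byte minimum header
offset_flags = struct.unpack("!H", syn[12:14])[0]
print(offset_flags >> 12)                         # HLEN field = 5
print(bool(offset_flags & SYN))                   # SYN flag is set
```

Unpacking bytes 12-13 recovers both HLEN (upper 4 bits) and the flags (lower bits), mirroring how a receiver would locate the start of the data.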
Advantages of TCP
● TCP supports multiple routing protocols.
● TCP protocol operates independently of that of the operating system.
● TCP protocol provides the features of error control and flow control.
● TCP provides a connection-oriented protocol and provides the delivery of data.
Disadvantages of TCP
● TCP protocol cannot be used for broadcast or multicast transmission.
● TCP protocol has no block boundaries.
● No clear separation is being offered by TCP protocol between its interface, services,
and protocols.
● In TCP/IP replacement of protocol is difficult.
5) SCTP
SCTP stands for Stream Control Transmission Protocol. SCTP is a connection-oriented protocol.
Stream Control Transmission Protocol transmits the data from sender to receiver in full duplex
mode. SCTP is a unicast protocol that provides a point-to-point connection and uses different
hosts for reaching the destination. SCTP protocol provides a simpler way to build a connection
over a wireless network. SCTP protocol provides a reliable transmission of data. SCTP provides
a reliable and easier telephone conversation over the internet. SCTP protocol supports the feature
of multihoming, i.e., it can establish more than one connection path between the two points of
communication, so the association does not depend on a single IP address. SCTP protocol also
ensures security by not allowing half-open connections.

Advantages of SCTP
● SCTP provides a full duplex connection. It can send and receive the data
simultaneously.
● SCTP protocol possesses the properties of both TCP and UDP protocol.
● SCTP protocol does not depend on the IP layer.
● SCTP is a secure protocol.
Disadvantages of SCTP
● To handle multiple streams simultaneously the applications need to be modified
accordingly.
● The transport stack on the node needs to be changed for the SCTP protocol.
● Modification is required in applications if SCTP is used instead of TCP or UDP
protocol.
6) QoS Techniques:
When too many packets are present in the network it causes packet delay and loss of packet
which degrades the performance of the system. This situation is called congestion.
The network layer and transport layer share the responsibility for handling congestion. One
of the most effective ways to control congestion is to reduce the load that the transport layer is
placing on the network. To achieve this, the network and transport layers have to work together.
With too much traffic, performance drops sharply.
There are two types of Congestion control algorithms, which are as follows −
​ Leaky Bucket Algorithm
​ Token Bucket Algorithm
Leaky Bucket Algorithm
Let see the working condition of Leaky Bucket Algorithm −

Leaky Bucket Algorithm mainly controls the total amount and the rate of the traffic sent to the
network.
Step 1 − Let us imagine a bucket with a small hole at the bottom where the rate at which water is
poured into the bucket is not constant and can vary but it leaks from the bucket at a constant rate.
Step 2 − So, as long as water is present in the bucket, the rate at which the water leaks does not
depend on the rate at which the water is input to the bucket.
Step 3 − If the bucket is full, additional water that enters into the bucket that spills over the sides
and is lost.
Step 4 − The same concept is applied to packets in the network. Consider that data is coming
from the source at variable speeds. Suppose that a source sends data at 12 Mbps for 4 seconds
(48 Mb), then sends no data for 3 seconds, and then again transmits at 12 Mbps for 2 seconds
(24 Mb). Thus, in a time span of 9 seconds, 72 Mb of data has been transmitted.
If a leaky bucket algorithm is used, the data would flow out at a constant 8 Mbps for those 9
seconds (72 Mb / 9 s). Thus, a constant flow is maintained.
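The smoothing behaviour can be sketched with a per-second simulation (the function and parameter names are illustrative; data is counted in Mb per second):

```python
def leaky_bucket(arrivals, leak_rate, capacity):
    """Per-second simulation: data arrives at a variable rate but leaks
    out at a constant rate; anything beyond the capacity is lost."""
    level, sent, lost = 0, [], 0
    for arriving in arrivals:
        level += arriving
        if level > capacity:
            lost += level - capacity    # bucket overflow: data dropped
            level = capacity
        out = min(level, leak_rate)     # constant drain while data remains
        level -= out
        sent.append(out)
    return sent, lost

# 12 Mbps for 4 s, silence for 3 s, 12 Mbps for 2 s (72 Mb total),
# plus one extra quiet second to let the bucket drain completely.
arrivals = [12, 12, 12, 12, 0, 0, 0, 12, 12, 0]
sent, lost = leaky_bucket(arrivals, leak_rate=8, capacity=100)
print(sent)    # output never exceeds the 8 Mbps leak rate
print(lost)
```

All 72 Mb eventually leave the bucket, but never faster than 8 Mbps per second, which is exactly the smoothing the analogy describes.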

The Token Bucket Algorithm is diagrammatically represented as follows −

Token Bucket Algorithm
The leaky bucket algorithm enforces output at the average rate, no matter how bursty the traffic
is. So, to deal with bursty traffic, we need a more flexible algorithm so that the data is not lost.
One such approach is the token bucket algorithm.
Let us understand this algorithm step wise as given below −
​ Step 1 − At regular intervals, tokens are thrown into the bucket.
​ Step 2 − The bucket has a maximum capacity.
​ Step 3 − If the packet is ready, then a token is removed from the bucket, and the packet is
sent.
​ Step 4 − Suppose, if there is no token in the bucket, the packet cannot be sent.
Example
Let us understand the Token Bucket Algorithm with an example −
In figure (a) the bucket holds two tokens, and three packets are waiting to be sent out of the
interface.
In Figure (b) two packets have been sent out by consuming two tokens, and 1 packet is still left.
When compared to the leaky bucket, the token bucket algorithm is less restrictive, which means
it allows more bursty traffic. The limit of burstiness is restricted by the number of tokens
available in the bucket at a particular instant of time.
The implementation of the token bucket algorithm is easy − a variable is used to count the
tokens. The counter is incremented every t seconds and decremented whenever a packet is sent.
When the counter reaches zero, no further packets are sent out.
This is shown in the diagram below −
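The counter implementation described above can be sketched as follows (a toy per-second version; packets without tokens are simply dropped here, whereas a real shaper would queue them):

```python
def token_bucket(packets, rate, capacity):
    """Counter implementation: every second `rate` tokens are added,
    capped at `capacity`; sending one packet consumes one token."""
    tokens = capacity
    sent = []
    for arriving in packets:            # packets arriving each second
        tokens = min(capacity, tokens + rate)
        out = min(arriving, tokens)     # send only while tokens remain
        tokens -= out
        sent.append(out)
    return sent

# A 5-packet burst then a quiet period, with 1 token/s and capacity 3:
# the saved-up tokens let a burst of 3 go out at once.
print(token_bucket([5, 0, 0, 2], rate=1, capacity=3))   # → [3, 0, 0, 2]
```

The capacity bounds the burst size: at most `capacity` packets can ever be sent in one interval, no matter how long the line has been idle.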
The differences between the leaky and token bucket algorithms are:

1. The token bucket algorithm depends on tokens; the leaky bucket algorithm does not
depend on tokens.
2. In the token bucket, if the bucket is full, the token is discarded but not the packet; in
the leaky bucket, if the bucket is full, packets are discarded.
3. In the token bucket, packets can be transmitted only when there are enough tokens; in
the leaky bucket, packets are transmitted continuously at a constant rate.
4. The token bucket allows large bursts to be sent at a faster rate (the bucket has a
maximum capacity); the leaky bucket sends packets at a constant rate.
5. In the token bucket, the bucket holds tokens generated at regular intervals of time; in
the leaky bucket, when the host has to send a packet, the packet is thrown into the
bucket.
6. In the token bucket, if there is a ready packet, a token is removed from the bucket and
the packet is sent; the leaky bucket converts bursty traffic into uniform traffic.
7. In the token bucket, if there is no token in the bucket, the packet cannot be sent; in
practice, the leaky bucket is a finite queue that outputs at a finite rate.

Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories:
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it happens. The
congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –
1. Retransmission Policy :
This is the policy by which retransmission of packets is handled. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted, and this retransmission may increase the congestion in the network.
To prevent congestion, retransmission timers must be designed both to avoid congestion and to optimize efficiency.
2. Window Policy :
The type of window used at the sender's side may also affect congestion. In a Go-Back-N window, several packets are re-sent even though some of them may have been received successfully at the receiver side. This duplication may increase the congestion in the network and make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by routers is one in which a router may prevent congestion by partially discarding corrupted or less sensitive packets while still maintaining the quality of the message.
In the case of audio file transmission, for example, routers can discard less sensitive packets to prevent congestion while maintaining the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgments are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent acknowledgment-related congestion:
the receiver can send one acknowledgment for N packets rather than acknowledging each packet individually, or it can send an acknowledgment only when it has a packet to send or a timer expires.
5. Admission Policy :
In the admission policy, a mechanism is used to prevent congestion before it occurs: switches in a flow first check the resource requirements of a network flow before transmitting it further. If there is congestion in the network, or a risk of congestion, the router denies establishing the virtual-circuit connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and to reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the direction opposite to the data flow. The backpressure technique can be applied only to virtual circuits, where each node knows its upstream node.
In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may become congested because its output data flow slows down. Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet Technique :
Choke packet technique is applicable to both virtual circuits and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packet has traveled are not warned about the congestion.
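The per-output-line check described above can be sketched as follows. This is a toy illustration of the decision only, not a real router; the threshold value of 0.8 and the function name are assumptions made for the example:

```python
def check_output_line(utilization, threshold=0.8):
    """Toy version of a router's per-line check: when utilization of an
    output line exceeds the administrator-set threshold, a choke packet
    is sent directly to the source (here we just report the decision)."""
    if utilization > threshold:
        return "send choke packet to source"
    return "ok"

print(check_output_line(0.95))  # exceeds threshold -> choke the source
print(check_output_line(0.40))  # under threshold   -> no action needed
```

Note that only the source is notified; in this model the intermediate routers take no action, matching the description above.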
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the source. The source guesses that there is congestion somewhere in the network. For example, when a sender sends several packets and receives no acknowledgment for a while, one assumption is that the network is congested.
4. Explicit Signaling :
In explicit signaling, a node that experiences congestion can explicitly send a packet to the source or destination to inform it of the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in packets that already carry data, rather than in a separate packet created for the purpose, as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
● Forward Signaling : In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about congestion. The receiver in this case
adopt policies to prevent further congestion.
● Backward Signaling : In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and it needs to
slow down.
UNIT V
Application Layer: DNS, TELNET, E-MAIL, FTP, WWW, HTTP, SNMP, Bluetooth,
Firewalls.
1)DNS (Domain Name System)
Description: DNS is a hierarchical and decentralized naming system used to resolve
human-readable domain names (like www.example.com) into IP addresses that computers use to
identify each other on the network.
○ DNS stands for Domain Name System.
○ DNS is a directory service that provides a mapping between the name of a host on the
network and its numerical address.
○ DNS is required for the functioning of the internet.
○ Each node in the tree has a domain name, and a full domain name is a sequence of labels separated by dots.
○ DNS is a service that translates the domain name into IP addresses. This allows the users
of networks to utilize user-friendly names when looking for other hosts instead of
remembering the IP addresses.
○ For example, suppose the FTP site at EduSoft had the IP address 132.147.165.50; most people would reach this site by specifying ftp.EduSoft.com. The domain name is therefore easier to remember and use than the IP address.
DNS is a TCP/IP protocol used on different platforms. The domain name space is divided into
three different sections: generic domains, country domains, and inverse domain.
Advantages:
1. Simplifies User Access: Users can access websites using easy-to-remember domain
names instead of numeric IP addresses.
2. Decentralization: The hierarchical structure allows for distributed management and
redundancy.
3. Scalability: Can handle a vast number of domain names efficiently.
4. Flexibility: Supports various types of records (A, MX, CNAME, etc.) for different
purposes.
Disadvantages:
1. Security Vulnerabilities: Susceptible to attacks like DNS spoofing or cache poisoning.
2. Complexity: Managing DNS records can be complex, especially for large organizations.
3. Latency: DNS lookups can add latency to the initial connection time.
Applications:
● Translating domain names to IP addresses for web browsing, email, and other Internet
services.
● Load balancing by distributing traffic among multiple servers.
● Supporting CDN (Content Delivery Network) services by directing users to the nearest
server.
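The name-to-address mapping can be pictured as a cached lookup. The sketch below is a toy stub resolver, not a real DNS client: the `AUTHORITATIVE` table stands in for a recursive query to real name servers, and the entry reuses the illustrative ftp.EduSoft.com example from above:

```python
# Toy resolver: a local cache backed by a pretend authoritative table.
# The name/address pair is the illustrative example from the text,
# not a real DNS record.
AUTHORITATIVE = {"ftp.EduSoft.com": "132.147.165.50"}

cache = {}

def resolve(name):
    """Return the IP address for a name, consulting the local cache first
    (as a real stub resolver does) and falling back to the authoritative
    table, caching any answer for subsequent lookups."""
    if name in cache:
        return cache[name]
    ip = AUTHORITATIVE.get(name)      # stands in for a recursive DNS query
    if ip is not None:
        cache[name] = ip              # cache the answer for next time
    return ip

print(resolve("ftp.EduSoft.com"))    # 132.147.165.50
```

Caching is what makes repeated lookups cheap in practice; it is also the mechanism that cache-poisoning attacks (mentioned under Disadvantages) target.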
2)Telnet
Description: Telnet is a protocol that allows for remote access to another computer over a
network. It provides a command-line interface for communication with remote devices.
○ The main task of the internet is to provide services to users. For example, users may want to run different application programs at a remote site and transfer the results to the local site. One approach is a dedicated client-server program for each service, such as FTP or SMTP, but it is not practical to create a specific program for every demand.
○ The better solution is to provide a general client-server program that lets the user access any application program on a remote computer; that is, a program that allows a user to log on to a remote computer. The popular client-server program Telnet is used to meet such demands. Telnet is an abbreviation for Terminal Network.
○ Telnet provides a connection to the remote computer in such a way that a local terminal
appears to be at the remote side.
There are two types of login:
○ Local Login
○ When a user logs into a local computer, it is known as local login.
○ When the workstation runs a terminal emulator, the keystrokes entered by the user are accepted by the terminal driver. The terminal driver then passes these characters to the operating system, which in turn invokes the desired application program.
○ However, the operating system assigns special meaning to certain special characters. For example, in UNIX some character combinations have special meanings, such as the control character with "z", which means suspend. Such situations do not create any problem, as the terminal driver knows the meaning of such characters; but they can cause problems in remote login.
○ Remote login
○ When the user wants to access an application program on a remote computer, then
the user must perform remote login.
How remote login occurs
At the local site
The user sends the keystrokes to the terminal driver, and the characters are then sent to the TELNET client. The TELNET client, in turn, transforms the characters into a universal character set known as Network Virtual Terminal (NVT) characters and delivers them to the local TCP/IP stack.
At the remote site
The commands in NVT form are transmitted to the TCP/IP stack at the remote machine. Here, the characters are delivered to the operating system and then passed to the TELNET server. The TELNET server transforms the characters into a form the remote computer can understand. However, the characters cannot be passed directly to the operating system, because the remote operating system does not accept characters from the TELNET server; some piece of software is required that can accept the characters from the TELNET server. The operating system then passes these characters to the appropriate application program.
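The translation into NVT form can be illustrated with end-of-line handling: NVT represents end-of-line as the two-character sequence CR LF, regardless of what the local system uses. The sketch below assumes a Unix-style local end-of-line ("\n"); the function names are made up for the example:

```python
def to_nvt(text):
    """Translate locally-typed text to Network Virtual Terminal form:
    NVT represents end-of-line as the two-character sequence CR LF."""
    return text.replace("\n", "\r\n").encode("ascii")

def from_nvt(data):
    """Reverse translation at the receiving side, back to a Unix-style
    local end-of-line convention."""
    return data.decode("ascii").replace("\r\n", "\n")

wire = to_nvt("ls\n")
print(wire)              # b'ls\r\n' -- what travels over the connection
print(from_nvt(wire))    # back to the local form
```

Because both sides translate to and from the common NVT form, neither needs to know the other's local character conventions; that is the whole point of the NVT.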
Advantages:
1. Simplicity: Easy to set up and use for basic remote management.
2. Flexibility: Can be used on various operating systems and network devices.
3. Low Overhead: Minimal bandwidth usage due to text-based communication.
Disadvantages:
1. Lack of Security: Transmits data, including passwords, in plain text, making it
vulnerable to interception.
2. Limited Features: Basic compared to more modern protocols like SSH.
3. Compatibility Issues: Not all modern devices and systems support Telnet due to its
security limitations.
Applications:
● Remote management of servers and network devices.
● Troubleshooting network services and connectivity issues.
● Legacy systems and devices that do not support more secure protocols.
3)FTP
○ FTP stands for File transfer protocol.
○ FTP is a standard internet protocol provided by TCP/IP used for transmitting the files
from one host to another.
○ It is mainly used for transferring the web page files from their creator to the computer
that acts as a server for other computers on the internet.
○ It is also used for downloading the files to computer from other servers.
Objectives of FTP
○ It provides the sharing of files.
○ It is used to encourage the use of remote computers.
○ It transfers the data more reliably and efficiently.
Why FTP?
Although transferring files from one system to another seems simple and straightforward, it can cause problems. For example, two systems may have different file name conventions, different ways to represent text and data, or different directory structures. The FTP protocol overcomes these problems by establishing two connections between the hosts: one connection is used for data transfer, and the other is the control connection.
Mechanism of FTP
The above figure shows the basic model of the FTP. The FTP client has three components: the
user interface, control process, and data transfer process. The server has two components: the
server control process and the server data transfer process.
There are two types of connections in FTP:
○ Control Connection: The control connection uses very simple rules for communication.
Through control connection, we can transfer a line of command or line of response at a
time. The control connection is made between the control processes. The control
connection remains connected during the entire interactive FTP session.
○ Data Connection: The Data Connection uses very complex rules as data types may vary.
The data connection is made between data transfer processes. The data connection opens
when a command comes for transferring the files and closes when the file is transferred.
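The different lifetimes of the two connections can be sketched with a toy session object. This illustrates the model only, not a real FTP client; the class name and the logged command are made up for the example:

```python
class ToyFtpSession:
    """Sketch of FTP's two-connection model: one persistent control
    connection for command/response lines, and a data connection that
    is opened per transfer and closed when the file has been sent."""

    def __init__(self):
        self.control_open = True     # stays open for the whole session
        self.data_open = False       # opened only while transferring
        self.log = []

    def command(self, line):
        # One line of command at a time travels on the control connection.
        self.log.append(("control", line))

    def transfer(self, filename):
        self.data_open = True                # data connection opens...
        self.log.append(("data", filename))
        self.data_open = False               # ...and closes after the file

session = ToyFtpSession()
session.command("RETR report.txt")           # request a file retrieval
session.transfer("report.txt")               # file moves on data connection
print(session.control_open, session.data_open)   # True False
```

After the transfer, the control connection is still open (ready for the next command) while the data connection has already closed, mirroring the description above.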
FTP Clients
○ FTP client is a program that implements a file transfer protocol which allows you to
transfer files between two hosts on the internet.
○ It allows a user to connect to a remote host and upload or download the files.
○ It has a set of commands that we can use to connect to a host, transfer the files between
you and your host and close the connection.
○ The FTP program is also available as a built-in component in a web browser. This GUI-based FTP client makes file transfer very easy and does not require the user to remember the FTP commands.
Advantages of FTP:
○ Speed: One of the biggest advantages of FTP is speed. FTP is one of the fastest ways to transfer files from one computer to another.
○ Efficient: It is efficient, as we do not need to complete all the operations to retrieve the entire file.
○ Security: To access an FTP server, we need to log in with a username and password, which provides a basic level of access control.
○ Back-and-forth movement: FTP allows us to transfer files back and forth. Suppose you are a manager of a company: you send some information to all the employees, and they all send information back to the same server.
Disadvantages of FTP:
○ The standard requirement of the industry is that all FTP transmissions should be encrypted. However, not all FTP providers are equal, and not all of them offer encryption, so we have to look for FTP providers that do.
○ FTP serves two operations, i.e., sending and receiving large files on a network. However, many implementations limit the size of a file that can be sent to 2 GB, and FTP does not allow simultaneous transfers to multiple receivers.
○ Passwords and file contents are sent in clear text, which allows unwanted eavesdropping. It is therefore quite possible for attackers to carry out a brute-force attack by trying to guess the FTP password.
○ It is not compatible with every system.
4)SNMP
○ SNMP stands for Simple Network Management Protocol.
○ SNMP is a framework used for managing devices on the internet.
○ It provides a set of operations for monitoring and managing the internet.
SNMP Concept
○ SNMP has two components: Manager and agent.
○ The manager is a host that controls and monitors a set of agents such as routers.
○ It is an application layer protocol in which a few manager stations can handle a set of
agents.
○ The protocol designed at the application level can monitor the devices made by different
manufacturers and installed on different physical networks.
○ It is used in a heterogeneous network made of different LANs and WANs connected by
routers or gateways.
Managers & Agents
○ A manager is a host that runs the SNMP client program while the agent is a router that
runs the SNMP server program.
○ Management of the internet is achieved through simple interaction between a manager
and agent.
○ The agent is used to keep the information in a database while the manager is used to
access the values in the database. For example, a router can store the appropriate
variables such as a number of packets received and forwarded while the manager can
compare these variables to determine whether the router is congested or not.
○ Agents can also contribute to the management process. A server program on the agent
checks the environment, if something goes wrong, the agent sends a warning message to
the manager.
Management with SNMP has three basic ideas:
○ A manager checks the agent by requesting the information that reflects the behavior of
the agent.
○ A manager also forces the agent to perform a certain function by resetting values in the
agent database.
○ An agent also contributes to the management process by warning the manager regarding
an unusual condition.
Management Components
○ Management is not achieved only through the SNMP protocol but also the use of other
protocols that can cooperate with the SNMP protocol. Management is achieved through
the use of the other two protocols: SMI (Structure of management information) and
MIB(management information base).
○ Management is a combination of SMI, MIB, and SNMP. All three rely on supporting standards such as Abstract Syntax Notation One (ASN.1) and the Basic Encoding Rules (BER).
SMI
The SMI (Structure of management information) is a component used in network management.
Its main function is to define the type of data that can be stored in an object and to show how to
encode the data for the transmission over a network.
MIB
○ The MIB (Management information base) is a second component for the network
management.
○ Each agent has its own MIB, which is a collection of all the objects that the manager can
manage. MIB is categorized into eight groups: system, interface, address translation, ip,
icmp, tcp, udp, and egp. These groups are under the mib object.
SNMP
SNMP defines five types of messages: GetRequest, GetNextRequest, SetRequest, GetResponse,
and Trap.
GetRequest: The GetRequest message is sent from a manager (client) to the agent (server) to
retrieve the value of a variable.
GetNextRequest: The GetNextRequest message is sent from the manager to the agent to retrieve the value of the variable that follows a given object. This type of message is used to retrieve the values of the entries in a table: if the manager does not know the indexes of the entries, it cannot retrieve the values directly, so it uses GetNextRequest to walk from one object to the next.
GetResponse: The GetResponse message is sent from an agent to the manager in response to the
GetRequest and GetNextRequest message. This message contains the value of a variable
requested by the manager.
SetRequest: The SetRequest message is sent from a manager to the agent to set a value in a
variable.
Trap: The Trap message is sent from an agent to the manager to report an event. For example, if
the agent is rebooted, then it informs the manager as well as sends the time of rebooting.
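The manager-agent exchange can be sketched with a toy agent database. This is an illustration of the message semantics only, not real SNMP (no OIDs, no BER encoding); the variable names and values are made up, echoing the packets-received example above:

```python
# Toy agent MIB: variable names and values are illustrative,
# not real SNMP object identifiers.
MIB = {"packets_received": 1200, "packets_forwarded": 1150}

def get_request(variable):
    """Manager -> agent GetRequest; the agent answers with a GetResponse
    carrying the requested variable's value."""
    return ("GetResponse", MIB.get(variable))

def set_request(variable, value):
    """Manager -> agent SetRequest: store a value in the agent's database;
    the agent confirms with a GetResponse."""
    MIB[variable] = value
    return ("GetResponse", value)

print(get_request("packets_received"))   # ('GetResponse', 1200)
set_request("packets_received", 0)       # manager resets the counter
print(get_request("packets_received"))   # ('GetResponse', 0)
```

The manager could compare `packets_received` and `packets_forwarded` to decide whether the router is congested, as described above; a Trap would be the agent's unsolicited message in the other direction.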
5)HTTP
HTTP stands for HyperText Transfer Protocol.
○ It is a protocol used to access the data on the World Wide Web (www).
○ The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
○ This protocol is known as HyperText Transfer Protocol because it is efficient in a hypertext environment, where there are rapid jumps from one document to another.
○ HTTP is similar to the FTP as it also transfers the files from one host to another host. But,
HTTP is simpler than FTP as HTTP uses only one connection, i.e., no control connection
to transfer the files.
○ HTTP is used to carry the data in the form of MIME-like format.
○ HTTP is similar to SMTP as the data is transferred between client and server. The HTTP
differs from the SMTP in the way the messages are sent from the client to the server and
from server to the client. SMTP messages are stored and forwarded while HTTP
messages are delivered immediately.
Features of HTTP:
○ Connectionless protocol: HTTP is a connectionless protocol. The HTTP client initiates a request and waits for a response from the server. When the server receives the request, it processes it and sends the response back to the client, after which the client disconnects. The connection between client and server exists only for the duration of the current request and response.
○ Media independent: HTTP protocol is a media independent as data can be sent as long
as both the client and server know how to handle the data content. It is required for both
the client and server to specify the content type in MIME-type header.
○ Stateless: HTTP is a stateless protocol as both the client and server know each other only
during the current request. Due to this nature of the protocol, both the client and server do
not retain the information between various requests of the web pages.
HTTP Transactions
The above figure shows the HTTP transaction between client and server. The client initiates a
transaction by sending a request message to the server. The server replies to the request message
by sending a response message.
Messages
HTTP messages are of two types: request and response. Both the message types follow the same
message format.
Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
Response Message: The response message is sent by the server to the client that consists of a
status line, headers, and sometimes a body.
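The two message formats can be illustrated with minimal raw messages; the host name, path, headers, and body below are illustrative:

```python
# A minimal HTTP request message: request line, headers, blank line.
# Lines in an HTTP message are terminated by CR LF.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"                          # blank line ends the headers
)

# A minimal HTTP response message: status line, headers, blank line, body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>hello</html>"
)

status_line = response.split("\r\n", 1)[0]
print(status_line)                  # HTTP/1.1 200 OK
```

Both messages share the same overall shape (start line, headers, optional body), which is why the text says the two types follow the same message format.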
Uniform Resource Locator (URL)
○ A client that wants to access a document on the internet needs an address. To facilitate access to documents, HTTP uses the concept of the Uniform Resource Locator (URL).
○ The Uniform Resource Locator (URL) is a standard way of specifying any kind of
information on the internet.
○ The URL defines four parts: method, host computer, port, and path.
○ Method: The method is the protocol used to retrieve the document from a server. For
example, HTTP.
○ Host: The host is the computer where the information is stored, and the computer is
given an alias name. Web pages are mainly stored in the computers and the computers are
given an alias name that begins with the characters "www". This field is not mandatory.
○ Port: The URL can also contain the port number of the server, but it's an optional field. If
the port number is included, then it must come between the host and path and it should be
separated from the host by a colon.
○ Path: The path is the pathname of the file where the information is stored. The path contains slashes that separate the directories from the subdirectories and files.
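The four parts named above can be extracted with Python's standard `urllib.parse` module; the URL itself is illustrative:

```python
from urllib.parse import urlsplit

# Split an example URL into the parts described in the text.
parts = urlsplit("http://www.example.com:8080/docs/ch1/page.html")

print(parts.scheme)    # http  (the "method", i.e., the protocol)
print(parts.hostname)  # www.example.com  (the host)
print(parts.port)      # 8080  (optional; separated from the host by a colon)
print(parts.path)      # /docs/ch1/page.html
```

If the port is omitted from the URL, `parts.port` is simply `None` and the protocol's default port is used.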
The World Wide Web (WWW), often called the Web, is a system of interconnected webpages
and information that you can access using the Internet. It was created to help people share and
find information easily, using links that connect different pages together. The Web allows us to
browse websites, watch videos, shop online, and connect with others around the world through
our computers and phones.
All public websites or web pages that people may access on their local computers and other
devices through the internet are collectively known as the World Wide Web or W3. Users can get
further information by navigating to links interconnecting these pages and documents. This data
may be presented in text, picture, audio, or video formats on the internet.
What is WWW?
WWW stands for World Wide Web and is commonly known as the Web. The WWW was started at CERN in 1989. The WWW is defined as the collection of different websites around the world, containing different information shared via local servers (or computers).
Web pages are linked together using hyperlinks, which are HTML-formatted and also referred to as hypertext; these are the fundamental units of the Web and are accessed through the Hypertext Transfer Protocol (HTTP). Such digital connections, or links, allow users to easily access desired information by connecting relevant pieces of information. The benefit of hypertext is that it allows you to pick a word or phrase from the text and click through to other pages that have more information about it.
History of the WWW
The Web is a project created by Tim Berners-Lee in 1989 so that researchers could work together effectively at CERN. An organization named the World Wide Web Consortium (W3C) was later formed for the further development of the web; it is directed by Tim Berners-Lee, also known as the father of the Web. CERN, where Berners-Lee worked, is a community of more than 1700 researchers from more than 100 countries. These researchers spend a little of their time at CERN and the rest at their colleges and national research facilities in their home countries, so there was a requirement for solid communication so that they could exchange data.
System Architecture
From the user’s point of view, the web consists of a vast, worldwide connection of documents or
web pages. Each page may contain links to other pages anywhere in the world. The pages can be
retrieved and viewed by using browsers, of which Internet Explorer, Netscape Navigator, Google Chrome, etc. are popular ones. The browser fetches the requested page, interprets the text and formatting commands on it, and displays the page, properly formatted, on the screen.
The basic model of how the web works are shown in the figure below. Here the browser is
displaying a web page on the client machine. When the user clicks on a line of text that is linked
to a page on the abd.com server, the browser follows the hyperlink by sending a message to the
abd.com server asking it for the page.
Working of WWW
A Web browser is used to access web pages. Web browsers can be defined as programs which
display text, data, pictures, animation and video on the Internet. Hyperlinked resources on the
World Wide Web can be accessed using software interfaces provided by Web browsers. Initially,
Web browsers were used only for surfing the Web but now they have become more universal.
The diagram below indicates how the Web operates, following the client-server architecture of the internet: when a user requests a web page or other information, the web browser sends a request to the web server, the server provides the requested service back to the browser, and finally the result is presented to the user who made the request.
World Wide Web
Web browsers can be used for several tasks including conducting searches, mailing, transferring
files, and much more. Some of the commonly used browsers are Internet Explorer, Opera Mini,
and Google Chrome.
Features of WWW
● WWW is open source.
● It is a distributed system spread across various websites.
● It is a Hypertext Information System.
● It is Cross-Platform.
● Uses Web Browsers to provide a single interface for many services.
● Dynamic, Interactive and Evolving.
Components of the Web
There are 3 components of the web:
● Uniform Resource Locator (URL): URL serves as a system for resources on the
web.
● Hyper Text Transfer Protocol (HTTP): HTTP specifies communication of browser
and server.
● Hyper Text Markup Language (HTML): HTML defines the structure, organisation
and content of a web page.
Email protocols
Email protocols are a collection of protocols used to send and receive email properly. They give a client the ability to transmit mail to or from the intended mail server, and they establish communication between the sender and receiver for the transmission of email. Email forwarding involves the two computers sending and receiving email and the mail server. There are three basic types of email protocols.
Types of Email Protocols:
Three basic types of email protocols involved for sending and receiving mails are:
● SMTP
● POP3
● IMAP
SMTP (Simple Mail Transfer Protocol):
Simple Mail Transfer Protocol is used to send mail over the internet. SMTP is an application layer, connection-oriented protocol. SMTP is efficient and reliable for sending email, and it uses TCP as the transport layer protocol. It handles the sending and receiving of messages between email servers over a TCP/IP network, and along with sending email it also provides notification of incoming mail. When a sender sends an email, the sender's mail client sends it to the sender's mail server, and from there it is sent to the receiver's mail server through SMTP. SMTP commands identify the sender and receiver email addresses along with the message to be sent.
Some of the SMTP commands are HELO, MAIL FROM, RCPT TO, DATA, QUIT, VRFY, and SIZE. SMTP returns an error message if the mail is not delivered to the receiver; hence it is a reliable protocol.
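The command sequence in a typical sending session can be sketched as a dialogue. The addresses are illustrative; the reply codes shown are the usual success codes for each command (250 for accepted commands, 354 to start mail input, 221 on close):

```python
# Sketch of an SMTP command dialogue. Each tuple is
# (client command, typical server success reply code).
dialogue = [
    ("HELO client.example.com", 250),          # identify the client
    ("MAIL FROM:<alice@example.com>", 250),    # sender address
    ("RCPT TO:<bob@example.org>", 250),        # receiver address
    ("DATA", 354),                             # server: start mail input
    ("QUIT", 221),                             # close the connection
]

for command, reply in dialogue:
    print(f"C: {command}  ->  S: {reply}")
```

A reply code outside the success range (for instance a 5xx code after RCPT TO) is how SMTP reports that the mail cannot be delivered, which is the error reporting the text refers to.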
POP (Post Office Protocol):
Post Office Protocol is used to retrieve email for a single client. POP3 is the version of POP currently in use. It is an application layer protocol. It allows mail to be accessed offline and thus needs less internet time; to access a message, it has to be downloaded. POP allows only a single mailbox to be created on the mail server and does not provide search facilities.
Some of the POP3 commands are USER, PASS, STAT, LIST, RETR, DELE, RSET, and QUIT.
IMAP (Internet Message Access Protocol):
Internet Message Access Protocol is used to retrieve mail for multiple clients. There are several IMAP versions: IMAP, IMAP2, IMAP3, IMAP4, etc. IMAP is an application layer protocol. IMAP allows email to be accessed without downloading it and also supports download; the emails are maintained by the remote server. It enables all email operations, such as creating and manipulating mailboxes and deleting an email without reading it. IMAP allows you to search emails, allows multiple mailboxes to be created on multiple mail servers, and allows concurrent access. Some of the IMAP commands are: LOGIN, CREATE, DELETE, RENAME, SELECT, EXAMINE, and LOGOUT.
MIME (Multipurpose Internet Mail Extension Protocol):
Multipurpose Internet Mail Extensions is an additional email protocol that allows non-ASCII data to be sent through SMTP. It allows users to send and receive different types of data, such as audio, images, video, and other application data, over the Internet. It allows multiple attachments in a single message, and messages of unlimited length.
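Python's standard `email` library can build such a MIME message. The addresses, subject, and attachment bytes below are illustrative; the non-ASCII subject and the binary attachment are exactly the kinds of content MIME exists to carry over SMTP:

```python
from email.message import EmailMessage

# Build a MIME message carrying non-ASCII text plus a binary attachment.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Grüße"                    # non-ASCII, handled by MIME
msg.set_content("Hello, see the attachment.")

# Attaching bytes turns the message into a multipart/mixed container.
msg.add_attachment(b"\x89PNG fake image bytes", maintype="image",
                   subtype="png", filename="chart.png")

print(msg.get_content_type())   # multipart/mixed
```

When the message is serialized for SMTP, the library encodes the binary part (e.g., with base64) so that only ASCII travels on the wire, which is how MIME fits non-ASCII data through SMTP.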
Bluetooth is used for short-range wireless voice and data communication. It is a Wireless Personal Area Network (WPAN) technology used for data communication over small distances. The technology was invented by Ericsson in 1994. It operates in the unlicensed industrial, scientific, and medical (ISM) band from 2.4 GHz to 2.485 GHz. Bluetooth's range is up to 10 meters. Depending on the version, it provides data rates of 1 Mbps or 3 Mbps. The spreading method it uses is FHSS (frequency-hopping spread spectrum). A Bluetooth network is called a piconet, and a group of interconnected piconets is called a scatternet.
What is Bluetooth?
Bluetooth is a wireless technology that lets devices like phones, tablets, and headphones connect
to each other and share information without needing cables. Bluetooth simply follows the
principle of transmitting and receiving data using radio waves. A device can be paired with another device that also has Bluetooth, provided it is within communication range. When two devices start to share data, they form a network called a piconet, which can accommodate further devices.
Key Features of Bluetooth
● The transmission capacity of Bluetooth is 720 kbps.
● Bluetooth is a wireless device.
● Bluetooth is a Low-cost and short-distance radio communications standard.
● Bluetooth is robust and flexible.
● The basic architecture unit of Bluetooth is a piconet.
Architecture of Bluetooth
The architecture of Bluetooth defines two types of networks:
Piconet
A piconet is a type of Bluetooth network that contains one primary node, called the master node,
and up to seven active secondary nodes, called slave nodes. Thus there can be a total of 8 active
nodes, all within a distance of about 10 meters. Communication between the primary and
secondary nodes can be one-to-one or one-to-many. Communication is possible only between the
master and a slave; slave-to-slave communication is not possible. A piconet can also have up to
255 parked nodes; these are secondary nodes that cannot participate in communication until they
are converted to the active state.
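The limits of 7 active slaves and 255 parked nodes follow from the widths of Bluetooth's member address fields; a quick arithmetic sketch (the field names AM_ADDR and PM_ADDR come from the Bluetooth baseband specification):

```python
# Piconet limits derived from Bluetooth's member address fields:
# a 3-bit Active Member Address (AM_ADDR), with the all-zero value
# reserved for broadcast, and an 8-bit Parked Member Address (PM_ADDR).
active_addr_bits = 3
parked_addr_bits = 8

max_active_slaves = 2 ** active_addr_bits - 1   # 7 slaves (one code reserved)
max_active_nodes = max_active_slaves + 1        # 8 total, including the master
max_parked_nodes = 2 ** parked_addr_bits - 1    # 255 parked nodes

print(max_active_slaves, max_active_nodes, max_parked_nodes)  # 7 8 255
```

This is why the text can speak of "8 active nodes" and "255 parked nodes": both numbers are consequences of the address field sizes, not arbitrary design choices.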
Scatternet
A scatternet is formed by combining various piconets. A slave in one piconet can act as the
master (primary) in another piconet. Such a node can receive a message from the master in one
piconet and deliver it to its slaves in the other piconet where it acts as master. This type of node
is referred to as a bridge node. A station cannot be the master of two piconets.
Bluetooth Architecture
Bluetooth Protocol Stack
● Radio (RF) Layer: It specifies the details of the air interface, including frequency,
the use of frequency hopping and transmit power. It performs
modulation/demodulation of the data into RF signals. It defines the physical
characteristics of Bluetooth transceivers. It defines two types of physical links:
connection-less and connection-oriented.
● Baseband Link Layer: The baseband is the digital engine of a Bluetooth system and
is equivalent to the MAC sublayer in LANs. It performs the connection
establishment within a piconet, addressing, packet format, timing and power control.
● Link Manager Protocol Layer: It performs the management of the already
established links which includes authentication and encryption processes. It is
responsible for creating the links, monitoring their health, and terminating them
gracefully upon command or failure.
● Logical Link Control and Adaptation Protocol (L2CAP) Layer: It is also known as
the heart of the Bluetooth protocol stack. It enables communication between the
upper and lower layers of the Bluetooth protocol stack, packages the data packets
received from upper layers into the form expected by lower layers, and also performs
segmentation and multiplexing.
● Service Discovery Protocol (SDP) Layer: It allows a device to discover the services
available on another Bluetooth-enabled device.
● RFCOMM Layer: It is a cable replacement protocol; RFCOMM is short for Radio
Frequency Communication. It provides a serial interface to WAP and OBEX by
emulating serial ports over the Logical Link Control and Adaptation Protocol
(L2CAP). The protocol is based on the ETSI standard TS 07.10.
● OBEX: It is short for Object Exchange. It is a communication protocol for
exchanging objects between two devices.
● WAP: It is short for Wireless Application Protocol. It is used for Internet access.
● TCS: It is short for Telephony Control Protocol. It provides telephony service. The
basic function of this layer is call control (setup & release) and group management for
the gateway serving multiple devices.
● Application Layer: It enables the user to interact with the application.

Bluetooth Protocol Stack


Types of Bluetooth
Various types of Bluetooth are available in the market nowadays. Let us look at them.
● In-Car Headset: One can make calls from the car speaker system without the use of
mobile phones.
● Stereo Headset: To listen to music in car or in music players at home.
● Webcam: One can link the camera with the help of Bluetooth with their laptop or
phone.
● Bluetooth-Equipped Printer: The printer can be used when connected via Bluetooth
to a mobile phone or laptop.
● Bluetooth Global Positioning System (GPS): To use Global Positioning System
(GPS) in cars, one can connect their phone with car system via Bluetooth to fetch the
directions of the address.
Applications of Bluetooth
● It can be used in wireless headsets, wireless PANs, and LANs.
● It can connect a digital camera wireless to a mobile phone.
● It can transfer data in terms of videos, songs, photographs, or files from one cell
phone to another cell phone or computer.
● It is used in the sectors of Medical healthcare, sports and fitness, Military.
Advantages
● It is a low-cost and easy-to-use device.
● It can also penetrate through walls.
● It creates an Ad-hoc connection immediately without any wires.
● It is used for voice and data transfer.
Disadvantages
● It can be hacked and hence, less secure.
● It has a relatively slow data transfer rate (up to 3 Mbps).
● Bluetooth communication does not support routing.
In the world of computer networks, a firewall acts like a security guard. Its job is to watch over
the flow of information between your computer or network and the internet. It’s designed to
block unauthorized access while allowing safe data to pass through.
Essentially, a firewall helps keep your digital world safe from unwanted visitors and potential
threats, making it an essential part of today’s connected environment. It monitors both incoming
and outgoing traffic using a predefined set of security rules to detect and prevent threats.
What is Firewall?
A firewall is a network security device, either hardware or software-based, which monitors all
incoming and outgoing traffic and based on a defined set of security rules accepts, rejects, or
drops that specific traffic.
● Accept: allow the traffic
● Reject: block the traffic but reply with an “unreachable error”
● Drop: block the traffic with no reply
A firewall is a type of network security device that filters incoming and outgoing network traffic
with security policies that have previously been set up inside an organization. A firewall is
essentially the wall that separates a private internal network from the open Internet at its very
basic level.

History and Need For Firewall


Before firewalls, network security was performed by Access Control Lists (ACLs) residing on
routers. ACLs are rules that determine whether network access should be granted or denied to
specific IP addresses. But an ACL cannot determine the nature of the packet it is blocking, and
ACLs alone do not have the capacity to keep threats out of the network. Hence, the firewall was
introduced. Connectivity to the Internet is no longer optional for organizations. However, while
Internet access provides benefits to the organization, it also enables the outside world to interact
with the organization's internal network. This creates a threat to the organization. In order to
secure the internal network from unauthorized traffic, we need a firewall.
Working of Firewall
A firewall matches network traffic against the rule set defined in its table. Once a rule is
matched, the associated action is applied to the traffic. For example, one rule may state that no
employee from the Human Resources department can access data from the code server, while
another rule states that the system administrator can access data from both the Human Resources
and the technical department. Rules can be defined on the firewall based on the needs and
security policies of the organization. From the perspective of a server, network traffic can be
either outgoing or incoming.
The firewall maintains a distinct set of rules for each case. Most outgoing traffic, originating
from the server itself, is allowed to pass. Still, setting rules on outgoing traffic is good practice in
order to achieve more security and prevent unwanted communication. Incoming traffic is treated
differently. Most traffic that reaches the firewall uses one of three major protocols: TCP, UDP,
or ICMP. All three carry a source address and a destination address. TCP and UDP also carry
port numbers; ICMP instead uses a type code that identifies the purpose of the packet.
Default policy: It is very difficult to explicitly cover every possible rule on the firewall. For this
reason, the firewall must always have a default policy, which consists only of an action (accept,
reject, or drop). Suppose no rule is defined about SSH connections to the server. Then the
firewall follows the default policy. If the default policy is set to accept, then any computer
outside your office can establish an SSH connection to the server. Therefore, setting the default
policy to drop (or reject) is always good practice.
Types of Firewall
Firewalls can be categorized based on their generation.
1. Packet Filtering Firewall
A packet-filtering firewall controls network access by monitoring outgoing and incoming
packets and allowing them to pass or be dropped based on source and destination IP addresses,
protocols, and ports. It analyses traffic at the transport layer (but mainly uses the first three
layers). Packet-filtering firewalls treat each packet in isolation; they have no ability to tell
whether a packet is part of an existing stream of traffic and can only allow or deny packets based
on individual packet headers.
Packet filtering firewall maintains a filtering table that decides whether the packet will be
forwarded or discarded. From the given filtering table, the packets will be filtered according to
the following rules:
● Incoming packets from network 192.168.21.0 are blocked.
● Incoming packets destined for the internal TELNET server (port 23) are blocked.
● Incoming packets destined for host 192.168.21.3 are blocked.
● All well-known services to the network 192.168.21.0 are allowed.
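The filtering rules above can be sketched as a first-match function with a default policy; the rule encoding, function name, and the assumption that "well-known services" means ports below 1024 are illustrative, not any real firewall's syntax.

```python
import ipaddress

# Toy packet filter implementing the four rules listed above.
# Rules are checked in order; the first match decides, and anything
# left over hits the default policy (drop/discard).
INTERNAL_NET = ipaddress.ip_network("192.168.21.0/24")
BLOCKED_HOST = ipaddress.ip_address("192.168.21.3")

def filter_incoming(src, dst, dport):
    src = ipaddress.ip_address(src)
    dst = ipaddress.ip_address(dst)
    if src in INTERNAL_NET:
        return "discard"      # rule 1: blocked source network
    if dport == 23:
        return "discard"      # rule 2: inbound TELNET blocked
    if dst == BLOCKED_HOST:
        return "discard"      # rule 3: blocked destination host
    if dst in INTERNAL_NET and dport < 1024:
        return "forward"      # rule 4: well-known services allowed
    return "discard"          # default policy

print(filter_incoming("10.0.0.5", "192.168.21.7", 80))   # forward (rule 4)
print(filter_incoming("10.0.0.5", "192.168.21.7", 23))   # discard (rule 2)
print(filter_incoming("10.0.0.5", "192.168.21.3", 80))   # discard (rule 3)
```

Note that each decision looks only at the headers of a single packet; no history is kept, which is exactly the limitation that stateful inspection addresses.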
2. Stateful Inspection Firewall
Stateful firewalls (which perform stateful packet inspection) are able to determine the connection
state of a packet, unlike packet-filtering firewalls, which makes them more efficient. They keep
track of the state of network connections travelling across them, such as TCP streams, so
filtering decisions are based not only on the defined rules but also on the packet’s history in the
state table.
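The state-table idea can be sketched as follows; the 4-tuple representation and the function names are illustrative assumptions, and a real stateful firewall would also track TCP flags, sequence numbers, and timeouts.

```python
# Toy state table for stateful inspection: an outbound connection adds
# its 4-tuple to the table, and an inbound packet is accepted only if
# the reversed 4-tuple is already known, i.e. it is a reply belonging
# to an established flow.
state_table = set()

def record_outbound(src, sport, dst, dport):
    # Remember the flow when a host inside initiates a connection.
    state_table.add((src, sport, dst, dport))

def inbound_allowed(src, sport, dst, dport):
    # The reply travels in the opposite direction, so reverse the tuple.
    return (dst, dport, src, sport) in state_table

record_outbound("10.0.0.5", 51000, "93.184.216.34", 443)
print(inbound_allowed("93.184.216.34", 443, "10.0.0.5", 51000))  # True
print(inbound_allowed("93.184.216.34", 443, "10.0.0.5", 51001))  # False
```

Unlike the stateless filter, the decision here depends on what the firewall has seen before, not just on the headers of the packet in hand.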
3. Software Firewall
A software firewall is any firewall that is set up locally or on a cloud server. Software firewalls
can be the most advantageous when it comes to controlling the inflow and outflow of data
packets and limiting the number of networks that can be linked to a single device. Their
drawback is that they can be time-consuming to configure and maintain.
4. Hardware Firewall
Hardware firewalls also go by the name “physical-appliance-based firewalls.” A hardware
firewall ensures that malicious data is halted before it reaches the network endpoint that is in
danger.
5. Application Layer Firewall
An application-layer firewall can inspect and filter packets at any OSI layer, up to the
application layer. It has the ability to block specific content and to recognize when certain
applications and protocols (like HTTP and FTP) are being misused. In other words,
application-layer firewalls are hosts that run proxy servers. A proxy firewall prevents a direct
connection between either side of the firewall; each packet has to pass through the proxy.
6. Next Generation Firewalls (NGFW)
An NGFW offers deep packet inspection, application inspection, SSL/SSH inspection, and many
other functionalities to protect the network from modern threats.
7. Proxy Service Firewall
This kind of firewall filters communications at the application layer, and protects the network. A
proxy firewall acts as a gateway between two networks for a particular application.
8. Circuit Level Gateway Firewall
This firewall works at the session layer of the OSI model. It allows for the simultaneous setup of
two Transmission Control Protocol (TCP) connections and can let data packets flow without
using much computing power. These firewalls are weaker because they do not inspect data
packets; if malware is found in a data packet, they will permit it to pass, provided the TCP
connection is set up properly.
Functions of Firewall
● Every piece of data that enters or leaves a computer network must go via the firewall.
● If the data packets are safely routed via the firewall, all of the important data remains
intact.
● A firewall logs each data packet that passes through it, enabling the user to keep track
of all network activities.
● Since the data is stored safely inside the data packets, it cannot be altered.
● Every attempt for access to our operating system is examined by our firewall, which
also blocks traffic from unidentified or undesired sources.
Who Invented Firewalls?
The firewall keeps changing and getting better because different people have been working on it
since the late 1980s to the mid-90s. Each person added new parts and improved versions of the
firewall before it became what we use in modern times. This means the firewall is always
evolving to become more effective and secure.
Jeff Mogul, Paul Vixie, and Brian Reid
In the late 1980s, Mogul, Reid, and Vixie worked at Digital Equipment Corp (DEC) on
packet-filtering technology. This tech became important for future firewalls. They started the
idea of checking external connections before they reach computers on an internal network. Some
people think this packet filter was the first firewall, but it was really a part of the technology that
later became true firewall systems.
Kshitiji Nigam, William Cheswick, David Presotto, Steven Bellovin, and Janardan Sharma
In the late 1980s to early 1990s, researchers at AT&T Bell Labs worked on a new type of
firewall called the circuit-level gateway. Unlike earlier methods, this firewall didn’t need to
reauthorize connections for each data packet but instead vetted and allowed ongoing
connections. From 1989 to 1990, Presotto, Sharma, and Nigam developed this technology, and in
1991, Cheswick and Bellovin continued to advance firewall technology based on their work.
Marcus Ranum
From 1991 to 1992, Ranum introduced security proxies at DEC, which became a crucial part of
the first application-layer firewall product. Known as the Secure External Access Link (SEAL)
product, it was based on earlier work by Reid, Vixie, and Mogul at DEC. SEAL marked the first
commercially available firewall, pioneering the way for enhanced network security through
application-level protection.
Gil Shwed and Nir Zuk
From 1993 to 1994, at Check Point, Gil Shwed and developer Nir Zuk made major contributions
to creating the first widely-used and easy-to-use firewall product called Firewall-1. Gil Shwed
pioneered stateful inspection technology, filing a U.S. patent in 1993. Following this, Nir Zuk
developed a user-friendly graphical interface for Firewall-1 in 1994. These innovations were
pivotal in making firewalls accessible and popular among businesses and homes, shaping their
adoption for years to come.
Importance of Firewalls
So, what does a firewall do and why is it important? Without protection, networks are vulnerable
to any traffic trying to access your systems, whether it’s harmful or not. That’s why it’s crucial to
check all network traffic.
When you connect personal computers to other IT systems or the internet, it opens up many
benefits like collaboration, resource sharing, and creativity. But it also exposes your network and
devices to risks like hacking, identity theft, malware, and online fraud.
Once a malicious person finds your network, they can easily access and threaten it, especially
with constant internet connections.
Using a firewall is essential for proactive protection against these risks. It helps users shield their
networks from the worst dangers.
What Does Firewall Security Do?
A firewall serves as a security barrier for a network, narrowing the attack surface to a single
point of contact. Instead of every device on a network being exposed to the internet, all traffic
must first go through the firewall. This way, the firewall can filter and block non-permitted
traffic, whether it’s coming in or going out. Additionally, firewalls help create a record of
attempted connections, improving security awareness.
What Can Firewalls Protect Against?
● Infiltration by Malicious Actors: Firewalls can block suspicious connections,
preventing eavesdropping and advanced persistent threats (APTs).
● Parental Controls: Parents can use firewalls to block their children from accessing
explicit web content.
● Workplace Web Browsing Restrictions: Employers can restrict employees from
using the company network to access certain services and websites, like social media.
● Nationally Controlled Intranet: Governments can block access to certain web
content and services that conflict with national policies or values.
By allowing network owners to set specific rules, firewalls offer customizable protection for
various scenarios, enhancing overall network security.
Advantages of Using Firewall
● Protection From Unauthorized Access: Firewalls can be set up to restrict incoming
traffic from particular IP addresses or networks, preventing hackers or other
malicious actors from easily accessing a network or system.
● Prevention of Malware and Other Threats: Firewalls can be set up to block traffic
linked to known malware or other security concerns, assisting in the defense against
these kinds of attacks.
● Control of Network Access: By limiting access to specified individuals or groups for
particular servers or applications, firewalls can be used to restrict access to particular
network resources or services.
● Monitoring of Network Activity: Firewalls can be set up to record and keep track of
all network activity.
● Regulation Compliance: Many industries are bound by rules that demand the usage
of firewalls or other security measures.
● Network Segmentation: By using firewalls to split up a bigger network into smaller
subnets, the attack surface is reduced and the security level is raised.
Disadvantages of Using Firewall
● Complexity: Setting up and keeping up a firewall can be time-consuming and
difficult, especially for bigger networks or companies with a wide variety of users and
devices.
● Limited Visibility: Firewalls may not be able to identify or stop security risks that
operate at other levels, such as the application or endpoint level, because they can
only observe and manage traffic at the network level.
● False Sense of Security: Some businesses may place an excessive amount of reliance
on their firewall and disregard other crucial security measures like endpoint security
or intrusion detection systems.
● Limited adaptability: Because firewalls are frequently rule-based, they might not be
able to respond to fresh security threats.
● Performance Impact: Network performance can be significantly impacted by
firewalls, particularly if they are set up to analyze or manage a lot of traffic.
● Limited Scalability: Because firewalls are only able to secure one network,
businesses that have several networks must deploy many firewalls, which can be
expensive.
● Limited VPN support: Some firewalls might not allow complex VPN features like
split tunneling, which could restrict the experience of a remote worker.
● Cost: Purchasing many devices or add-on features for a firewall system can be
expensive, especially for businesses.
Conclusion
In conclusion, firewalls play a crucial role in safeguarding computers and networks. By
monitoring and controlling incoming and outgoing data, they help prevent unauthorized access
and protect against cyber threats. Using a firewall is a smart way to enhance security and ensure
a safer online experience for users and organizations alike.
