CN
Unit-1
Data communication Components: Representation of data, Data Flow, Network Topologies,
Protocols, OSI Reference Model, TCP/IP Reference Model. Physical Layer: Transmission
Media – Guided and Unguided Transmission Media.
1) Delivery. The system must deliver data to the correct destination. Data must be received by
the intended device or user and only by that device or user.
2) Accuracy. The system must deliver the data accurately. Data that have been altered in
transmission and left uncorrected are unusable.
3) Timeliness. The system must deliver data in a timely manner. Data delivered late are useless.
In the case of video and audio, timely delivery means delivering data as they are produced, in the
same order that they are produced, and without significant delay. This kind of delivery is called
real-time transmission.
4) Jitter. Jitter refers to the variation in the packet arrival time. It is the uneven delay in the
delivery of audio or video packets. For example, let us assume that video packets are sent every
30 ms. If some of the packets arrive with 30-ms delay and others with 40-ms delay, an uneven
quality in the video is the result.
Transmission medium. The transmission medium is the physical path by which a message
travels from sender to receiver. Some examples of transmission media include twisted-pair wire,
coaxial cable, fiber-optic cable, and radio waves.
~Data Representation
Information today comes in different forms such as text, numbers, images, audio, and video.
Text: In data communications, text is represented as a bit pattern, a sequence of bits (0s or 1s).
Different sets of bit patterns have been designed to represent text symbols. Each set is called a
code, and the process of representing symbols is called coding. Today, the prevalent coding
system is called Unicode, which uses 32 bits to represent a symbol or character used in any
language in the world. The American Standard Code for Information Interchange (ASCII),
developed some decades ago in the United States, now constitutes the first 128 characters in
Unicode and is also referred to as Basic Latin.
Numbers: Numbers are also represented by bit patterns. However, a code such as ASCII is not
used to represent numbers; the number is directly converted to a binary number to simplify
mathematical operations.
Images :Images are also represented by bit patterns. In its simplest form, an image is composed
of a matrix of pixels (picture elements), where each pixel is a small dot. The size of the pixel
depends on the resolution. For example, an image can be divided into 1000 pixels or 10,000
pixels. In the second case, there is a better representation of the image (better resolution), but
more memory is needed to store the image. After an image is divided into pixels, each pixel is
assigned a bit pattern. The size and the value of the pattern depend on the image. For an image
made of only black-and-white dots (e.g., a chessboard), a 1-bit pattern is enough to represent a
pixel. If an image is not made of pure white and pure black pixels, you can increase the size of
the bit pattern to include gray scale. For example, to show four levels of gray scale, you can use
2-bit patterns. A black pixel can be represented by 00, a dark gray pixel by 01, a light gray pixel
by 10, and a white pixel by 11. There are several methods to represent color images. One method
is called RGB, so called because each color is made of a combination of three primary colors:
red, green, and blue. The intensity of each color is measured, and a bit pattern is assigned to it.
Another method is called YCM, in which a color is made of a combination of three other primary
colors: yellow, cyan, and magenta.
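As a small illustration of the 2-bit grayscale scheme described above, here is a minimal Python sketch; the gray-level names and the sample image are made up for this example:

```python
# 2-bit patterns for four gray levels, as described in the text
GRAY_CODES = {"black": "00", "dark_gray": "01", "light_gray": "10", "white": "11"}

# a hypothetical 2x3 image given as named gray levels
image = [
    ["black", "white", "dark_gray"],
    ["light_gray", "white", "black"],
]

# each pixel becomes a 2-bit pattern; the whole image becomes one bit string
bit_pattern = ''.join(GRAY_CODES[pixel] for row in image for pixel in row)
print(bit_pattern)                                  # 001101101100
print(len(bit_pattern), "bits for", sum(len(r) for r in image), "pixels")
```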
~Data Flow:
Transmission mode refers to the way in which data is transferred between two devices. It is also
known as communication mode. Buses and networks are designed to allow communication to occur
between individual devices that are interconnected.
There are three types of transmission mode: Simplex, Half-Duplex, and Full-Duplex.
1. Simplex Mode –
In simplex mode, the communication is unidirectional: only one of the two devices on a link can
transmit, and the other can only receive. Example: a keyboard and a monitor, where the keyboard
can only send input and the monitor can only display output.
Advantages:
● Simplex mode is the easiest and most reliable mode of communication.
● It is the most cost-effective mode, as it only requires one communication channel.
● There is no need for coordination between the transmitting and receiving devices,
which simplifies the communication process.
● Simplex mode is particularly useful in situations where feedback or response is not
required, such as broadcasting or surveillance.
Disadvantages:
● Only one-way communication is possible.
● There is no way to verify if the transmitted data has been received correctly.
● Simplex mode is not suitable for applications that require bidirectional
communication.
2. Half-Duplex Mode –
In half-duplex mode, each station can both transmit and receive, but not at the same time. When
one device is sending, the other can only receive, and vice versa. The half-duplex mode is used
in cases where there is no need for communication in both directions at the same time. The entire
capacity of the channel can be utilized for each direction.
Example: Walkie-talkie in which message is sent one at a time and messages are sent in both
directions.
Channel Capacity = Bandwidth * Propagation Delay
Advantages:
● Half-duplex mode allows for bidirectional communication, which is useful in
situations where devices need to send and receive data.
● It is a more efficient mode of communication than simplex mode, as the channel can
be used for both transmission and reception.
● Half-duplex mode is less expensive than full-duplex mode, as it only requires one
communication channel.
Disadvantages:
● Half-duplex mode is less reliable than Full-Duplex mode, as both devices cannot
transmit at the same time.
● There is a delay between transmission and reception, which can cause problems in
some applications.
● There is a need for coordination between the transmitting and receiving devices,
which can complicate the communication process.
3. Full-Duplex Mode –
In full-duplex mode, both stations can transmit and receive simultaneously. In full-duplex mode,
signals going in one direction share the capacity of the link with signals going in the other
direction. This sharing can occur in two ways:
● Either the link must contain two physically separate transmission paths, one for
sending and the other for receiving.
● Or the capacity is divided between signals traveling in both directions.
Full-duplex mode is used when communication in both directions is required all the time. The
capacity of the channel, however, must be divided between the two directions.
Example: Telephone Network in which there is communication between two persons by a
telephone line, through which both can talk and listen at the same time.
Channel Capacity = 2 * Bandwidth * Propagation Delay
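A minimal sketch that plugs numbers into the two capacity formulas quoted above; the bandwidth and propagation delay values are arbitrary examples, and units are assumed to be bits per second and seconds:

```python
def half_duplex_capacity(bandwidth_bps, prop_delay_s):
    # Channel Capacity = Bandwidth * Propagation Delay (half-duplex formula above)
    return bandwidth_bps * prop_delay_s

def full_duplex_capacity(bandwidth_bps, prop_delay_s):
    # Channel Capacity = 2 * Bandwidth * Propagation Delay (full-duplex formula above)
    return 2 * bandwidth_bps * prop_delay_s

bandwidth = 1_000_000       # 1 Mbps link (example value)
delay = 0.02                # 20 ms propagation delay (example value)

print(half_duplex_capacity(bandwidth, delay))   # 20000.0 bits
print(full_duplex_capacity(bandwidth, delay))   # 40000.0 bits
```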
Advantages:
● Full-duplex mode allows for simultaneous bidirectional communication, which is
ideal for real-time applications such as video conferencing or online gaming.
● It is the most efficient mode of communication, as both devices can transmit and
receive data simultaneously.
● Full-duplex mode provides a high level of reliability and accuracy, as there is no need
for error correction mechanisms.
Disadvantages:
● Full-duplex mode is the most expensive mode, as it requires two communication
channels.
● It is more complex than simplex and half-duplex modes, as it requires two physically
separate transmission paths or a division of channel capacity.
● Full-duplex mode may not be suitable for all applications, as it requires a high level
of bandwidth and may not be necessary for some types of communication.
Network Topologies
Network topology refers to the arrangement of different elements like nodes, links, and devices
in a computer network. It defines how these components are connected and interact with each
other. Understanding various types of network topologies helps in designing efficient and robust
networks. Common types include bus, star, ring, mesh, and tree topologies, each with its own
advantages and disadvantages. In this section, we discuss the different types of network topology
and their advantages and disadvantages in detail.
Mesh Topology
Figure 1: Every device is connected to another via dedicated channels. These channels are
known as links.
● Suppose N devices are connected to each other in a mesh topology. The number of ports
required by each device is N-1. In Figure 1, there are 5 devices connected to each other,
hence the number of ports required by each device is 4. The total number of ports
required = N * (N-1).
● Suppose N devices are connected to each other in a mesh topology. The total number of
dedicated links required to connect them is NC2, i.e. N(N-1)/2. In Figure 1, there are 5
devices connected to each other, hence the total number of links required is 5*4/2 = 10,
as the sketch below also shows.
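A small Python sketch that evaluates the mesh-topology formulas above for a given number of devices; N = 5 reproduces the Figure 1 example:

```python
def mesh_requirements(n):
    ports_per_device = n - 1            # each device links to every other device
    total_ports = n * (n - 1)           # N * (N-1) ports across the whole topology
    dedicated_links = n * (n - 1) // 2  # NC2 = N(N-1)/2 dedicated links
    return ports_per_device, total_ports, dedicated_links

for n in (5, 10):
    ports, total, links = mesh_requirements(n)
    print(f"N={n}: {ports} ports per device, {total} ports total, {links} links")
# N=5: 4 ports per device, 20 ports total, 10 links
```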
Advantages of Mesh Topology
● Communication is very fast between the nodes.
● Mesh Topology is robust.
● The fault is diagnosed easily. Data is reliable because data is transferred among the
devices through dedicated channels or links.
● Provides security and privacy.
Disadvantages of Mesh Topology
● Installation and configuration are difficult.
● The cost of cables is high as bulk wiring is required, hence it is suitable only for a small
number of devices.
● The cost of maintenance is high.
A common example of mesh topology is the internet backbone, where various internet service
providers are connected to each other via dedicated channels. This topology is also used in
military communication systems and aircraft navigation systems.
Star Topology
In Star Topology, all the devices are connected to a single hub through a cable. This hub is the
central node and all other nodes are connected to the central node. The hub can be passive in
nature, i.e., not an intelligent hub (it simply broadcasts), or it can be intelligent, known as an
active hub. Active hubs have repeaters in them. Coaxial cables or RJ-45 cables are used to
connect the computers. In Star Topology, popular Ethernet LAN protocols such as CSMA/CD
(Carrier Sense Multiple Access with Collision Detection) are used.
Star Topology
Figure 2: A star topology having four systems connected to a single point of connection i.e. hub.
Advantages of Star Topology
● If N devices are connected to each other in a star topology, then the number of cables
required to connect them is N. So, it is easy to set up.
● Each device requires only 1 port i.e. to connect to the hub, therefore the total number
of ports required is N.
● It is robust. If one link fails, only that link is affected; the rest of the network continues
to work.
● Fault identification and fault isolation are easy.
● Star topology is cost-effective as it uses inexpensive coaxial cable.
Disadvantages of Star Topology
● If the concentrator (hub) on which the whole topology relies fails, the whole system
will crash down.
● The cost of installation is high.
● Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office where all
computers are connected to a central hub. This topology is also used in wireless networks where
all devices are connected to a wireless access point.
Bus Topology
Bus Topology is a network type in which every computer and network device is connected to a
single cable. It is bi-directional. It is a multi-point connection and a non-robust topology because
if the backbone fails the topology crashes. In Bus Topology, various MAC (Media Access
Control) protocols are followed by LAN ethernet connections like TDMA, Pure Aloha, CDMA,
Slotted Aloha, etc.
Bus Topology
Figure 3: A bus topology with shared backbone cable. The nodes are connected to the channel
via drop lines.
Advantages of Bus Topology
● If N devices are connected to each other in a bus topology, then the number of cables
required to connect them is 1, known as backbone cable, and N drop lines are
required.
● Coaxial or twisted pair cables are mainly used in bus-based networks that support up
to 10 Mbps.
● The cost of the cable is less compared to other topologies, but it is used to build small
networks.
● Bus topology is familiar technology as installation and troubleshooting techniques are
well known.
● CSMA is the most common method for this type of topology.
Disadvantages of Bus Topology
● A bus topology is quite simple, but it still requires a lot of cabling.
● If the common cable fails, then the whole system will crash down.
● If the network traffic is heavy, it increases collisions in the network. To avoid this,
various protocols are used in the MAC layer known as Pure Aloha, Slotted Aloha,
CSMA/CD, etc.
● Adding new devices to the network would slow down networks.
● Security is very low.
A common example of bus topology is the Ethernet LAN, where all devices are connected to a
single coaxial cable or twisted pair cable. This topology is also used in cable television networks.
Ring Topology
In a Ring Topology, devices form a ring, with each device connected to exactly two neighboring
devices. A number of repeaters are used in a ring topology with a large number of nodes, because
if someone wants to send some data to the last node in a ring topology with 100 nodes, then the
data has to pass through 99 nodes to reach the 100th node. Hence, to prevent data loss, repeaters
are used in the network.
The data flows in one direction, i.e. it is unidirectional, but it can be made bidirectional by
having 2 connections between each Network Node; this is called Dual Ring Topology. In Ring
Topology, the token passing protocol is used by the workstations to transmit the data.
Ring Topology
Figure 4: A ring topology comprises 4 stations connected with each forming a ring.
The most common access method of ring topology is token passing.
● Token passing: It is a network access method in which a token is passed from one
node to another node.
● Token: It is a frame that circulates around the network.
Operations of Ring Topology
1. One station is known as a monitor station which takes all the responsibility for
performing the operations.
2. To transmit the data, the station has to hold the token. After the transmission is done,
the token is to be released for other stations to use.
3. When no station is transmitting the data, then the token will circulate in the ring.
4. There are two types of token release techniques: Early token release releases the
token just after transmitting the data and Delayed token release releases the token
after the acknowledgment is received from the receiver.
Advantages of Ring Topology
● The data transmission is high-speed.
● The possibility of collision is minimum in this type of topology.
● Cheap to install and expand.
● It is less costly than a star topology.
Disadvantages of Ring Topology
● The failure of a single node in the network can cause the entire network to fail.
● Troubleshooting is difficult in this topology.
● The addition of stations in between or the removal of stations can disturb the whole
topology.
● Less secure.
Tree Topology
This topology is the variation of the Star topology. This topology has a hierarchical flow of data.
In Tree Topology, protocols like DHCP and SAC (Standard Automatic Configuration ) are used.
Tree Topology
Figure 5: In this, the various secondary hubs are connected to the central hub which contains the
repeater. Data flows from top to bottom, i.e. from the central hub to the secondary hubs and then
to the devices, or from bottom to top, i.e. from the devices to the secondary hubs and then to the central hub. It
is a multi-point connection and a non-robust topology because if the backbone fails the topology
crashes.
Advantages of Tree Topology
● It allows more devices to be attached to a single central hub thus it decreases the
distance that is traveled by the signal to come to the devices.
● It allows parts of the network to be isolated and given different priorities.
● We can add new devices to the existing network.
● Error detection and error correction are very easy in a tree topology.
Disadvantages of Tree Topology
● If the central hub fails, the entire system fails.
● The cost is high because of the cabling.
● If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At the top of the
tree is the CEO, who is connected to the different departments or divisions (child nodes) of the
company. Each department has its own hierarchy, with managers overseeing different teams
(grandchild nodes). The team members (leaf nodes) are at the bottom of the hierarchy, connected
to their respective managers and departments.
Hybrid Topology
This topological technology is the combination of all the various types of topologies we have
studied above. Hybrid Topology is used when the nodes are free to take any form. It means the
network can use individual topologies such as Ring or Star, or a combination of the various types
of topologies seen above. Each individual topology uses the protocol that has been discussed
earlier.
Hybrid Topology
The above figure shows the structure of the Hybrid topology. As seen it contains a combination
of all different types of networks.
Advantages of Hybrid Topology
● This topology is very flexible.
● The size of the network can be easily expanded by adding new devices.
Disadvantages of Hybrid Topology
● It is challenging to design the architecture of the Hybrid Network.
● Hubs used in this topology are very expensive.
● The infrastructure cost is very high as a hybrid network requires a lot of cabling and
network devices.
A common example of a hybrid topology is a university campus network. The network may have
a backbone of a star topology, with each building connected to the backbone through a switch or
router. Within each building, there may be a bus or ring topology connecting the different rooms
and offices. The wireless access points also create a mesh topology for wireless devices. This
hybrid topology allows for efficient communication between different buildings while providing
flexibility and redundancy within each building.
~Protocols
In computer networks, communication occurs between entities in different systems. An entity is
anything capable of sending or receiving information. However, two entities cannot simply send
bit streams to each other and expect to be understood. For communication to occur, the entities
must agree on a protocol. A protocol is a set of rules that govern data communications. A
protocol defines what is communicated, how it is communicated, and when it is communicated.
The key elements of a protocol are syntax, semantics, and timing.
o Syntax. The term syntax refers to the structure or format of the data, meaning the order in
which they are presented. For example, a simple protocol might expect the first 8 bits of data to
be the address of the sender, the second 8 bits to be the address of the receiver, and the rest of the
stream to be the message itself (a small parsing sketch of this format appears after this list).
o Semantics. The word semantics refers to the meaning of each section of bits. How is a
particular pattern to be interpreted, and what action is to be taken based on that interpretation?
For example, does an address identify the route to be taken or the final destination of the
message?
o Timing. The term timing refers to two characteristics: when data should be sent and how fast
they can be sent. For example, if a sender produces data at 100 Mbps but the receiver can process
data at only 1 Mbps, the transmission will overload the receiver and some data will be lost.
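Referring back to the syntax example above (first 8 bits = sender address, next 8 bits = receiver address, rest = message), here is a minimal Python sketch of how such a frame might be parsed. The frame layout and the addresses are hypothetical, not a real protocol:

```python
def parse_simple_frame(frame: bytes):
    # first 8 bits: sender address, next 8 bits: receiver address, rest: the message
    sender = frame[0]
    receiver = frame[1]
    message = frame[2:]
    return sender, receiver, message

frame = bytes([0x0A, 0x0B]) + b"HELLO"          # hypothetical frame for illustration
sender, receiver, message = parse_simple_frame(frame)
print(sender, receiver, message)                 # 10 11 b'HELLO'
```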
Types of Network Protocols
Communication across a network like the Internet is commonly described using the OSI model. The OSI
model has a total of seven layers. Secured connections, network management, and network
communication are the three main tasks that the network protocol performs. The purpose of
protocols is to link different devices.
The protocols can be broadly classified into three major categories:
● Network Communication
● Network Management
● Network Security
1. Network Communication
Communication protocols are really important for the functioning of a network. They are so
crucial that it is not possible to have computer networks without them. These protocols formally
set out the rules and formats through which data is transferred. These protocols handle syntax,
semantics, error detection, synchronization, and authentication. Below are some network
communication protocols:
Hypertext Transfer Protocol(HTTP)
It is a layer 7 protocol that is designed for transferring hypertext between two or more systems.
HTTP works on a client-server model; most of the data sharing over the web is done using HTTP.
Transmission Control Protocol(TCP)
TCP provides reliable stream delivery by using sequenced acknowledgments. It is a
connection-oriented protocol i.e., it establishes a connection between applications before sending
any data. It is used for communicating over a network. It has many applications such as emails,
FTP, streaming media, etc.
User Datagram Protocol(UDP)
It is a connectionless protocol that provides a basic but unreliable message service. It adds no flow
control, reliability, or error-recovery functions. UDP is useful in cases where reliability is not
required. It is used when we want faster transmission, for multicasting and broadcasting
connections, etc.
Border Gateway Protocol(BGP)
BGP is a routing protocol that controls how packets pass between routers in an autonomous
system (one or more networks run by a single organization) and how they connect to different networks. It
connects the endpoints of a LAN with other LANs and it also connects endpoints in different
LANs to one another.
Address Resolution Protocol(ARP)
ARP is a protocol that helps in mapping logical addresses to the physical addresses
acknowledged in a local network. For mapping and maintaining a correlation between these
logical and physical addresses a table known as ARP cache is used.
Internet Protocol(IP)
It is a protocol through which data is sent from one host to another over the internet. It is used for
addressing and routing data packets so that they can reach their destination.
Dynamic Host Configuration Protocol(DHCP)
It is a protocol for network management that is used to automate the process of configuring
devices on IP networks. A DHCP server automatically assigns an IP address and various other
configuration parameters to devices on a network so they can communicate with other IP networks.
It also allows devices to use various services such as NTP, DNS, or any other protocol based on
TCP or UDP.
2. Network Management
These protocols assist in describing the procedures and policies that are used in monitoring,
maintaining, and managing the computer network. These protocols also help in communicating
these requirements across the network to ensure stable communication. Network management
protocols can also be used for troubleshooting connections between a host and a client.
Internet Control Message Protocol(ICMP)
It is a layer 3 protocol that is used by network devices to forward operational information and
error messages. ICMP is used for reporting congestion, network errors, diagnostic purposes, and
timeouts.
Simple Network Management Protocol(SNMP)
It is a layer 7 protocol that is used for managing nodes on an IP network. There are three main
components in the SNMP protocol i.e., SNMP agent, SNMP manager, and managed device.
SNMP agent has the local knowledge of management details, it translates those details into a
form that is compatible with the SNMP manager. The manager presents the data acquired from
the SNMP agents, thus helping in monitoring network glitches and network performance, and in
troubleshooting them.
Gopher
It is a type of file retrieval protocol that provides downloadable files with some description for
easy management, retrieving, and searching of files. All the files are arranged on a remote
computer in a hierarchical manner. Gopher is an old protocol and it is not much used nowadays.
File Transfer Protocol(FTP)
FTP is a Client/server protocol that is used for moving files to or from a host computer, it allows
users to download files, programs, web pages, and other things that are available on other
services.
Post Office Protocol(POP3)
It is a protocol that a local mail client uses to get email messages from a remote email server
over a TCP/IP connection. Email servers hosted by ISPs also use the POP3 protocol to hold and
receive emails intended for their users. Eventually, these users will use email client software to
look at their mailbox on the remote server and to download their emails. After the email client
downloads the emails, they are generally deleted from the servers.
Telnet
It is a protocol that allows the user to connect to a remote computer program and to use it i.e., it
is designed for remote connectivity. Telnet creates a connection between a host machine and a
remote endpoint to enable a remote session.
3. Network Security
These protocols secure the data in passage over a network. These protocols also determine how
the network secures data from any unauthorized attempts to extract or review data. These
protocols make sure that no unauthorized devices, users, or services can access the network data.
Primarily, these protocols depend on encryption to secure data.
Secure Socket Layer(SSL)
It is a network security protocol mainly used for protecting sensitive data and securing internet
connections. SSL allows both server-to-server and client-to-server communication. All the data
transferred through SSL is encrypted thus stopping any unauthorized person from accessing it.
Hypertext Transfer Protocol(HTTPS)
It is the secured version of HTTP. This protocol ensures secure communication between two
computers where one sends the request through the browser and the other fetches the data from
the web server.
Transport Layer Security(TLS)
It is a security protocol designed for data security and privacy over the internet, its functionality
is encryption, checking the integrity of data i.e., whether it has been tampered with or not, and
authentication. It is generally used for encrypted communication between servers and web apps,
like a web browser loading a website, it can also be used for encryption of messages, emails, and
VoIP.
Some Other Protocols
Internet Message Access Protocol (IMAP)
● IMAP is used to retrieve messages from the mail server. By using IMAP, the mail user
can view and manage mails stored on the server from his system.
Session Initiation Protocol (SIP)
● SIP is used in video, voice, and messaging applications. This protocol is used for
initiating, managing, and terminating the session between two users while they are
communicating.
Real-Time Transport Protocol (RTP)
● This protocol is used to forward audio and video over IP networks. It is used with the
SIP protocol to send audio and video in real time.
Route Access Protocol (RAP)
● RAP is used in network management. It helps the user access the nearest router for
communication. RAP is less efficient as compared to SNMP.
Point To Point Tunnelling Protocol (PPTP)
● It is used to implement a VPN (Virtual Private Network). The PPTP protocol
encapsulates PPP frames in IP datagrams for transmission through an IP-based network.
Trivial File Transfer Protocol (TFTP)
● TFTP is the simplified version of FTP. TFTP is also used to transfer files over the internet.
~OSI Model
The OSI model, created in 1984 by ISO, is a reference framework that explains the process of
transmitting data between computers. It is divided into seven layers that work together to carry
out specialised network functions, allowing for a more systematic approach to networking.
OSI Model
Data Flow In OSI Model
When we transfer information from one device to another, it travels through the 7 layers of the
OSI model. Data first travels down through the 7 layers at the sender's end and then climbs back
up through the 7 layers at the receiver's end.
Data flows through the OSI model in a step-by-step process:
● Application Layer: Applications create the data.
● Presentation Layer: Data is formatted and encrypted.
● Session Layer: Connections are established and managed.
● Transport Layer: Data is broken into segments for reliable delivery.
● Network Layer: Segments are packaged into packets and routed.
● Data Link Layer: Packets are framed and sent to the next device.
● Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination correctly, and
these steps are reversed upon arrival.
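The layered data flow just described can be pictured as each layer wrapping the unit it receives from the layer above. The Python sketch below is purely illustrative: the header tags are made-up placeholders, not real protocol headers:

```python
payload = "Hello"                                        # created at the Application layer
headers = ["PresH", "SessH", "TranH", "NetwH", "DLH"]    # hypothetical header tags for the layers below

pdu = payload
for h in headers:                                        # sender side: each layer wraps the unit from above
    pdu = f"[{h}]" + pdu

bits = ''.join(format(ord(c), '08b') for c in pdu)       # Physical layer: everything becomes raw bits
print(pdu)                                               # [DLH][NetwH][TranH][SessH][PresH]Hello
print(bits[:16] + "...")

for h in reversed(headers):                              # receiver side: headers removed in reverse order
    pdu = pdu[len(h) + 2:]                               # strip the outermost "[...]" tag
print(pdu)                                               # Hello
```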
Layer number, responsibility, information form (data unit), and typical device or protocol for the layers listed:
● Layer 7 – Application Layer: Helps in identifying the client and synchronizing communication. Data unit: Message. Device/Protocol: SMTP.
● Layer 5 – Session Layer: Establishes connections, handles maintenance, and ensures authentication and security. Data unit: Message (or encrypted message). Device/Protocol: Gateway.
● Layer 2 – Data Link Layer: Node-to-node delivery of the message. Data unit: Frame. Device/Protocol: Switch, Bridge.
● Layer 1 – Physical Layer: Establishing physical connections between devices. Data unit: Bits. Device/Protocol: Hub, Repeater, Modem, Cables.
~TCP/IP
The TCP/IP model is a fundamental framework for computer networking. It stands for
Transmission Control Protocol/Internet Protocol, which are the core protocols of the Internet.
This model defines how data is transmitted over networks, ensuring reliable communication
between devices. It consists of four layers: the Link Layer, the Internet Layer, the Transport
Layer, and the Application Layer. Each layer has specific functions that help manage different
aspects of network communication, making it essential for understanding and working with
modern networks.
TCP/IP was designed and developed by the Department of Defense (DoD) in the 1970s and is
based on standard protocols. The TCP/IP model is a concise version of the OSI model. It
contains four layers, unlike the seven layers in the OSI model. In this article, we are going to
discuss the TCP/IP model in detail.
What Does TCP/IP Do?
The main work of TCP/IP is to transfer data from one device to another. The main requirement of
this process is to keep the data reliable and accurate so that the receiver receives the same
information that was sent by the sender. To ensure that each message reaches its final destination
accurately, the TCP/IP model divides the data into packets and combines them at the other end,
which helps in maintaining the accuracy of the data while transferring it from one end to the
other.
Difference Between TCP and IP
● Purpose – TCP (Transmission Control Protocol): ensures reliable, ordered, and error-checked
delivery of data between applications. IP (Internet Protocol): provides addressing and routing of
packets across networks.
● Header Size – TCP: larger, 20-60 bytes. IP: smaller, typically 20 bytes.
~Transmission Media
1. Guided Media
Guided Media is also referred to as Wired or Bounded transmission media. Signals being
transmitted are directed and confined in a narrow pathway by using physical links.
Features:
● High Speed
● Secure
● Used for comparatively shorter distances
There are 3 major types of Guided Media:
Twisted Pair Cable
It consists of 2 separately insulated conductor wires wound about each other. Generally, several
such pairs are bundled together in a protective sheath. They are the most widely used
Transmission Media. Twisted Pair is of two types:
● Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires twisted
around one another. This type of cable relies on the twisting of the wires, rather than a
physical shield, to reduce interference. It is used for telephonic applications.
Microwave Transmission
Infrared
Infrared waves are used for very short distance communication. They cannot penetrate through
obstacles. This prevents interference between systems. Frequency Range: 300 GHz – 400 THz. It
is used in TV remotes, wireless mouse, keyboard, printer, etc.
Transmission Impairment
● Attenuation – It means loss of energy. The strength of a signal decreases with
increasing distance, because energy is lost in overcoming the resistance of the medium;
the weakened signal is known as an attenuated signal. Amplifiers are used to amplify the
attenuated signal, which restores the original signal and compensates for this loss.
● Distortion – It means changes in the form or shape of the signal. This is generally
seen in composite signals made up of different frequencies. Each frequency component
has its own propagation speed while travelling through a medium, so the components
arrive at the final destination at different times, which leads to distortion. Therefore, they
have different phases at the receiver end from what they had at the sender's end.
● Noise – The random or unwanted signal that mixes up with the original signal is
called noise. There are several types of noise such as induced noise, crosstalk noise,
thermal noise and impulse noise which may corrupt the signal.
UNIT II
LAN: Wired LAN, Wireless LANs, Techniques for Bandwidth utilization: Multiplexing -
Frequency division, Time division and Wave division. Data Link Layer: Services, Framing,
Error Control: Parity bit method, Block coding, CRC, Hamming code, and Flow Control.
LAN:
What is a Local Area Network?
The full form of LAN is Local-area Network. It is a computer network that covers a small area
such as a building or campus up to a few kilometers in size. LANs are commonly used to
connect personal computers and workstations in company offices to share common resources,
like printers, and exchange information. A real-life analogy for a LAN is a family: just as each
family member is connected to the others, each device on a LAN is connected to the network.
Several experimental and early commercial LAN technologies were developed in the 1970s. The
Cambridge Ring is a type of LAN that was developed at Cambridge University in 1974.
Local Area Network
How do LANs Work?
A router serves as the hub where the majority of LANs connect to the Internet. Home LANs
often utilise a single router, but bigger LANs may also use network switches to transmit packets
more effectively.
LANs nearly always connect devices to the network via Ethernet, WiFi, or both of these
technologies. Ethernet is a wired way to connect devices to the Local Area Network; Ethernet
defines the physical and data link layers of the OSI model. WiFi is a protocol that is used to
connect devices to the Local Area Network wirelessly.
Many kinds of devices can be connected to a LAN, for example servers, desktop computers,
laptops, printers, Internet of Things (IoT) devices, and even game consoles. LANs are usually
used in offices to give internal staff members shared access to servers or printers that are linked
to the network.
Wireless Local Area Network (WLAN):
A Wireless Local Area Network (WLAN) is a type of network that uses wireless technology,
such as Wi-Fi, to connect devices in the same area. WLANs use wireless access points to
transmit data between devices, allowing for greater mobility and flexibility.
Advantages of WLAN:
● Mobility: WLANs provide greater device mobility and flexibility, as devices can
connect wirelessly from anywhere within the network range.
● Easy Installation: WLANs are easier to install than LANs, as they do not require
physical cabling and switches.
● Range: WLANs can cover a larger area than LANs, allowing for greater device
connectivity and flexibility.
Disadvantages of WLAN:
● Security: WLANs are less secure than LANs, as wireless signals can be intercepted
by unauthorized users and devices.
● Speed: WLANs provide slower data transfer rates than LANs, typically around 54
Mbps, which can result in slower data transfer between devices.
● Interference: WLANs are susceptible to interference from other wireless devices,
which can cause connectivity issues.
Similarities between LAN and WLAN:
● Both provide connectivity: The primary purpose of both LAN and WLAN is to
provide connectivity between devices, allowing them to share data and resources.
● Both use the same protocols: LANs and WLANs use the same protocols for data
transfer, such as TCP/IP and Ethernet, which ensures compatibility between devices.
● Both can support multiple devices: Both LANs and WLANs can support multiple
devices simultaneously, allowing multiple users to share data and resources.
● Both can be secured: Both LANs and WLANs can be secured using encryption and
authentication methods, ensuring that only authorized users have access to the
network.
● Both require network hardware: Both LANs and WLANs require network hardware,
such as routers, switches, and access points, to function properly.
● Both can be used for internet connectivity: Both LANs and WLANs can be used to
connect to the internet, providing access to online resources and services.
Let's compare LAN and WLAN:
● LAN: In a LAN, devices are connected locally with Ethernet cable.
● WLAN: For a WLAN, an Ethernet cable is not necessary; devices connect wirelessly.
Conclusion:
Both LANs and WLANs have their advantages and disadvantages, depending on the specific
requirements. LANs are generally faster and more secure, while WLANs provide greater
mobility and flexibility. Choosing the right network for your needs depends on your specific
requirements, such as speed, security, and device mobility.
Multiplexing
Multiplexing is a technique by which multiple signals are combined and transmitted over a single
communication channel.
Uses of Multiplexing
Multiplexing is used for a variety of purposes in data communications to enhance the efficiency
and capacity of networks. Here are some of the main uses:
● Efficient Utilization of Resources: Multiplexing allows multiple signals to share the
same communication channel, making the most of the available bandwidth. This is
especially important in environments where bandwidth is limited.
● Telecommunications: In telephone networks, multiplexing enables the simultaneous
transmission of multiple telephone calls over a single line, enhancing the capacity of
the network.
● Internet and Data Networks: Multiplexing is used in internet communications to
transmit data from multiple users over a single network line, improving the efficiency
and speed of data transfer.
● Satellite Communications: Multiplexing helps in efficiently utilizing the available
bandwidth on satellite transponders, allowing multiple signals to be transmitted and
received simultaneously.
Types of Multiplexing
There are five different types of multiplexing techniques, each designed to handle various types
of data and communication needs. These techniques include:
● Frequency Division Multiplexing (FDM)
● Time-Division Multiplexing (TDM)
● Wavelength Division Multiplexing (WDM)
● Code-division multiplexing (CDM)
● Space-division multiplexing (SDM)
1. Frequency Division Multiplexing
Frequency division multiplexing is defined as a type of multiplexing where the bandwidth of a
single physical medium is divided into a number of smaller, independent frequency channels.
Frequency Division Multiplexing is used in radio and television transmission.
In FDM, we can observe a lot of inter-channel cross-talk, due to the fact that in this type of
multiplexing the bandwidth is divided into frequency channels. In order to prevent the
inter-channel cross talk, unused strips of bandwidth must be placed between each channel. These
unused strips between each channel are known as guard bands.
Statistical TDM: Statistical TDM is a type of Time Division Multiplexing where the output
frame collects data from the input lines until it is full, rather than leaving empty slots as in
Synchronous TDM. In statistical TDM, we need to include the address of each particular data
item in the slot that is being sent to the output frame, as the sketch below illustrates.
Statistical TDM is a more efficient type of time-division multiplexing, as the channel capacity is
fully utilized, which improves the bandwidth efficiency.
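A tiny Python sketch of the statistical TDM idea described above: only input lines that actually have data get a slot in the output frame, and each slot carries the source address. The source names and data values are invented for illustration:

```python
# one buffered data unit (or None if the line is idle) per input line in this scan cycle
inputs = {"A": "a1", "B": None, "C": "c1", "D": "d1"}

# statistical TDM: skip idle lines, tag every occupied slot with its source address
frame = [(addr, data) for addr, data in inputs.items() if data is not None]
print(frame)   # [('A', 'a1'), ('C', 'c1'), ('D', 'd1')]
```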
Frames are the units of digital transmission, particularly in computer networks and
telecommunications. Frames are comparable to the packets of energy called photons in the case
of light energy. Frames are used continuously in the Time Division Multiplexing process.
A point-to-point connection between two computers or devices consists of a link (such as a wire)
over which data is transmitted as a stream of bits. However, these bits must be framed into
discernible blocks of information. Framing is a function of the data link layer. It provides a way
for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and
other data link layer technologies have their own frame structures. Frames have headers that
contain information such as error-checking codes.
At the data link layer, the message from the sender is encapsulated with the sender's and
receiver's addresses and delivered to the receiver. The advantage of using frames is that data is
broken up into recoverable chunks that can easily be checked for corruption.
The process of dividing the data into frames and reassembling it is transparent to the user and is
handled by the data link layer.
Framing
Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered accurately
and efficiently.
Problems in Framing
● Detecting start of the frame: When a frame is transmitted, every station must be
able to detect it. Station detects frames by looking out for a special sequence of bits
that marks the beginning of the frame i.e. SFD (Starting Frame Delimiter).
● How does the station detect a frame: Every station listens to link for SFD pattern
through a sequential circuit. If SFD is detected, sequential circuit alerts station.
Station checks destination address to accept or reject frame.
● Detecting end of frame: When to stop reading the frame.
● Handling errors: Framing errors may occur due to noise or other transmission
errors, which can cause a station to misinterpret the frame. Therefore, error detection
and correction mechanisms, such as cyclic redundancy check (CRC), are used to
ensure the integrity of the frame.
● Framing overhead: Every frame has a header and a trailer that contains control
information such as source and destination address, error detection code, and other
protocol-related information. This overhead reduces the available bandwidth for data
transmission, especially for small-sized frames.
● Framing incompatibility: Different networking devices and protocols may use
different framing methods, which can lead to framing incompatibility issues. For
example, if a device using one framing method sends data to a device using a
different framing method, the receiving device may not be able to correctly interpret
the frame.
● Framing synchronization: Stations must be synchronized with each other to avoid
collisions and ensure reliable communication. Synchronization requires that all
stations agree on the frame boundaries and timing, which can be challenging in
complex networks with many devices and varying traffic loads.
● Framing efficiency: Framing should be designed to minimize the amount of data
overhead while maximizing the available bandwidth for data transmission. Inefficient
framing methods can lead to lower network performance and higher latency.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the frame,
the length of the frame itself acts as a delimiter.
● Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
● Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the beginning of
the next frame to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of
the frame. Used in Ethernet(802.3). The problem with this is that sometimes the
length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the
frame. Used in Token Ring. The problem with this is that ED can occur in the data.
This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If data contains
ED then, a byte is stuffed into data to differentiate it from ED.
Let ED = “$” –> if data contains ‘$’ anywhere, it can be escaped using ‘\O’ character.
–> if data contains ‘\O$’ then, use ‘\O\O\O$'($ is escaped using \O and \O is escaped
using \O).
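A minimal Python sketch of character/byte stuffing as described above, using '$' as the end delimiter (ED) and a backslash as the escape character (standing in for the '\O' of the text); both choices are assumptions for illustration:

```python
ED, ESC = '$', '\\'            # end delimiter and escape character (assumed for this sketch)

def stuff(data: str) -> str:
    out = []
    for ch in data:
        if ch in (ED, ESC):    # escape any delimiter or escape char appearing in the data
            out.append(ESC)
        out.append(ch)
    return ''.join(out) + ED   # the frame ends with the (unescaped) delimiter

def unstuff(frame: str) -> str:
    data, i = [], 0
    while i < len(frame):
        if frame[i] == ESC:    # next character is literal data
            data.append(frame[i + 1]); i += 2
        elif frame[i] == ED:   # unescaped delimiter marks the end of the frame
            break
        else:
            data.append(frame[i]); i += 1
    return ''.join(data)

msg = "pay$load"
framed = stuff(msg)
print(framed)                  # pay\$load$
print(unstuff(framed))         # pay$load
```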
Parity
Error Detection Codes: Binary information is transferred from one location to another through
some communication medium. External noise can change bits from 1 to 0 or 0 to 1. This change
in values changes the meaning of the actual message and is called an error. For efficient data
transfer, there should be error detection and correction codes. An error detection code is a binary
code that detects digital errors during transmission. To detect errors in the received message, we
add some extra bits to the actual data.
Without the addition of redundant bits, it is not possible to detect errors in the received message.
There are 3 ways in which we can detect errors in the received message:
1. Parity Bit
2. CheckSum
3. Cyclic Redundancy Check (CRC)
We'll look at the parity bit method in depth here:
Parity Bit Method: A parity bit is an extra bit included in a binary message to make the total
number of 1's either odd or even. Parity word denotes the number of 1's in a binary string. There
are two parity systems – even and odd parity checks.
1. Even Parity Check: The total number of 1's in the given data should be even. So if the total
number of 1's in the data bits is odd, then a single 1 will be appended to make the total number of
1's even, else 0 will be appended (if the total number of 1's is already even). Hence, if any error
occurs, the parity check circuit will detect it at the receiver's end. Let's understand this with an
example; see the table below:
3-bit data | Odd parity bit | Even parity bit
000 | 1 | 0
001 | 0 | 1
010 | 0 | 1
011 | 1 | 0
100 | 0 | 1
101 | 1 | 0
110 | 1 | 0
111 | 0 | 1
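A small Python sketch that computes the parity bits for 3-bit data words and reproduces the table above:

```python
def even_parity_bit(bits: str) -> int:
    # append this bit so the total number of 1s (data + parity) is even
    return bits.count('1') % 2

def odd_parity_bit(bits: str) -> int:
    # append this bit so the total number of 1s (data + parity) is odd
    return 1 - even_parity_bit(bits)

for value in range(8):
    data = format(value, '03b')
    print(data, odd_parity_bit(data), even_parity_bit(data))
# prints 000 1 0, 001 0 1, ... matching the table above
```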
Hamming Code
In Computer Networks, Hamming code is a set of error-correction codes used to detect and
correct errors that may occur when data is moved from the sender to the receiver. The Hamming
method corrects the error by finding the position at which the error has occurred.
Redundant Bits
Redundant bits are extra binary bits that are generated and added to the information-carrying bits
of a data transfer to ensure that no bits were lost during the data transfer. The redundancy bits are
placed at certain calculated positions to detect and correct errors. The number of bit positions in
which two codewords differ is called the "Hamming distance".
Error Correction Code − This is the relationship between data bits and redundancy bits used to
correct a single-bit error. A frame consists of M data bits and R redundant bits. Suppose the total
length of the frame is N (N = M + R). An N-bit unit containing data and the check bits is often
referred to as an N-bit codeword.
The following reasoning is used to find the number of redundant bits.
Number of possible single-bit errors = M + R
Number of states for no error = 1
So, the number of redundant bits (R) that can represent all states (M + R + 1) must satisfy −
2^R ≥ M + R + 1
where R = number of redundant bits, and M = number of data bits.
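A tiny Python sketch that applies the inequality 2^R ≥ M + R + 1 above to find the smallest number of redundant bits for a given number of data bits:

```python
def min_redundant_bits(m: int) -> int:
    r = 0
    while 2 ** r < m + r + 1:   # 2^R must cover all M + R + 1 states
        r += 1
    return r

for m in (4, 7, 8):
    print(m, "data bits ->", min_redundant_bits(m), "redundant bits")
# 4 -> 3, 7 -> 4, 8 -> 4
```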
Steps to find the Hamming Code −
The hamming method uses the extra parity bits to allow the identification of a single-bit error.
Step 1 − First write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc.)
Step 2 − Mark all the bit positions that are powers of two as parity bits (1, 2, 4, 8, 16, 32,
64, etc.)
Step 3 − All other bit positions are for the data to be encoded using (3, 5, 6, 7, 9, 10 and
11, etc.)
Each parity bit calculates the parity for some of the bits in the code word. The position of the
parity bit determines the sequence of bits that it alternately checks and skips.
Position 1 − Check 1 bit, then skip 1 bit, check 1 bit, then skip 1 bit, and so on (Ex −
1, 3, 5, 7, 9, 11, etc.)
Position 2 − Check 2 bits, then skip 2 bits, check 2 bits, then skip 2 bits (Ex −
2, 3, 6, 7, 10, 11, 14, 15, etc.)
Position 4 − Check 4 bits, then skip 4 bits, check 4 bits, then skip 4 bits (Ex − 4, 5, 6, 7, 12,
13, 14, 15, etc.)
Position 8 − Check 8 bits, then skip 8 bits, check 8 bits, then skip 8 bits (Ex − 8, 9, 10, 11,
12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31).
Note − Set the parity bit to 1 if the total number of 1s in the positions it checks is odd; set the
parity bit to 0 if the total number of 1s in the positions it checks is even.
Example −
Construct the even parity Hamming code word for the data byte 10011010.
The number of data bits (M) in 10011010 is 8.
The number of redundancy bits R is found from 2^R ≥ M + R + 1.
If we take R = 3, then 2^3 = 8 and M + R + 1 = 8 + 3 + 1 = 12, so 3 parity bits are not sufficient.
If we take R = 4, then 2^4 = 16 and M + R + 1 = 8 + 4 + 1 = 13, so 4 parity bits are sufficient.
Therefore, the number of redundancy bits = 4, and the total number of bits in the codeword is
8 + 4 = 12.
The data bits occupy positions 3, 5, 6, 7, 9, 10, 11 and 12, and the parity bits occupy positions
1, 2, 4 and 8:
_ _ 1 _ 0 0 1 _ 1 0 1 0
Position 1 checks bits 1, 3, 5, 7, 9 and 11. The checked data bits (1, 0, 1, 1, 1) contain an even
number of 1s, so set position 1 to 0:
0 _ 1 _ 0 0 1 _ 1 0 1 0
Position 2 checks bits 2, 3, 6, 7, 10 and 11. The checked data bits (1, 0, 1, 0, 1) contain an odd
number of 1s, so set position 2 to 1:
0 1 1 _ 0 0 1 _ 1 0 1 0
Position 4 checks bits 4, 5, 6, 7 and 12. The checked data bits (0, 0, 1, 0) contain an odd number
of 1s, so set position 4 to 1:
0 1 1 1 0 0 1 _ 1 0 1 0
Position 8 checks bits 8, 9, 10, 11 and 12. The checked data bits (1, 0, 1, 0) contain an even
number of 1s, so set position 8 to 0:
0 1 1 1 0 0 1 0 1 0 1 0
Code Word = 011100101010
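A compact Python sketch of the even-parity Hamming encoding just worked through; running it on the data byte 10011010 reproduces the code word 011100101010:

```python
def hamming_encode(data_bits: str) -> str:
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:          # 2^R >= M + R + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)               # 1-indexed code word
    j = 0
    for i in range(1, n + 1):
        if i & (i - 1) == 0:           # powers of two are parity positions, skip for now
            continue
        code[i] = int(data_bits[j])
        j += 1
    for p in (2 ** k for k in range(r)):
        ones = sum(code[i] for i in range(1, n + 1) if i & p)
        code[p] = ones % 2             # even parity over the positions this bit checks
    return ''.join(str(b) for b in code[1:])

print(hamming_encode("10011010"))      # 011100101010
```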
CRC
The Cyclic Redundancy Check (CRC) is the most powerful method for error detection. Given a
k-bit message, the transmitter creates an (n – k)-bit sequence called the frame check sequence
(FCS). The outgoing frame, containing n bits, is exactly divisible by some predetermined number.
Modulo-2 arithmetic is used: binary addition with no carries, just like the XOR operation.
Redundancy means duplication. The redundancy bits used by CRC are derived by dividing the
data unit by a predetermined divisor; the remainder is the CRC.
Qualities of CRC
It should have exactly one bit less than the divisor.
Appending it to the end of the data unit should make the resulting bit sequence exactly
divisible by the divisor.
CRC generator and checker
Process
A string of n 0s is appended to the data unit, where n is one less than the number of bits
in the predetermined divisor.
The new data unit is divided by the divisor using modulo-2 binary division; the remainder
resulting from the division is the CRC.
The n-bit CRC obtained in step 2 replaces the appended 0s at the end of the data unit.
Example
Message D = 1010001101 (10 bits)
Predetermined divisor P = 110101 (6 bits)
FCS R = to be calculated (5 bits)
Hence, n = 15, k = 10 and (n – k) = 5.
The message is multiplied by 2^5, giving 101000110100000.
This product is divided by P using modulo-2 division; the remainder is R = 01110.
The remainder is added to 2^5 · D to give T = 101000110101110, which is transmitted.
Suppose that there are no errors and the receiver gets T intact. The received frame is divided
by P; a remainder of zero indicates that no error has occurred.
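A short Python sketch of the CRC calculation above using modulo-2 (XOR) division; it reproduces R = 01110 and shows that the transmitted frame T leaves a zero remainder at the receiver:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Return the remainder of modulo-2 (XOR) binary division."""
    rem = list(map(int, dividend))
    div = list(map(int, divisor))
    for i in range(len(rem) - len(div) + 1):
        if rem[i]:                          # only divide where the current leading bit is 1
            for j, b in enumerate(div):
                rem[i + j] ^= b
    return ''.join(map(str, rem[-(len(div) - 1):]))

D, P = "1010001101", "110101"
R = mod2_div(D + "0" * (len(P) - 1), P)     # append (n - k) zeros, then divide
T = D + R                                    # transmitted frame
print(R)                # 01110
print(T)                # 101000110101110
print(mod2_div(T, P))   # 00000 -> receiver detects no error
```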
Block Coding
In block coding, the input data is taken and transformed into a longer block of encoded data by
adding some redundant data to it. This added redundant data helps to detect and correct errors
that occur during transmission and storage.
Block coding method generally works on binary data which is represented in the form of 0s and
1s. To perform block coding, various techniques are available, such as parity check codes,
Hamming codes, Reed-Solomon codes, BCH codes, etc. The parity check code is the simplest
technique to perform block coding. However, this technique has some limitations; for example, it
can detect only single-bit errors. The other block coding techniques are more advanced and can
detect as well as correct errors.
Block coding is extensively used in various fields of digital electronics, such as in wireless
communication, satellite data communication, optical fiber communication, digital data storage
devices, and more.
Types of Block Codes used in Digital Electronics
In digital electronics, there are several different types of block codes used to perform block
coding of data. Some common types of block codes are described below:
Parity Check Codes
Parity check codes are the simplest block codes used for error detection in digital electronics. In
this block coding technique, an extra parity bit is included with each block of data. The
calculation of the parity bit is done as per the number of 1s in the block of data. However, the
parity check codes can detect only 1-bit errors, also they cannot correct them.
Hamming Codes
Hamming codes are relatively advanced codes than parity check codes used for block coding in
digital electronics. These codes are able to detect as well as correct 1-bit errors. This method
adds additional redundant bits to each data block to create a specific code-word. The positions of
the redundant bits in the code-word allow for detection and correction of errors in the data.
Reed-Solomon Codes
Reed-Solomon codes are highly advanced codes used for block coding in digital electronic
systems where robust error detection and correction is desired. These codes have ability to detect
and correct multi-bit errors in a data block. The operation of Reed-Solomon Codes is based on
the combination of parity checks and polynomial mathematics, where parity check detects errors
in the data block, while the error locator polynomials correct them. Reed-Solomon codes are
extensively used in the field of digital communication, satellite communication and data storage
devices.
Bose-Chaudhuri-Hocquenghem (BCH) Codes
BCH codes are another type of block codes used for error detection and correction in data blocks.
These codes provide higher flexibility than Reed-Solomon codes in terms of the number of errors
that they can correct. BCH codes are mainly used where multiple errors must be corrected, such
as in magnetic storage devices.
Convolution Codes
Convolutional codes are another class of error-correcting codes; unlike block codes, they operate
on continuous streams of data. Turbo codes are built from parallel concatenated convolutional
codes and use an iterative decoding process to provide excellent error correction capabilities.
These codes are primarily used in wireless and deep-space communications, where noise levels
are very high.
Low-Density Parity-Check (LDPC) Codes
LDPC codes are types of error correction codes known for their high performance and low
complexity. These codes are mainly employed in modern digital communication systems like
4G, 5G, Wi-Fi, etc. for error correction.
Advantages of Block Coding in Digital Electronics
Block coding offers several benefits in the field of digital electronics. Some key advantages of
block coding in digital electronics are listed below:
● Block coding improves the integrity of the received data through the detection and
correction of errors that occur during transmission and storage.
● Block coding improves the overall reliability of data transmission.
● Block coding increases the immunity of the communication channel against noise and
interference.
● Block coding allows for efficient utilization of storage space and channel bandwidth
through error correction.
Disadvantages of Block Coding in Digital Electronics
Apart from various advantages, block coding also has some disadvantages which are given
below:
● Block coding increases redundancy in the data due to the addition of extra bits for error
correction.
● Block coding increases the overall size of the block code, which consumes extra storage
space or channel bandwidth.
● Block coding can reduce the overall performance of the system due to the additional
encoding and decoding processes.
● Block coding can cause delays in data transmission.
● Block coding involves complex algorithms and hardware resources, which introduce
complexity in its implementation.
Conclusion
Block coding is a method of error detection and correction used in data communication and
storage to ensure the integrity of the data. It involves the addition of redundancy to the original
data that allows for detection and correction of errors occurred during transmission and storage
of the data. Overall, block coding is an essential process in data transmission and storage to
ensure accuracy and reliability of the digital information.
Flow control is a design issue at the Data Link Layer. It is a technique that ensures the
proper flow of data from sender to receiver. It is essential because the sender may
transmit data at a very fast rate while the receiver cannot receive and process it at that
rate. This can happen if the receiver has a very high load of traffic compared to the sender,
or if the receiver has less processing power than the sender. Flow control is basically a
technique that permits two stations working and processing at different
speeds to communicate with one another. Flow control in the Data Link Layer restricts
and coordinates the number of frames or the amount of data the sender can send before it waits for an
acknowledgement from the receiver. In other words, flow control is a set of procedures that tells the sender
how much data or how many frames it can transmit before the data overwhelms the receiver. The
receiving device has only a limited speed and a limited amount of memory to store data. This is
why the receiving device should be able to inform the sender to stop the transmission
temporarily before its limit is reached. It also needs a buffer, a large block
of memory, for storing data or frames until they are processed.
Flow control can also be understood as a speed-matching mechanism between two stations.
UNIT III
Medium Access Control Sublayer: Protocols - Stop and Wait, Go back n, Selective
Repeat, Sliding Window Protocols, Multiple access protocols: ALOHA, CSMA,
Collision free protocols, IEEE 802.3 standards, and HDLC. Network Layer:
Switching Techniques, Tunneling, Fragmentation, Logical addressing – IPV4, IPV6,
Address Mapping
The medium access control (MAC) is a sublayer of the data link layer of the open system
interconnections (OSI) reference model for data transmission. It is responsible for flow control
and multiplexing for transmission medium. It controls the transmission of data packets via
remotely shared channels. It sends data over the network interface card.
MAC Layer in the OSI Model
The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems. The data
link layer is the second lowest layer. It is divided into two sublayers −
The logical link control (LLC) sublayer
The medium access control (MAC) sublayer
The following diagram depicts the position of the MAC layer −
The figure above shows the working of the Stop and Wait protocol. If there is a sender and a
receiver, the sender sends a packet, known as a data packet. The sender will
not send the second packet without receiving the acknowledgment of the first packet. The
receiver sends an acknowledgment for the data packet that it has received. Once the
acknowledgment is received, the sender sends the next packet. This process continues until all
the packets are sent. The main advantage of this protocol is its simplicity, but it has some
disadvantages as well. For example, if there are 1000 data packets to be sent, they cannot all be
sent at once, because in the Stop and Wait protocol only one packet is sent at a time.
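A minimal Python sketch of this behaviour is given below; the channel() helper is a made-up stand-in for an unreliable link and the loss rate is arbitrary. It only illustrates the "send one packet, wait for its ACK, retransmit on loss" cycle described above.

```python
import random

# A toy stop-and-wait sender: send one packet, wait for its ACK,
# retransmit on loss, then move to the next packet.

def channel(packet, loss_rate=0.2):
    """Deliver the packet and return an ACK, or return None to model loss."""
    if random.random() < loss_rate:
        return None                      # packet or ACK lost in transit
    return ("ACK", packet[0])            # receiver acknowledges the sequence number

def stop_and_wait_send(packets):
    transmissions = 0
    for seq, data in enumerate(packets):
        while True:                      # keep retransmitting until ACKed
            transmissions += 1
            ack = channel((seq, data))
            if ack == ("ACK", seq):
                break                    # ACK received, send the next packet
    return transmissions

print(stop_and_wait_send(["frame-0", "frame-1", "frame-2"]))
```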
Disadvantages of Stop and Wait protocol
The following are the problems associated with a stop and wait protocol:
1. Problems occur due to lost data
Suppose the sender sends the data and the data is lost. The receiver waits for the data for a
long time. Since the receiver does not receive the data, it does not send any
acknowledgment. Since the sender does not receive any acknowledgment, it will not send the
next packet. This problem occurs because the data is lost.
In this case, two problems occur:
○ Sender waits for an infinite amount of time for an acknowledgment.
○ Receiver waits for an infinite amount of time for the data.
2. Problems occur due to lost acknowledgment
Suppose the sender sends the data and it has also been received by the receiver. On receiving the
packet, the receiver sends the acknowledgment. In this case, the acknowledgment is lost in a
network, so there is no chance for the sender to receive the acknowledgment. There is also no
chance for the sender to send the next packet as in stop and wait protocol, the next packet cannot
be sent until the acknowledgment of the previous packet is received.
In this case, one problem occurs:
○ Sender waits for an infinite amount of time for an acknowledgment.
3. Problem due to the delayed data or acknowledgment
Suppose the sender sends the data and it has also been received by the receiver. The receiver then
sends the acknowledgment but the acknowledgment is received after the timeout period on the
sender's side. As the acknowledgment arrives late, it can be wrongly considered as the
acknowledgment of some other data packet.
Go-Back-N ARQ
Before understanding the working of Go-Back-N ARQ, we first look at the sliding window
protocol. As we know that the sliding window protocol is different from the stop-and-wait
protocol. In the stop-and-wait protocol, the sender can send only one frame at a time and cannot
send the next frame without receiving the acknowledgment of the previously sent frame,
whereas, in the case of sliding window protocol, the multiple frames can be sent at a time. The
variations of sliding window protocol are Go-Back-N ARQ and Selective Repeat ARQ. Let's
understand 'what is Go-Back-N ARQ'.
What is Go-Back-N ARQ?
In Go-Back-N ARQ, N is the sender's window size. Suppose we say that Go-Back-3, which
means that the three frames can be sent at a time before expecting the acknowledgment from the
receiver.
It uses the principle of protocol pipelining in which the multiple frames can be sent before
receiving the acknowledgment of the first frame. If we have five frames and the concept is
Go-Back-3, which means that the three frames can be sent, i.e., frame no 1, frame no 2, frame no
3 can be sent before expecting the acknowledgment of frame no 1.
In Go-Back-N ARQ, the frames are numbered sequentially as Go-Back-N ARQ sends the
multiple frames at a time that requires the numbering approach to distinguish the frame from
another frame, and these numbers are known as the sequential numbers.
The number of frames that can be sent at a time totally depends on the size of the sender's
window. So, we can say that 'N' is the number of frames that can be sent at a time before
receiving the acknowledgment from the receiver.
If the acknowledgment of a frame is not received within an agreed-upon time period, then all the
frames available in the current window will be retransmitted. Suppose we have sent the frame no
5, but we didn't receive the acknowledgment of frame no 5, and the current window is holding
three frames, then these three frames will be retransmitted.
The sequence numbers of the outbound frames depend upon the size of the sender's window.
Suppose the sender's window size is 4, and we have ten frames to send; then the sequence
numbers will not simply be 1,2,3,4,5,6,7,8,9,10. Let's understand this through an example.
○ N is the sender's window size.
○ If the size of the sender's window is 4 then the sequence number will be
0,1,2,3,0,1,2,3,0,1,2, and so on.
The number of bits in the sequence number is 2, which generates the binary sequence 00, 01, 10, 11.
Working of Go-Back-N ARQ
Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be sent.
These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of the
frames. Mainly, the sequence number is decided by the sender's window size. But, for the better
understanding, we took the running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's consider
the window size as 4, which means that the four frames can be sent at a time before expecting the
acknowledgment of the first frame.
Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0,1,2,3, and now the
sender is expected to receive the acknowledgment of the 0th frame.
Let's assume that the receiver has sent the acknowledgment for frame 0, and the sender has
successfully received it.
The sender will then send the next frame, i.e., 4, and the window slides containing four frames
(1,2,3,4).
The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will slide
having four frames (2,3,4,5).
Now, let's assume that the receiver is not acknowledging frame no 2; either the frame is lost,
or the acknowledgment is lost. Instead of sending frame no 6, the sender goes back to 2,
which is the first frame of the current window, and retransmits all the frames in the current window,
i.e., 2, 3, 4, 5.
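The retransmission behaviour can be sketched roughly in Python as below. This is a simplified, round-based model (the real protocol slides the window frame by frame as individual acknowledgments arrive), and deliver() is a hypothetical stand-in for the link.

```python
# A minimal Go-Back-N sender sketch (window size N = 4).
# deliver() returns True if the frame and its acknowledgment both get through.

def go_back_n(frames, N=4, deliver=lambda seq: True):
    base = 0                                  # oldest unacknowledged frame
    transmissions = 0
    while base < len(frames):
        window = frames[base:base + N]        # frames currently allowed in flight
        ok = True
        for offset, frame in enumerate(window):
            transmissions += 1
            if not deliver(base + offset):    # loss detected (timeout / NAK):
                ok = False                    # this frame and everything after it
                break                         # will be resent in the next round
        if ok:
            base += len(window)               # whole window acknowledged, slide it
        # else: base stays put, so the same window is retransmitted (go back N)
    return transmissions

frames = list(range(11))                      # sequence numbers 0..10
print(go_back_n(frames))                      # lossless link: 11 transmissions
```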
Important points related to Go-Back-N ARQ:
○ In Go-Back-N, N determines the sender's window size, and the size of the receiver's
window is always 1.
○ It does not consider the corrupted frames and simply discards them.
○ It does not accept the frames which are out of order and discards them.
○ If the sender does not receive the acknowledgment, it leads to the retransmission of all
the current window frames.
Let's understand the Go-Back-N ARQ through an example.
Example 1: In GB4, if every 6th packet being transmitted is lost and we have to send 10
packets, then how many transmissions are required?
Solution: Here, GB4 means that N is equal to 4. The size of the sender's window is 4.
Step 1: As the window size is 4, so four packets are transferred at a time, i.e., packet no 1, packet
no 2, packet no 3, and packet no 4.
Step 2: Once the first window has been transmitted, the sender receives the acknowledgment
of the first frame, i.e., packet no 1. Once the acknowledgment is received, the sender sends the next
packet, i.e., packet no 5. In this case, the window slides to hold four packets, i.e., 2,3,4,5, and
excludes packet 1, as the acknowledgment of packet 1 has been received successfully.
Step 3: Now, the sender receives the acknowledgment of packet 2. After receiving the
acknowledgment for packet 2, the sender sends the next packet, i.e., packet no 6. As mentioned
in the question that every 6th is being lost, so this 6th packet is lost, but the sender does not know
that the 6th packet has been lost.
Step 4: The sender receives the acknowledgment for the packet no 3. After receiving the
acknowledgment of 3rd packet, the sender sends the next packet, i.e., 7th packet. The window will
slide having four packets, i.e., 4, 5, 6, 7.
Step 5: When the packet 7 has been sent, then the sender receives the acknowledgment for the
packet no 4. When the sender has received the acknowledgment, then the sender sends the next
packet, i.e., the 8th packet. The window will slide having four packets, i.e., 5, 6, 7, 8.
Step 6: When the packet 8 is sent, then the sender receives the acknowledgment of packet 5. On
receiving the acknowledgment of packet 5, the sender sends the next packet, i.e., 9th packet. The
window will slide having four packets, i.e., 6, 7, 8, 9.
Step 7: The current window is holding four packets, i.e., 6, 7, 8, 9, where the 6th packet is the
first packet in the window. As we know, the 6th packet has been lost, so the sender receives the
negative acknowledgment NAK(6). As we know that every 6th packet is being lost, so the
counter will be restarted from 1. So, the counter values 1, 2, 3 are given to the 7th packet, 8th
packet, 9th packet respectively.
Step 8: As it is Go-BACK, so it retransmits all the packets of the current window. It will resend
6, 7, 8, 9. The counter values of 6, 7, 8, 9 are 4, 5, 6, 1, respectively. In this case, the 8th packet is
lost as it has a 6-counter value, so the counter variable will again be restarted from 1.
Step 9: After the retransmission, the sender receives the acknowledgment of packet 6. On
receiving the acknowledgment of packet 6, the sender sends the 10th packet. Now, the current
window is holding four packets, i.e., 7, 8, 9, 10.
Step 10: When the 10th packet is sent, the sender receives the acknowledgment of packet 7. Now
the current window is holding three packets, 8, 9 and 10. The counter values of 8, 9, 10 are 6, 1,
2.
Step 11: As the 8th packet has 6 counter value which means that 8th packet has been lost, and the
sender receives NAK (8).
Step 12: Since the sender has received the negative acknowledgment for the 8th packet, it resends
all the packets of the current window, i.e., 8, 9, 10.
Step 13: The counter values of 8, 9, 10 are 3, 4, 5, respectively, so their acknowledgments have
been received successfully.
We conclude from the above steps that a total of 17 transmissions are required.
Selective Repeat Protocol (SRP) is a type of error control protocol we use in computer networks
to ensure the reliable delivery of data packets. Additionally, we use it in conjunction with the
Transmission Control Protocol (TCP) to ensure that the receiver receives data transmitted
over the network without errors.
In the SRP, the sender divides the data into packets and sends them to the receiver. Furthermore,
the receiver sends an acknowledgment (ACK) for each packet received successfully. If the
sender doesn’t receive an ACK for a particular packet, it retransmits only that packet instead of
the entire set of packets.
The SRP uses a window-based flow control mechanism to ensure the sender doesn’t overwhelm
the receiver with too many packets. Additionally, the sender and receiver maintain a window
of packets. Based on the window size, the sender sends packets and waits for a specific amount
of time for acknowledgment from the receiver.
The receiver, in turn, maintains a window of packets that contains the frame numbers it is
receiving from the sender. If a frame is lost during transmission, the receiver sends the sender a
negative acknowledgment indicating the frame number.
3. Steps
Now let’s discuss the steps involved in the SRP.
The first step is to divide data into packets. The sender divides the data into packets of a fixed
size. When the sender divides the data into packets, it assigns a unique sequence number to each
packet. The numbering of packets plays a crucial role in the SRP.
The next step is to send the packets to the receiver. The receiver receives the packets and sends
an acknowledgment (ACK) for each packet received successfully.
The sender and receiver maintain a window of packets indicating the number of frames we can
transmit or receive at a given time. Additionally, we determine the size of the window based on
the network conditions. As the sender sends packets, it updates its window to reflect the packets
that have been transmitted, and the ACKs received.
However, if the sender doesn’t receive an ACK for a particular packet within a certain timeout
period, it retransmits only that packet instead of the entire set of packets. The receiver only
accepts packets that are within its window. If the receiver receives a packet outside the window,
it discards the packet.
The receiver sends selective acknowledgments (SACKs) for packets received out of order
or lost. The sender processes the SACKs to determine which packets need to be retransmitted.
Finally, we continue this process until we successfully send the data packets or the number of
retransmissions exceeds a predetermined threshold.
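The steps above can be sketched in Python as follows; lossy() is a made-up helper modelling an unreliable link, and the window size is arbitrary. The point is simply that only unacknowledged frames inside the window are resent.

```python
import random

# A rough sketch of Selective Repeat retransmission: only frames whose
# acknowledgment is missing are resent; correctly received frames stay acknowledged.

def lossy(loss_rate=0.2):
    return random.random() >= loss_rate          # True = frame and ACK got through

def selective_repeat(num_frames, window=4):
    acked = [False] * num_frames
    transmissions = 0
    base = 0
    while base < num_frames:
        # (re)send every unacknowledged frame currently inside the window
        for seq in range(base, min(base + window, num_frames)):
            if not acked[seq]:
                transmissions += 1
                if lossy():
                    acked[seq] = True            # selective ACK received for this frame
        while base < num_frames and acked[base]:
            base += 1                            # slide past acknowledged frames
    return transmissions

print(selective_repeat(6))
```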
4. Example
Let’s see how we can transmit data using the SRP. We divide our sample data into 6 data packets
or frames:
Additionally, we’re assuming the window size for the receiver and sender is 2. Hence, we
transmit two frames and wait for the receiver to acknowledge the frames transmitted before
sending the next frames. In case of a missing or unacknowledged frame, we need to resend it
before proceeding with the next set of frames.
4.1. No Error in Transmission
Let’s start sending the packets using the SRP. We send the first two frames to the receiver and
wait for the acknowledgment:
As we can see, the receiver successfully received and acknowledged the first two data frames.
One crucial point is that when the sender sends a frame, it waits for a specific time to get a
response. In this case, we receive responses from the receiver within the waiting time of each
frame. Hence, we move on to the next 2 data frames.
4.2. Frame Is Lost
Let’s discuss a scenario when a frame is lost during the transmission:
Here, frame 2 is lost during the transmission. Hence, the sender waits for a specific amount of
time to get a response from the receiver. In this case, we received a negative acknowledgment for
frame 2. Therefore, we need to resend frame 2 before we proceed further:
4.3. Acknowledgment Is Lost
Let’s take a look at another situation when the acknowledgment of a frame is lost during
transmission:
In this case, the receiver successfully receives frames 4 and 5, but the acknowledgment of
frame 5 is lost. Hence, the sender waits for a specific amount of time in order to receive an
acknowledgment for frame 5. After the waiting time is over, the sender sends frame 5 again:
5. Advantages and Disadvantages
The SRP offers several advantages over other error control protocols, including efficient
retransmission, selective acknowledgments, reduced delay, and higher throughput.
The main difference with other error control protocols is that it only retransmits lost packets
rather than retransmitting the entire set of packets. As a result, the SRP reduces unnecessary
network traffic and improves efficiency.
In the SRP, the receiver sends selective acknowledgments (SACKs) for packets received out of
order or lost. This allows the sender to know exactly which packets need to be retransmitted.
Furthermore, the SRP can reduce delay since the receiver can immediately start processing the
received packets, even if some packets are still missing.
Finally, the SRP can achieve higher throughput compared to other protocols like Go-Back-N,
especially when the network has a high error rate or high bandwidth-delay product.
Despite its advantages, the SRP also has some limitations and disadvantages.
It’s more complex compared to other error control protocols. Therefore it requires more
processing power and memory resources.
Additionally, the SRP requires more overhead since it uses selective acknowledgments (SACKs)
to notify the sender about lost or out-of-order packets. As a result, it can increase network traffic.
Furthermore, it requires more buffering on both the sender and receiver sides to store the
packets that are not yet acknowledged. This can be a problem if the network has limited
buffering capacity.
Finally, the SRP can add delay at the receiver, since out-of-order packets must be buffered
until the missing packets arrive and the data can be delivered in order.
Sliding window protocols are data link layer protocols for reliable and sequential delivery of
data frames. The sliding window is also used in Transmission Control Protocol.
In this protocol, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver. The term sliding window refers to the imaginary boxes to
hold frames. Sliding window method is also known as windowing.
Working Principle
In these protocols, the sender has a buffer called the sending window and the receiver has a buffer
called the receiving window.
The size of the sending window determines the sequence numbers of the outbound frames. If the
sequence number of the frames is an n-bit field, then the range of sequence numbers that can be
assigned is 0 to 2^n − 1. Consequently, the size of the sending window is 2^n − 1. Thus, in order to
accommodate a sending window size of 2^n − 1, an n-bit sequence number is chosen.
The sequence numbers are numbered modulo 2^n. For example, if the sending window size is 4,
then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on. The number of bits in the
sequence number is 2, which generates the binary sequence 00, 01, 10, 11.
The size of the receiving window is the maximum number of frames that the receiver can accept
at a time. It determines the maximum number of frames that the sender can send before receiving
acknowledgment.
Example
Suppose that we have sender window and receiver window each of size 4. So the sequence
numbering of both the windows will be 0,1,2,3,0,1,2 and so on. The following diagram shows
the positions of the windows after sending the frames and receiving acknowledgments.
Types of Sliding Window Protocols
The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two categories −
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame. It uses the concept of sliding window, and so is also
called sliding window protocol. The frames are sequentially numbered and a finite
number of frames are sent. If the acknowledgment of a frame is not received within the
time period, all frames starting from that frame are retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgment for the first frame. However, here only the erroneous or lost frames are
retransmitted, while the good frames are received and buffered.
Multiple access protocol- ALOHA, CSMA, CSMA/CA and CSMA/CD
Data Link Layer
The data link layer is used in a computer network to transmit data between two devices or
nodes. It is divided into two sublayers: data link control and multiple access
resolution/protocol. The upper sublayer is responsible for flow control and error control
in the data link layer and is therefore called logical link control (LLC). The lower
sublayer is used to handle and reduce collisions caused by multiple access on a channel, and is
therefore called media access control (MAC), or multiple access resolution.
Data Link Control
A data link control is a reliable channel for transmitting data over a dedicated link using various
techniques such as framing, error control and flow control of data packets in the computer
network.
What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit data packets, the data link control is
enough to handle the channel. Suppose there is no dedicated path to communicate or transfer the
data between two devices. In that case, multiple stations access the channel and simultaneously
transmit data over it. This may create collisions and crosstalk. Hence, a multiple
access protocol is required to reduce collisions and avoid crosstalk between the channels.
For example, suppose that there is a classroom full of students. When a teacher asks a question,
all the students (small channels) in the class start answering the question at the same time
(transferring data simultaneously). All the students respond at the same time, due to which
the answers overlap or get lost. Therefore, it is the responsibility of the teacher (the multiple access
protocol) to manage the students and make them answer one at a time.
Following are the types of multiple access protocols, which are subdivided into different
categories:
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In a random
access protocol, no station depends on another station, and no station controls
another station. Depending on the channel's state (idle or busy), each station transmits its data
frame. However, if more than one station sends data over the channel at the same time, there may be a collision
or data conflict. Due to the collision, the data frame packets may be lost or changed, and hence
may not be received correctly at the receiver end.
Following are the different methods of random-access protocols for broadcasting frames on the
channel.
○ Aloha
○ CSMA
○ CSMA/CD
○ CSMA/CA
ALOHA Random Access Protocol
It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium
to transmit data. Using this method, any station can transmit data across the network
whenever a data frame is available for transmission.
Aloha Rules
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost during the transmission of data by multiple
stations.
4. Aloha relies on acknowledgment of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.
Pure Aloha
Whenever data is available for sending at a station, we use pure Aloha. In pure
Aloha, each station transmits data to the channel without checking whether the channel is
idle or not, so collisions may occur and data frames can be lost. When a station
transmits a data frame to the channel, it waits for the receiver's acknowledgment. If
the acknowledgment does not arrive within the specified time, the station assumes the frame has been
lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and then
retransmits the frame. This continues until all the data is successfully delivered
to the receiver.
1. The total vulnerable time of pure Aloha is 2 × Tfr.
2. Maximum throughput occurs when G = 1/2, and it is 18.4%.
3. The throughput (probability of successful transmission of a data frame) is S = G × e^(−2G).
As we can see in the figure above, there are four stations for accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver
end. At the same time, the other frames are lost or destroyed. Whenever two frames fall on the shared
channel simultaneously, a collision occurs and both frames suffer damage. Even if only the first bit
of a new frame overlaps with the last bit of a frame that has almost finished, both frames are
completely destroyed, and both stations must retransmit their data frames.
Slotted Aloha
The slotted Aloha is designed to overcome pure Aloha's low efficiency, because pure Aloha has a
very high possibility of frame collisions. In slotted Aloha, the shared channel is divided into fixed
time intervals called slots. If a station wants to send a frame on the shared channel, the
frame can only be sent at the beginning of a slot, and only one frame may be sent in
each slot. If a station misses the beginning of a slot, it must
wait until the beginning of the next slot. However, a collision can still occur when two or more
stations try to send a frame at the beginning of the same time slot.
1. Maximum throughput occurs in slotted Aloha when G = 1, and it is about 37% (36.8%).
2. The throughput (probability of successfully transmitting a data frame) in slotted Aloha is S = G × e^(−G).
3. The total vulnerable time required in slotted Aloha is Tfr.
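Using the throughput formulas quoted above, the following short Python snippet evaluates pure and slotted Aloha at their optimal offered loads.

```python
import math

# Throughput of pure and slotted ALOHA as a function of the offered load G,
# using the formulas quoted above: S_pure = G * e^(-2G), S_slotted = G * e^(-G).

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # ≈ 0.184  -> maximum ~18.4 % at G = 0.5
print(round(slotted_aloha(1.0), 3))  # ≈ 0.368  -> maximum ~36.8 % at G = 1
```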
CSMA/ CD
It is a carrier sense multiple access/ collision detection network protocol to transmit data
frames. The CSMA/CD protocol works with a medium access control layer. Therefore, it first
senses the shared channel before broadcasting the frames, and if the channel is idle, it transmits a
frame to check whether the transmission was successful. If the frame is successfully received, the
station sends another frame. If any collision is detected in the CSMA/CD, the station sends a
jam/ stop signal to the shared channel to terminate data transmission. After that, it waits for a
random time before sending a frame to a channel.
CSMA/ CA
It is a carrier sense multiple access/collision avoidance network protocol for the
transmission of data frames. It is a protocol that works with the medium access control layer. When
a data frame is sent to the channel, the sender listens to the channel to check whether the transmission is
clear. If the station receives only a single signal (its own), it means the data frame has
been successfully transmitted to the receiver. But if it gets two signals (its own and one more,
meaning frames have collided), a collision of frames has occurred on the shared channel. The sender
thus detects a collision through the signal or acknowledgment it receives.
Following are the methods used in the CSMA/ CA to avoid the collision:
Interframe space: In this method, the station waits for the channel to become idle. If it finds
the channel idle, it does not immediately send the data; instead, it waits for some time,
and this time period is called the interframe space, or IFS. The IFS time is often used
to define the priority of the station.
Contention window: In the contention window method, the total time is divided into slots.
When the station/sender is ready to transmit the data frame, it chooses a random number of
slots as its wait time. If the channel is still busy, it does not restart the entire process; it only
restarts the timer and sends the data packets when the channel becomes idle.
Acknowledgment: In the acknowledgment method, the sender station retransmits the data frame on
the shared channel if the acknowledgment is not received before its timer expires.
Almost all collisions can be avoided in CSMA/CD, but they can still occur during the contention
period. A collision during the contention period adversely affects system performance; this
happens when the cable is long and the packets are short. The problem became more serious as
fiber-optic networks came into use. Here we shall discuss some protocols that resolve the
collision during the contention period.
● Bit-map Protocol
● Binary Countdown
● Limited Contention Protocols
● The Adaptive Tree Walk Protocol
Pure and slotted Aloha, CSMA and CSMA/CD are Contention based Protocols:
● Try; if a collision occurs, retry
● No guarantee of performance
● What happens if the network load is high?
Collision Free Protocols:
● Pay constant overhead to achieve performance guarantee
● Good when network load is high
1. Bit-map Protocol:
The bit-map protocol is a collision-free protocol. In the bit-map method, each contention period
consists of exactly N slots. If a station has a frame to send, it transmits a 1 bit in the
corresponding slot. For example, if station 2 has a frame to send, it transmits a 1 bit in slot 2.
In general, station j announces the fact that it has a frame to send by inserting a 1 bit into slot
j. In this way, each station has complete knowledge of which stations wish to transmit. There
will never be any collisions because everyone agrees on who goes next. Protocols like this, in
which the desire to transmit is broadcast before the actual transmission, are called reservation
protocols.
2. Binary Countdown:
The binary countdown protocol is used to overcome the overhead of one bit per station in the
bit-map protocol. In binary countdown, binary station addresses are used. A station wanting to
use the channel broadcasts its address as a binary bit string, starting with the high-order bit.
All addresses are assumed to be of the same length. Here, we will see an example to illustrate
the working of binary countdown.
In this method, the address bits broadcast by the different stations are effectively ORed
together, which decides the priority of transmission. Suppose stations 0001, 1001, 1100 and 1011
are all trying to seize the channel for transmission. All the stations first broadcast their most
significant address bit, that is 0, 1, 1, 1 respectively. The most significant bits are combined.
Station 0001 sees a 1 in the MSB position of another station's address, knows that a
higher-numbered station is competing for the channel, and so gives up for the current round.
The other three stations, 1001, 1100 and 1011, continue. The next bit position in which a 1
appears belongs to station 1100, so stations 1011 and 1001 give up because their 2nd bit is 0.
Then station 1100 starts transmitting a frame, after which another bidding cycle starts.
Binary Countdown fig (1.2)
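The arbitration shown in fig (1.2) can be sketched in Python as below, using the same four station addresses; the function name is ours. At each bit position, any station that sent a 0 while a 1 appeared on the channel drops out, so the highest-numbered contender wins.

```python
# A sketch of binary countdown arbitration for the stations used above.
# All contenders broadcast their addresses bit by bit (MSB first); a station
# drops out as soon as it sends a 0 while some other station sends a 1.

def binary_countdown(addresses):
    contenders = list(addresses)                 # e.g. ["0001", "1001", "1100", "1011"]
    width = len(contenders[0])
    for bit in range(width):
        ones = [a for a in contenders if a[bit] == "1"]
        if ones:                                 # a 1 was seen on the channel,
            contenders = ones                    # so every station that sent 0 gives up
    return contenders[0]                         # highest-numbered station wins

print(binary_countdown(["0001", "1001", "1100", "1011"]))   # prints 1100
```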
3. Limited Contention Protocols:
● Collision based protocols (pure and slotted ALOHA, CSMA/CD) are good when the
network load is low.
● Collision free protocols (bitmap, binary Countdown) are good when load is high.
● How about combining their advantages?
1. Behave like the ALOHA scheme under light load
2. Behave like the bitmap scheme under heavy load.
4. Adaptive Tree Walk Protocol:
● Partition the group of stations and limit the contention for each slot.
● Under light load, every station can try for each slot, as in ALOHA.
● Under heavy load, only one group can try for each slot.
● How do we do it?
1. Treat every station as a leaf of a binary tree.
2. In the first slot (after a successful transmission), all stations
(under the root node) can try to get the slot.
3. If there is no conflict, fine.
4. Otherwise, in case of conflict, only the nodes under one subtree get to try for the next slot
(depth-first search).
Adaptive Tree Walk Protocol fig (1.3)
Slot-0 : C*, E*, F*, H* (all ready nodes under node 0 can try), conflict
Slot-1 : C* (all nodes under node 1 can try), C sends
Slot-2 : E*, F*, H* (all nodes under node 2 can try), conflict
Slot-3 : E*, F* (all nodes under node 5 can try), conflict
Slot-4 : E* (all nodes under E can try), E sends
Slot-5 : F* (all nodes under F can try), F sends
Slot-6 : H* (all nodes under node 6 can try), H sends.
Ethernet is a set of technologies and protocols that are used primarily in LANs. It was first
standardized in the 1980s by the IEEE 802.3 standard. IEEE 802.3 defines the physical layer and the
medium access control (MAC) sub-layer of the data link layer for wired Ethernet networks.
Ethernet is classified into two categories: classic Ethernet and switched Ethernet.
Classic Ethernet is the original form of Ethernet that provides data rates between 3 to 10 Mbps.
The varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum throughput, i.e. 10
Mbps, BASE denotes the use of baseband transmission, and X is the type of medium used. Most
varieties of classic Ethernet have become obsolete in the present communication scenario.
A switched Ethernet uses switches to connect to the stations in the LAN. It replaces the repeaters
used in classic Ethernet and allows full bandwidth utilization.
IEEE 802.3 Popular Versions
There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
IEEE 802.3: This was the original standard given for 10BASE-5. It used a thick single
coaxial cable into which a connection can be tapped by drilling into the cable to the core.
Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denotes the use of baseband
transmission, and 5 refers to the maximum segment length of 500 m.
IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner
variety where the segments of coaxial cables are connected by BNC connectors. The 2
refers to the maximum segment length of about 200m (185m to be precise).
IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses unshielded
twisted pair (UTP) copper wires as physical layer medium. The further variations were
given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and 100BASE-FX.
IEEE 802.3j: This gave the standard for Ethernet over fiber (10BASE-F), which uses fiber
optic cables as the medium of transmission.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies
according to the type of frame. The fields of an HDLC frame are −
Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
Address − It contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary station. The address field may be
from 1 byte to several bytes.
Control − It is 1 or 2 bytes containing flow and error control information.
Payload − This carries the data from the network layer. Its length may vary from one
network to another.
FCS − It is a 2-byte or 4-byte frame check sequence used for error detection. The standard
code used is CRC (cyclic redundancy code).
Process of Switching
The switching process involves the following steps:
● Frame Reception: The switch receives a data frame or packet from a computer
connected to its ports.
● MAC Address Extraction: The switch reads the header of the data frame and
collects the destination MAC Address from it.
● MAC Address Table Lookup: Once the switch has retrieved the MAC Address, it
performs a lookup in its Switching table to find a port that leads to the MAC Address
of the data frame.
● Forwarding Decision and Switching Table Update: If the switch matches the
destination MAC Address of the frame to the MAC address in its switching table, it
forwards the data frame to the respective port. However, if the destination MAC
Address does not exist in its forwarding table, it follows the flooding process, in
which it sends the data frame to all its ports except the one it came from and records
all the MAC Addresses to which the frame was delivered. This way, the switch finds
the new MAC Address and updates its forwarding table.
● Frame Transition: Once the destination port is found, the switch sends the data
frame to that port and forwards it to its target computer/network.
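A rough Python sketch of this learn-then-forward behaviour is shown below; the class name and port numbering are illustrative only, not any particular switch's implementation.

```python
# A simplified learning switch: it records the source MAC of every incoming
# frame against the port it arrived on, then forwards by table lookup,
# flooding when the destination MAC is still unknown.

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}                      # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port        # learn / refresh the source entry
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # forward out of a single port
        # unknown destination: flood out of every port except the ingress one
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame("AA:AA", "BB:BB", in_port=1))   # flood: [0, 2, 3]
print(sw.handle_frame("BB:BB", "AA:AA", in_port=2))   # lookup hit: [1]
```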
Types of Switching
There are three types of switching methods:
● Message Switching
● Circuit Switching
● Packet Switching
○ Datagram Packet Switching
○ Virtual Circuit Packet Switching
Tunneling
The task is to send an IP packet from host A on Ethernet-1 to host B on Ethernet-2 via a WAN.
Steps
● Host A constructs a packet that contains the IP address of Host B.
● It then inserts this IP packet into an Ethernet frame and this frame is addressed to the
multiprotocol router M1
● Host A then puts this frame on Ethernet.
● When M1 receives this frame, it removes the IP packet, inserts it into the payload
field of the WAN network layer packet, and addresses the WAN packet to M2. The
multiprotocol router M2 removes the IP packet and sends it to host B in an Ethernet
frame.
How Does Encapsulation Work?
Data travels from one place to another in the form of packets. A packet has two parts: the first
is the header, which contains the destination address and the protocol in use, and the second is
its contents (the payload).
In simple terms, encapsulation is the process of placing one packet inside another packet, i.e., a
packet inside a packet. In an encapsulated packet, the entire original packet, including its header,
becomes the payload section of the surrounding packet.
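As a toy illustration (the field names and values are invented), the following Python sketch shows one packet becoming the payload of another, and the outer header being stripped again at the tunnel exit.

```python
# A toy illustration of encapsulation: the whole original packet (header plus
# contents) becomes the payload of a new, outer packet with its own header.

def make_packet(header, payload):
    return {"header": header, "payload": payload}

inner = make_packet({"src": "A", "dst": "B", "proto": "IP"}, "application data")

# The tunnel entry point wraps the inner packet inside an outer (delivery) packet
outer = make_packet({"src": "M1", "dst": "M2", "proto": "WAN"}, inner)

# The tunnel exit point simply strips the outer header to recover the original packet
recovered = outer["payload"]
assert recovered == inner
```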
Why is this Technique Called Tunneling?
In this particular example, the IP packet does not have to deal with the WAN, and hosts A and B
also do not have to deal with the WAN. The multiprotocol routers M1 and M2 will have to
understand IP and WAN packets. Therefore, the WAN can be imagined to be equivalent to a big
tunnel extending between multiprotocol routers M1 and M2 and the technique is called
Tunneling.
Types of Tunneling Protocols
1. Generic Routing Encapsulation
2. Internet Protocol Security
3. Ip-in-IP
4. SSH
5. Point-to-Point Tunneling Protocol
6. Secure Socket Tunneling Protocol
7. Layer 2 Tunneling Protocol
8. Virtual Extensible Local Area Network
1. Generic Routing Encapsulation (GRE)
Generic Routing Encapsulation is a method of encapsulation of IP packets in a GRE header that
hides the original IP packet. Also, a new header named delivery header is added above the GRE
header which contains the new source and destination address.
The GRE header acts as a new header, with the delivery header containing the new source and
destination addresses. Only the routers between which GRE is configured can encapsulate and
decapsulate the GRE header. The original IP packet enters one GRE-configured router, travels in
encapsulated form, and emerges out of another GRE-configured router as the original IP packet,
as if it had traveled through a tunnel. Hence, this process is called GRE tunneling.
2. Internet Protocol Security (IPsec)
IP security (IPSec) is an Internet Engineering Task Force (IETF) standard suite of protocols
between 2 communication points across the IP network that provide data authentication,
integrity, and confidentiality. It also defines the encrypted, decrypted, and authenticated packets.
The protocols needed for secure key exchange and key management are defined in it.
3. IP-in-IP
IP-in-IP is a Tunneling Protocol for encapsulating IP packets inside another IP packet.
4. Secure Shell (SSH)
SSH (Secure Shell) is a cryptographic network protocol used for transferring encrypted data
over the network. It allows you to connect to a server, or multiple servers, and to log in
remotely from one system to another without having to remember or enter your password for
each system.
5. Point-to-Point Tunneling Protocol (PPTP)
PPTP or Point-to-Point Tunneling Protocol generates a tunnel and confines the data packet.
Point-to-Point Protocol (PPP) is used to encrypt the data between the connection. PPTP is one of
the most widely used VPN protocols and has been in use since the early release of Windows.
PPTP is also used on Mac and Linux apart from Windows.
● Since there are 16 bits for the total length field in the IP header, the maximum size of an IP
datagram = 2^16 – 1 = 65,535 bytes.
● Fragmentation is done by the network layer, usually at routers along the path, while reassembly is done only at the destination.
● Source side does not require fragmentation due to wise (good) segmentation by
transport layer i.e. instead of doing segmentation at the transport layer and
fragmentation at the network layer, the transport layer looks at datagram data limit
and frame data limit and does segmentation in such a way that resulting data can
easily fit in a frame without the need of fragmentation.
● The receiver identifies the datagram using the identification (16 bits) field in the IP header.
Each fragment of a datagram has the same identification number.
● The receiver identifies the sequence of fragments using the fragment offset (13 bits) field in
the IP header.
● Overhead at the network layer is present due to the extra header introduced due to
fragmentation.
The need for fragmentation at the Network Layer:
Fragmentation at the Network Layer is a process of dividing a large data packet into smaller
pieces, known as fragments, to improve the efficiency of data transmission over a network. The
need for fragmentation at the network layer arises from several factors:
1.Maximum Transmission Unit (MTU): Different networks have different Maximum
Transmission Unit (MTU) sizes, which determine the maximum size of a data packet that can be
transmitted over that network. If the size of a data packet exceeds the MTU, it needs to be
fragmented into smaller fragments that can be transmitted over the network.
2.Network Performance: Large data packets can consume a significant amount of network
resources and can cause congestion in the network. Fragmentation helps to reduce the impact of
large data packets on network performance by breaking them down into smaller fragments that
can be transmitted more efficiently.
3.Bandwidth Utilization: Large data packets may consume a significant amount of network
bandwidth, causing other network traffic to be slowed down. Fragmentation helps to reduce the
impact of large data packets on network bandwidth utilization by breaking them down into
smaller fragments that can be transmitted more efficiently.
Fragmentation at the network layer is necessary in order to ensure efficient and reliable
transmission of data over communication networks.
1.Large Packet Size: In some cases, the size of the packet to be transmitted may be too large for
the underlying communication network to handle. Fragmentation at the network layer allows the
large packet to be divided into smaller fragments that can be transmitted over the network.
2.Path MTU: The Maximum Transmission Unit (MTU) of a network defines the largest packet
size that can be transmitted over the network. Fragmentation at the network layer allows the
packet to be divided into smaller fragments that can be transmitted over networks with different
MTU values.
3.Reliable Transmission: Fragmentation at the network layer increases the reliability of data
transmission, as smaller fragments are less likely to be lost or corrupted during transmission.
Fields in IP header for fragmentation –
● Identification (16 bits) – used to identify fragments of the same datagram.
● Fragment offset (13 bits) – used to identify the position of a fragment within the original
datagram. It indicates the number of data bytes preceding (ahead of) this fragment.
Maximum fragment offset possible = (65535 – 20) = 65515
{where 65535 is the maximum size of the datagram and 20 is the minimum size of the IP
header}
So, we need ceil(log2(65515)) = 16 bits for the fragment offset, but the fragment offset
field has only 13 bits. So, to represent it efficiently, we scale down the fragment
offset by 2^16 / 2^13 = 8, which acts as a scaling factor. Hence, all fragments except
the last fragment should carry data in multiples of 8 bytes so that the fragment offset is a
whole number (a small sketch of this follows after this list).
● More fragments (MF = 1 bit) – tells if more fragments are ahead of this fragment
i.e. if MF = 1, more fragments are ahead of this fragment and if MF = 0, it is the last
fragment.
● Don’t fragment (DF = 1 bit) – if we don’t want the packet to be fragmented then DF
is set i.e. DF = 1.
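A small Python sketch of how these fields might be filled in is shown below, assuming a 20-byte header and a 4000-byte datagram sent over a link with a 1500-byte MTU (a commonly used textbook example); the helper name is ours.

```python
# A sketch of how a router would fragment a datagram, using the fields above.
# Offsets are stored in units of 8 bytes, so every fragment except the last
# carries a multiple of 8 data bytes.

def fragment(total_length, mtu, header=20):
    data_left = total_length - header
    per_fragment = ((mtu - header) // 8) * 8      # largest multiple of 8 that fits
    offset_bytes = 0
    fragments = []
    while data_left > 0:
        size = min(per_fragment, data_left)
        data_left -= size
        mf = 1 if data_left > 0 else 0            # more-fragments flag
        fragments.append({"offset": offset_bytes // 8, "data": size, "MF": mf})
        offset_bytes += size
    return fragments

# A 4000-byte datagram sent over a link with MTU 1500:
for frag in fragment(4000, 1500):
    print(frag)
# {'offset': 0,   'data': 1480, 'MF': 1}
# {'offset': 185, 'data': 1480, 'MF': 1}
# {'offset': 370, 'data': 1020, 'MF': 0}
```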
Reassembly of Fragments –
It takes place only at the destination and not at routers, since packets take independent
paths (datagram packet switching), so all fragments may not meet at the same router; moreover,
the need for fragmentation may arise again further along the path. The fragments may also arrive
out of order.
Algorithm –
1. Destination should identify that datagram is fragmented from MF, Fragment offset
field.
2. Destination should identify all fragments belonging to same datagram from
Identification field.
3. Identify the 1st fragment(offset = 0).
4. Identify subsequent fragments using header length, fragment offset.
5. Repeat until MF = 0.
Efficiency –
Efficiency (e) = useful/total = (Data without header)/(Data with header)
Differences between IPv4 and IPv6:
Address space – IPv4 can generate about 4.29 × 10^9 addresses, whereas the address space of IPv6 is quite large and can produce about 3.4 × 10^38 addresses.
Address representation – The address representation of IPv4 is in decimal, whereas the address representation of IPv6 is in hexadecimal.
Fragmentation – In IPv4, fragmentation is performed by the sender and by forwarding routers; in IPv6, fragmentation is performed only by the sender.
Packet flow identification – In IPv4, packet flow identification is not available; in IPv6, it is available and uses the flow label field in the header.
VLSM – IPv4 supports VLSM (Variable Length Subnet Mask); IPv6 does not support VLSM.
Note: MAC address: The MAC address is used to identify the actual device.
○ If the ARP cache is empty, then the device broadcasts a message to the entire network asking
each device for a matching MAC address.
○ The device that has the matching IP address will then respond back to the sender with its
MAC address.
○ Once the MAC address is received by the device, the communication can take place
between the two devices.
○ When the device receives the MAC address, it gets stored in the ARP
cache. We can check the ARP cache at the command prompt by using the command arp -a.
The output of this command shows the association of IP addresses with MAC addresses.
There are two types of ARP entries:
○ Dynamic entry: It is an entry which is created automatically when the sender broadcast
its message to the entire network. Dynamic entries are not permanent, and they are
removed periodically.
○ Static entry: It is an entry where someone manually enters the IP to MAC address
association by using the ARP command utility.
RARP
○ RARP stands for Reverse Address Resolution Protocol.
○ If a host wants to know its IP address, it broadcasts a RARP query packet that
contains its physical address to the entire network. A RARP server on the network
recognizes the RARP packet and responds with the host's IP address.
○ The protocol used to obtain an IP address from a server is known as Reverse
Address Resolution Protocol.
○ The message format of the RARP protocol is similar to that of the ARP protocol.
○ Like an ARP frame, a RARP frame is sent from one machine to another encapsulated in the
data portion of a frame.
ICMP
○ ICMP stands for Internet Control Message Protocol.
○ The ICMP is a network layer protocol used by hosts and routers to send the notifications
of IP datagram problems back to the sender.
○ ICMP uses echo test/reply to check whether the destination is reachable and responding.
○ ICMP handles both control and error messages, but its main function is to report errors,
not to correct them.
○ An IP datagram contains the addresses of both the source and the destination, but it does not
know the address of the previous router through which it has passed. For this
reason, ICMP can only send its messages to the source, not to the intermediate routers.
○ The ICMP protocol communicates error messages to the sender; ICMP messages cause
errors to be returned to the user processes.
○ ICMP messages are transmitted within IP datagrams.
The Format of an ICMP message
IGMP
○ IGMP stands for Internet Group Message Protocol.
○ The IP protocol supports two types of communication:
○ Unicasting: It is a communication between one sender and one receiver.
Therefore, we can say that it is one-to-one communication.
○ Multicasting: Sometimes the sender wants to send the same message to a large
number of receivers simultaneously. This process is known as multicasting which
has one-to-many communication.
○ The IGMP protocol is used by the hosts and router to support multicasting.
○ The IGMP protocol is used by the hosts and router to identify the hosts in a LAN that are
the members of a group.
○ IGMP is a part of the IP layer, and IGMP has a fixed-size message.
○ The IGMP message is encapsulated within an IP datagram.
Where,
Type: It determines the type of IGMP message. There are three types of IGMP message:
Membership Query, Membership Report and Leave Report.
Maximum Response Time: This field is used only by the Membership Query message. It
determines the maximum time within which the host can send a Membership Report message in
response to the Membership Query message.
Checksum: It is used for error detection and is calculated over the entire payload of the IP
datagram in which the IGMP message is encapsulated (i.e., the whole IGMP message).
Group Address: The behavior of this field depends on the type of the message sent.
○ For Membership Query, the group address is set to zero for General Query and set to
multicast group address for a specific query.
○ For Membership Report, the group address is set to the multicast group address.
○ For Leave Group, it is set to the multicast group address.
IGMP Messages
● Inside global address – IP address that represents one or more inside local IP
addresses to the outside world. This is the inside host as seen from the outside
network.
● Outside local address – This is the actual IP address of the destination host in the
local network after translation.
● Outside global address – This is the outside host as seen from the outside network. It
is the IP address of the outside destination host before translation.
3. Port Address Translation (PAT) – This is also known as NAT overload. In this,
many local (private) IP addresses can be translated to a single registered IP address.
Port numbers are used to distinguish the traffic i.e., which traffic belongs to which IP
address. This is most frequently used as it is cost-effective as thousands of users can
be connected to the Internet by using only one real global (public) IP address.
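A rough sketch of a PAT translation table in Python is given below; the IP addresses and port numbers are made-up examples, and real NAT devices track much more state (protocol, timeouts, inbound mappings) than this.

```python
# A rough sketch of Port Address Translation: many private (inside local)
# address:port pairs are mapped onto one public IP with distinct port numbers.

class PATRouter:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}                           # (private_ip, private_port) -> public_port

    def translate_outgoing(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port      # allocate a fresh public port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = PATRouter("203.0.113.5")
print(nat.translate_outgoing("192.168.1.10", 5000))   # ('203.0.113.5', 40000)
print(nat.translate_outgoing("192.168.1.11", 5000))   # ('203.0.113.5', 40001)
```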
Centralized algorithm − In centralized routing, one centralized node has the total
network information and takes the routing decisions. It finds the least-cost path between
source and destination nodes by using global knowledge about the network. So, it is also
known as global routing algorithm. The advantage of this routing is that only the central
node is required to store network information and so the resource requirement of the
other nodes may be less. However, routing performance is heavily dependent upon the
central node. An example of centralized routing is the link state routing algorithm.
Isolated algorithm − In this algorithm, the nodes make the routing decisions based upon
local information available to them instead of gathering information from other nodes.
They do not have information regarding the link status. While this helps in fast decision
making, the nodes may transmit data packets along a congested route, resulting in delay.
The examples of isolated routing are hot potato routing and backward learning.
Distributed algorithm − This is a decentralized algorithm where each node receives
information from its neighbouring nodes and takes the decision based upon the received
information. The least-cost path between source and destination is computed iteratively in
a distributed manner. An advantage is that each node can dynamically change routing
decisions based upon the changes in the network. However, on the flip side, delays may
be introduced due to time required to gather information. Example of distributed
algorithm is distance vector routing algorithm.
Non-adaptive routing algorithms, also known as static routing algorithms, do not change the
selected routing decisions for transferring data packets from the source to the destination. They
construct a static routing table in advance to determine the path through which packets are to be
sent.
The static routing table is constructed based upon the routing information stored in the routers
when the network is booted up. Once the static paths are available to all the routers, they transmit
the data packets along these paths. The changing network topology and traffic conditions do not
affect the routing decisions.
Types of Non − adaptive Routing Algorithms
Flooding − In flooding, when a data packet arrives at a router, it is sent to all the
outgoing links except the one it has arrived on. Flooding may be of three types−
Uncontrolled flooding − Here, each router unconditionally transmits the
incoming data packets to all its neighbours.
Controlled flooding − They use some methods to control the transmission of
packets to the neighbouring nodes. The two popular algorithms for controlled
flooding are Sequence Number Controlled Flooding (SNCF) and Reverse Path
Forwarding (RPF).
Selective flooding − Here, the routers transmit the incoming packets only
along those paths which are heading approximately in the right direction,
instead of along every available path.
Random walks (RW) − This is a probabilistic algorithm where a data packet is sent by a
router to any one of its neighbours randomly. The transmission path thereby formed is a
random walk. RW can explore alternative routes very efficiently. RW is very simple
to implement, requires a small memory footprint, does not need topology information of the
network, and has an inherent load balancing property. RW is suitable for very small devices
and for dynamic networks.
Unit-4
Transport Layer: Transport Services, Connection Management using three-way handshake
principle, User Datagram Protocol (UDP), Transmission Control Protocol (TCP), SCTP,
Congestion Control Policies, QoS Techniques: Leaky Bucket and Token Bucket algorithm.
1) Transport layer services(notes)
2) Three way handshake process(notes)
3)User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet
Protocol suite, referred to as UDP/IP suite. Unlike TCP, it is an unreliable and connectionless
protocol. So, there is no need to establish a connection before data transfer. The UDP helps to
establish low-latency and loss-tolerating connections over the network. The UDP enables
process-to-process communication.
What is User Datagram Protocol?
User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite. It
is a communication protocol used across the internet for time-sensitive transmissions such as
video playback or DNS lookups. Unlike Transmission Control Protocol (TCP), UDP is
connectionless and does not guarantee delivery, order, or error checking, making it a lightweight
and efficient option for certain types of data transmission.
UDP Header
UDP header is an 8-byte fixed and simple header, while for TCP it may vary from 20 bytes to 60
bytes. The first 8 Bytes contain all necessary header information and the remaining part consists
of data. UDP port number fields are each 16 bits long, therefore the range for port numbers is
defined from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different
user requests or processes.
● Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
● Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
● Length: Length is the length of UDP including the header and the data. It is a 16-bits
field.
● Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from the
IP header, and the data, padded with zero octets at the end (if necessary) to make a
multiple of two octets.
Notes – Unlike TCP, the checksum calculation is not mandatory in UDP. No error control or
flow control is provided by UDP; hence UDP depends on IP and ICMP for error reporting. UDP
does, however, provide port numbers so that it can differentiate between the requests of different users.
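As a quick illustration of the 8-byte header layout described above, the following sketch packs and unpacks the four 16-bit fields with Python's struct module. The port numbers and payload are arbitrary example values, and the checksum is left at 0 (which in IPv4 means "no checksum computed"); the real pseudo-header checksum calculation is omitted.

import struct

payload = b"hello"
src_port, dst_port = 5000, 53        # arbitrary example ports
length = 8 + len(payload)            # length field covers header plus data
checksum = 0                         # 0 = checksum not computed (allowed in IPv4)

# "!" = network (big-endian) byte order; "H" = unsigned 16-bit field.
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

# Unpacking the first 8 bytes recovers the four header fields.
s, d, ln, c = struct.unpack("!HHHH", datagram[:8])
print(s, d, ln, c, datagram[8:])      # 5000 53 13 0 b'hello'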
Applications of UDP
● Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
● It is a suitable protocol for multicasting, since UDP is connectionless and adds very
little per-packet overhead.
● UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
● Normally used for real-time applications which can not tolerate uneven delays
between sections of a received message.
● VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use UDP
for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.
● DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
● DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the delay
caused by packet loss or retransmission is generally not critical for this application.
● The following protocols use UDP as their transport layer protocol:
○ NTP (Network Time Protocol)
○ DNS (Domain Name Service)
○ BOOTP, DHCP.
○ NNP (Network News Protocol)
○ Quote of the day protocol
○ TFTP, RTSP, RIP.
● The application layer can do some of the tasks through UDP-
○ Trace Route
○ Record Route
○ Timestamp
● UDP simply attaches its small header to the message from the application and hands the
datagram to the network layer (and, on receipt, strips the header and delivers the data to
the user process). Because it does so little, it works fast.
TCP vs UDP
● Type of Service: TCP is a connection-oriented protocol. Connection orientation means that the
communicating devices should establish a connection before transmitting data and should close
the connection after transmitting the data. UDP is a datagram-oriented protocol: there is no
overhead for opening a connection, maintaining a connection, or terminating a connection, which
makes UDP efficient for broadcast and multicast types of network transmission.
● Reliability: TCP is reliable as it guarantees the delivery of data to the destination. The delivery
of data to the destination cannot be guaranteed in UDP.
● Acknowledgment: An acknowledgment segment is present in TCP. UDP has no acknowledgment
segment.
● Sequence: Sequencing of data is a feature of TCP; this means that packets arrive in order at the
receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by
the application layer.
● Speed: TCP is comparatively slower than UDP. UDP is faster, simpler, and more efficient than TCP.
● Retransmission: Retransmission of lost packets is possible in TCP, but not in UDP.
● Header Length: TCP has a variable-length (20-60 bytes) header. UDP has an 8-byte fixed-length
header.
● Weight: TCP is heavy-weight. UDP is lightweight.
● Handshaking Techniques: TCP uses handshakes such as SYN, ACK, SYN-ACK. UDP is a
connectionless protocol, i.e., no handshake.
● Broadcasting: TCP doesn't support broadcasting. UDP supports broadcasting.
● Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet. UDP is used by DNS, DHCP,
TFTP, SNMP, RIP, and VoIP.
● Stream Type: The TCP connection is a byte stream. The UDP connection is a message stream.
● Applications: TCP is primarily used in situations where a safe and trustworthy communication
procedure is necessary, such as email, web surfing, and military services. UDP is used in
situations where quick communication is necessary but dependability is not a concern, such as
VoIP, game streaming, and video and music streaming.
Advantages of UDP
● Speed: UDP is faster than TCP because it does not have the overhead of establishing
a connection and ensuring reliable data delivery.
● Lower latency: Since there is no connection establishment, there is lower latency and
faster response time.
● Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
● Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
● Smaller packet size: UDP uses smaller packet sizes than TCP, which can reduce
network congestion and improve overall network performance.
● User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
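The following minimal sketch shows UDP's connectionless, process-to-process communication with Python sockets on the loopback interface; the port number 9999 is an arbitrary choice for the example, and there is no handshake, acknowledgment, or retransmission.

import socket

# "Server": bind to a port and wait for datagrams; no listen()/accept() is needed.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

# "Client": send a datagram directly; UDP never establishes a connection.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9999))

data, addr = server.recvfrom(1024)    # payload plus the sender's (IP, port)
server.sendto(b"pong", addr)          # reply goes straight back, still connectionless
print(data, client.recvfrom(1024)[0])

client.close()
server.close()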
4)TCP
TCP stands for Transmission Control Protocol. The TCP protocol provides transport layer services to
applications. TCP is a connection-oriented protocol: a reliable connection is established between
the sender and the receiver before data transfer. To create this connection, a virtual circuit is set
up between the sender and the receiver. The data transmitted by the TCP protocol is a continuous
byte stream, and a unique sequence number is assigned to each byte. With the help of these
sequence numbers, a positive acknowledgment is received from the receiver. If the acknowledgment
is not received within a specific period, the data is retransmitted to the specified destination.
TCP Segment
A TCP segment's header is 20 to 60 bytes long. The header is 20 bytes by default (the fixed part),
and options may take up to a further 40 bytes.
● Source Port Address: The port address of the programme sending the data segment
is stored in the 16-bit field known as the source port address.
● Destination Port Address: The port address of the application running on the host
receiving the data segment is stored in the destination port address, a 16-bit field.
● Sequence Number: The sequence number, or the byte number of the first byte sent in
that specific segment, is stored in a 32-bit field. At the receiving end, it is used to put
the message back together once it has been received out of sequence.
● Acknowledgement Number : The acknowledgement number, or the byte number
that the recipient anticipates receiving next, is stored in a 32-bit field called the
acknowledgement number. It serves as a confirmation that the earlier bytes were
successfully received.
● Header Length (HLEN): This 4-bit field stores the number of 4-byte words in the
TCP header, indicating how long the header is. For example, if the header is 20 bytes
(the minimum length of the TCP header), this field will store 5 because 5 x 4 = 20,
and if the header is 60 bytes (the maximum length), it will store 15 because 15 x 4 =
60. As a result, this field’s value is always between 5 and 15.
● Control flags: These are six 1-bit control bits that regulate connection establishment,
termination, abortion, flow control, and the mode of transfer. They serve the
following purposes:
○ URG: The urgent pointer field is valid.
○ ACK: The acknowledgement number (used in cumulative
acknowledgement cases) is valid.
○ PSH: Push request.
○ RST: Reset the connection.
○ SYN: Synchronise sequence numbers to initiate a connection.
○ FIN: Terminate the connection.
● Window size: This 16-bit field gives the window size of the sending TCP in bytes.
● Checksum: The checksum for error control is stored in this field. Unlike UDP, it is
required for TCP.
● Urgent pointer: This field points to data that must reach the receiving process as
soon as possible. It is only valid if the URG control flag is set. The value of this field
is added to the sequence number to obtain the byte number of the last urgent byte.
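To make the HLEN arithmetic above concrete, here is a small hedged sketch that builds a bare 20-byte TCP header with struct (all field values are invented for the example, the checksum is left unset, and options are omitted) and then reads the data-offset nibble back out as a byte count.

import struct

src, dst = 443, 50000          # example port numbers
seq, ack = 1000, 2000          # example sequence/acknowledgement numbers
hlen_words = 5                 # 5 x 4 = 20 bytes, the minimum header length
flags = 0x18                   # PSH + ACK control bits
offset_and_flags = (hlen_words << 12) | flags
window, checksum, urg_ptr = 65535, 0, 0

header = struct.pack("!HHIIHHHH", src, dst, seq, ack,
                     offset_and_flags, window, checksum, urg_ptr)

fields = struct.unpack("!HHIIHHHH", header)
hlen_bytes = (fields[4] >> 12) * 4     # HLEN counts 4-byte words
print(len(header), hlen_bytes)         # 20 20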
Advantages of TCP
● TCP supports multiple routing protocols.
● TCP protocol operates independently of the operating system.
● TCP protocol provides the features of error control and flow control.
● TCP is connection-oriented and guarantees the delivery of data.
Disadvantages of TCP
● TCP protocol cannot be used for broadcast or multicast transmission.
● TCP protocol has no block boundaries.
● No clear separation is offered by the TCP protocol between its interface, services,
and protocols.
● In the TCP/IP suite, replacing one protocol with another is difficult.
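As a hedged, self-contained sketch of TCP's connection-oriented byte stream, the code below runs a tiny client and server in one process on the loopback interface (port 8888 is arbitrary for the example). connect()/accept() carry out the three-way handshake, and the data then flows as an ordered, acknowledged stream.

import socket

# Server side: create a listening socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen(1)

# Client side: connect() triggers the SYN / SYN-ACK / ACK handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))
conn, addr = server.accept()            # the connection is now established

client.sendall(b"hello over a reliable byte stream")
print(conn.recv(1024))                  # bytes arrive in order (or are retransmitted)
conn.sendall(b"reply from server")
print(client.recv(1024))

client.close()                          # closing triggers the FIN exchange
conn.close()
server.close()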
5) SCTP
SCTP stands for Stream Control Transmission Protocol. SCTP is a connection-oriented protocol.
Stream Control Transmission Protocol transmits the data from sender to receiver in full duplex
mode. SCTP is a unicast protocol that provides a point-to-point connection and uses different
hosts for reaching the destination. SCTP protocol provides a simpler way to build a connection
over a wireless network. SCTP protocol provides a reliable transmission of data. SCTP provides
a reliable and easier telephone conversation over the internet. SCTP protocol supports the feature
of multihoming, i.e., it can establish more than one connection path between the two points of
communication, so an association does not depend on a single IP address. SCTP protocol also
ensures security by not allowing half-open connections.
Advantages of SCTP
● SCTP provides a full duplex connection. It can send and receive the data
simultaneously.
● SCTP protocol possesses the properties of both TCP and UDP protocol.
● SCTP protocol does not depend on the IP layer.
● SCTP is a secure protocol.
Disadvantages of SCTP
● To handle multiple streams simultaneously the applications need to be modified
accordingly.
● The transport stack on the node needs to be changed for the SCTP protocol.
● Modification is required in applications if SCTP is used instead of TCP or UDP
protocol.
6) QoS Techniques:
When too many packets are present in the network, packets get delayed and lost, which
degrades the performance of the system. This situation is called congestion.
The network layer and transport layer share the responsibility for handling congestion. One
of the most effective ways to control congestion is to reduce the load that the transport layer is
placing on the network. To achieve this, the network and transport layers have to work together.
With too much traffic, performance drops sharply.
There are two types of Congestion control algorithms, which are as follows −
Leaky Bucket Algorithm
Token Bucket Algorithm
Leaky Bucket Algorithm
Let us see how the Leaky Bucket Algorithm works −
Leaky Bucket Algorithm mainly controls the total amount and the rate of the traffic sent to the
network.
Step 1 − Let us imagine a bucket with a small hole at the bottom where the rate at which water is
poured into the bucket is not constant and can vary but it leaks from the bucket at a constant rate.
Step 2 − So, as long as water is present in the bucket, the rate at which the water leaks does not
depend on the rate at which the water is poured into the bucket.
Step 3 − If the bucket is full, any additional water entering the bucket spills over the sides
and is lost.
Step 4 − The same concept is applied to packets in the network. Consider that data is coming
from the source at variable rates. Suppose that a source sends data at 12 Mbps for 4 seconds,
then sends nothing for 3 seconds, and then transmits at 10 Mbps for 2 seconds. Thus, in a time
span of 9 seconds, 68 Mb of data has been transmitted.
If a leaky bucket algorithm is used, the same 68 Mb leave the bucket smoothly, at a nearly
constant rate (roughly 7.5 Mbps over those 9 seconds). Thus, a constant flow is maintained.
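The arrival pattern above can be fed into a small leaky-bucket simulation; the sketch below is illustrative only (one-second ticks, a bucket capacity of 30 Mb, and an output rate of 8 Mb per tick are assumed values, not taken from the notes).

def leaky_bucket(arrivals_mb, capacity_mb, out_rate_mb):
    """Simulate per-second ticks; return the amount sent each tick and the total dropped."""
    level = 0.0                     # how much data is currently buffered in the bucket
    sent, dropped = [], 0.0
    for arriving in arrivals_mb:
        level += arriving
        if level > capacity_mb:     # bucket overflows: the excess is lost
            dropped += level - capacity_mb
            level = capacity_mb
        out = min(level, out_rate_mb)   # the leak rate is constant
        level -= out
        sent.append(out)
    return sent, dropped

# 12 Mb/s for 4 s, silence for 3 s, 10 Mb/s for 2 s (the example above).
arrivals = [12, 12, 12, 12, 0, 0, 0, 10, 10]
sent, dropped = leaky_bucket(arrivals, capacity_mb=30, out_rate_mb=8)
print(sent)      # each tick's output is at most 8 Mb, so the flow is smoothed
print(dropped)   # 0.0 here; a smaller bucket would drop part of the burst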
Token Bucket Algorithm
The token bucket algorithm is another technique for congestion control (traffic shaping). Tokens are
added to the bucket at a constant rate, and a packet can be transmitted only when a token is available;
sending a packet consumes a token. A host that has been idle can accumulate tokens up to the bucket
capacity and later send a burst, unlike the leaky bucket, whose output rate is always constant.
The Token Bucket Algorithm is usually shown in a diagram alongside the leaky bucket: if the token
bucket is full, the arriving token is discarded but not the packet, whereas if the leaky bucket is full,
packets are discarded.
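For comparison, here is an equally small token-bucket sketch; the rate, capacity, and packet sizes are invented for the illustration. Tokens accumulate while the host is idle, so an initial burst is allowed, after which transmission is throttled to the token rate.

class TokenBucket:
    def __init__(self, rate_per_tick, capacity):
        self.rate = rate_per_tick
        self.capacity = capacity
        self.tokens = capacity           # start with a full bucket of tokens

    def tick(self):
        # Tokens arrive at a fixed rate; extra tokens (not packets) are discarded.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, packet_size):
        if self.tokens >= packet_size:
            self.tokens -= packet_size   # sending a packet consumes tokens
            return True
        return False                     # not enough tokens yet: the packet must wait

bucket = TokenBucket(rate_per_tick=2, capacity=10)
for tick, size in enumerate([6, 6, 6, 2, 2, 6]):
    print(tick, size, bucket.try_send(size))   # early bursts pass, later big ones are throttled
    bucket.tick()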
Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories: open-loop (policies that prevent
congestion before it happens) and closed-loop (mechanisms that remove congestion after it has
occurred). The open-loop policies are:
1. Retransmission Policy :
It is the policy that governs the retransmission of packets. If the sender believes that a
sent packet is lost or corrupted, the packet needs to be retransmitted, and this
retransmission may increase the congestion in the network.
To prevent this, retransmission timers must be designed carefully so that they both avoid
aggravating congestion and keep efficiency high.
2. Window Policy :
The type of window at the sender’s side may also affect the congestion. Several
packets in the Go-back-n window are re-sent, although some packets may be received
successfully at the receiver side. This duplication may increase the congestion in the
network and make it worse.
Therefore, Selective repeat window should be adopted as it sends the specific packet
that may have been lost.
3. Discarding Policy :
A good discarding policy allows the routers to prevent congestion by discarding corrupted
or less sensitive packets while still maintaining the quality of the message.
In the case of audio transmission, for example, routers can discard the less sensitive packets
to prevent congestion and still maintain the quality of the audio.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load in the network, the acknowledgment
policy imposed by the receiver may also affect congestion. Several approaches can be
used to prevent congestion related to acknowledgments: the receiver can acknowledge
N packets at once rather than acknowledging every single packet, or send an
acknowledgment only when it has data to send or a timer expires.
5. Admission Policy :
In the admission policy, a mechanism is used to prevent congestion before it starts.
Switches along a flow should first check the resource requirements of a network flow
before forwarding it further. If there is a chance of congestion, or the network is already
congested, the router should refuse to establish the virtual circuit connection to prevent
further congestion.
All the above open-loop policies are adopted to prevent congestion before it happens in the network.
The following closed-loop techniques are used to remove congestion after it has occurred.
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream
node. This may cause the upstream node or nodes to become congested and, in turn, refuse data
from the nodes above them. Backpressure is a node-to-node congestion control technique that
propagates in the direction opposite to the data flow. It can be applied only to virtual circuit
networks, where each node knows its upstream node.
In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd
node may become congested because its output flow slows down. Similarly, the 1st node may
get congested and inform the source to slow down.
2. Choke Packet Technique :
In the choke packet technique, a congested router sends a special packet, called a choke packet,
directly back to the source to inform it of the congestion so that the source can reduce its sending
rate. Unlike backpressure, the intermediate nodes through which the choke packet travels are not
warned.
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the source.
The source guesses that there is congestion in the network. For example, when a sender sends
several packets and receives no acknowledgment for a while, it can assume that the network is
congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can explicitly send a signal to the
source or destination to inform it about the congestion. The difference between the choke packet
technique and explicit signaling is that in explicit signaling the signal is included in the packets
that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
● Forward Signaling : In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about the congestion, and the receiver in this case
adopts policies to prevent further congestion.
● Backward Signaling : In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and it needs to
slow down.
UNIT V
Application Layer: DNS, TELNET, E-MAIL, FTP, WWW, HTTP, SNMP, Bluetooth,
Firewalls.
1)DNS (Domain Name System)
Description: DNS is a hierarchical and decentralized naming system used to resolve
human-readable domain names (like www.example.com) into IP addresses that computers use to
identify each other on the network.
○ DNS stands for Domain Name System.
○ DNS is a directory service that provides a mapping between the name of a host on the
network and its numerical address.
○ DNS is required for the functioning of the internet.
○ Each node in the tree has a domain name, and a full domain name is a sequence of labels
separated by dots.
○ DNS is a service that translates the domain name into IP addresses. This allows the users
of networks to utilize user-friendly names when looking for other hosts instead of
remembering the IP addresses.
○ For example, suppose the FTP site at EduSoft had an IP address of 132.147.165.50; most
people would reach this site by specifying ftp.EduSoft.com. The domain name is therefore
easier to remember and more stable than the IP address.
DNS is a TCP/IP protocol used on different platforms. The domain name space is divided into
three different sections: generic domains, country domains, and inverse domain.
Advantages:
1. Simplifies User Access: Users can access websites using easy-to-remember domain
names instead of numeric IP addresses.
2. Decentralization: The hierarchical structure allows for distributed management and
redundancy.
3. Scalability: Can handle a vast number of domain names efficiently.
4. Flexibility: Supports various types of records (A, MX, CNAME, etc.) for different
purposes.
Disadvantages:
1. Security Vulnerabilities: Susceptible to attacks like DNS spoofing or cache poisoning.
2. Complexity: Managing DNS records can be complex, especially for large organizations.
3. Latency: DNS lookups can add latency to the initial connection time.
Applications:
● Translating domain names to IP addresses for web browsing, email, and other Internet
services.
● Load balancing by distributing traffic among multiple servers.
● Supporting CDN (Content Delivery Network) services by directing users to the nearest
server.
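From an application's point of view, name resolution is usually a single library call. The sketch below uses Python's standard resolver interface; www.example.com and port 80 are placeholders, the call needs a working DNS configuration, and the reverse lookup succeeds only if a PTR record exists.

import socket

# Forward lookup: map a host name to one or more IP addresses.
# getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                    proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])   # e.g. AF_INET followed by an IPv4 address

# Reverse (inverse-domain) lookup: map an IP address back to a host name.
print(socket.gethostbyaddr("8.8.8.8")[0])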
2)Telnet
Description: Telnet is a protocol that allows for remote access to another computer over a
network. It provides a command-line interface for communication with remote devices.
○ The main task of the internet is to provide services to users. For example, users want to
run different application programs at a remote site and transfer the results to the local site.
This could be done with dedicated client-server programs such as FTP and SMTP, but it is
not practical to create a specific client-server program for every possible demand.
○ The better solution is to provide a general client-server program that lets the user access
any application program on a remote computer, that is, a program that allows a user to
log on to a remote computer. The popular client-server program Telnet is used to meet such
demands. Telnet is an abbreviation for Terminal Network.
○ Telnet provides a connection to the remote computer in such a way that a local terminal
appears to be at the remote side.
There are two types of login:
○ Local Login
○ When a user logs into a local computer, then it is known as local login.
○ When the workstation runs a terminal emulator, the keystrokes entered by
the user are accepted by the terminal driver. The terminal driver then passes
these characters to the operating system, which in turn invokes the desired
application program.
○ However, the operating system assigns special meanings to special characters. For
example, in UNIX some character combinations have special meanings,
such as Ctrl+Z, which means suspend. Such situations do not
create any problem, as the terminal driver knows the meaning of such
characters, but they can cause problems in remote login.
○ Remote login
○ When the user wants to access an application program on a remote computer, then
the user must perform remote login.
How remote login occurs
At the local site
The user sends the keystrokes to the terminal driver, and the characters are then passed to the
TELNET client. The TELNET client transforms the characters into a universal character set
known as Network Virtual Terminal (NVT) characters and delivers them to the local TCP/IP stack.
At the remote site
The commands in NVT form are transmitted to the TCP/IP stack at the remote machine. There,
the characters are delivered to the operating system and then passed to the TELNET server. The
TELNET server transforms the characters into a form the remote computer can understand.
However, the characters cannot be passed directly to the operating system, because the remote
operating system is not designed to receive characters from a TELNET server. A piece of
software called a pseudo-terminal driver is therefore needed to accept the characters from the
TELNET server. The operating system then passes these characters to the appropriate application
program.
Advantages:
1. Simplicity: Easy to set up and use for basic remote management.
2. Flexibility: Can be used on various operating systems and network devices.
3. Low Overhead: Minimal bandwidth usage due to text-based communication.
Disadvantages:
1. Lack of Security: Transmits data, including passwords, in plain text, making it
vulnerable to interception.
2. Limited Features: Basic compared to more modern protocols like SSH.
3. Compatibility Issues: Not all modern devices and systems support Telnet due to its
security limitations.
Applications:
● Remote management of servers and network devices.
● Troubleshooting network services and connectivity issues.
● Legacy systems and devices that do not support more secure protocols.
3)FTP
○ FTP stands for File transfer protocol.
○ FTP is a standard internet protocol provided by TCP/IP used for transmitting the files
from one host to another.
○ It is mainly used for transferring the web page files from their creator to the computer
that acts as a server for other computers on the internet.
○ It is also used for downloading the files to computer from other servers.
Objectives of FTP
○ It provides the sharing of files.
○ It is used to encourage the use of remote computers.
○ It transfers the data more reliably and efficiently.
Why FTP?
Although transferring files from one system to another seems simple and straightforward, it can
sometimes cause problems. For example, two systems may have different file conventions, different
ways of representing text and data, or different directory structures. The FTP protocol overcomes
these problems by establishing two connections between the hosts: one connection is used for data
transfer, and the other is used as the control connection.
Mechanism of FTP
The above figure shows the basic model of the FTP. The FTP client has three components: the
user interface, control process, and data transfer process. The server has two components: the
server control process and the server data transfer process.
There are two types of connections in FTP:
○ Control Connection: The control connection uses very simple rules for communication.
Through control connection, we can transfer a line of command or line of response at a
time. The control connection is made between the control processes. The control
connection remains connected during the entire interactive FTP session.
○ Data Connection: The Data Connection uses very complex rules as data types may vary.
The data connection is made between data transfer processes. The data connection opens
when a command comes for transferring the files and closes when the file is transferred.
FTP Clients
○ FTP client is a program that implements a file transfer protocol which allows you to
transfer files between two hosts on the internet.
○ It allows a user to connect to a remote host and upload or download the files.
○ It has a set of commands that we can use to connect to a host, transfer the files between
you and your host and close the connection.
○ The FTP program is also available as a built-in component in a Web browser. This GUI
based FTP client makes the file transfer very easy and also does not require to remember
the FTP commands.
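As a hedged sketch of an FTP client session, the code below uses Python's standard ftplib; the host name, credentials, directory, and file name are placeholders. Commands travel over the control connection, and ftplib opens data connections behind the scenes for listings and file transfers.

from ftplib import FTP

ftp = FTP("ftp.example.com")           # placeholder host: opens the control connection
ftp.login("username", "password")      # ftp.login() with no arguments logs in anonymously

ftp.cwd("/pub")                        # commands and replies use the control connection
ftp.retrlines("LIST")                  # the directory listing arrives on a data connection

with open("readme.txt", "wb") as f:    # download a file over another data connection
    ftp.retrbinary("RETR readme.txt", f.write)

ftp.quit()                             # politely closes the control connection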
Advantages of FTP:
○ Speed: One of the biggest advantages of FTP is speed. FTP is one of the fastest ways
to transfer files from one computer to another.
○ Efficient: It is more efficient as we do not need to complete all the operations to get the
entire file.
○ Security: To access the FTP server, we need to login with the username and password.
Therefore, we can say that FTP is more secure.
○ Back & forth movement: FTP allows us to transfer the files back and forth. Suppose
you are a manager of the company, you send some information to all the employees, and
they all send information back on the same server.
Disadvantages of FTP:
○ The industry standard is that all FTP transmissions should be encrypted. However, not all
FTP providers are equal, and not all of them offer encryption, so we have to look for FTP
providers that do provide encryption.
○ FTP serves two operations, i.e., sending and receiving large files on a network. However,
many implementations limit the size of a single file that can be sent to 2 GB, and FTP does
not allow simultaneous transfers to multiple receivers.
○ Passwords and file contents are sent in clear text, which allows unwanted eavesdropping.
It is also quite possible for attackers to carry out a brute-force attack by trying to guess
the FTP password.
○ It is not compatible with every system.
4)SNMP
○ SNMP stands for Simple Network Management Protocol.
○ SNMP is a framework used for managing devices on the internet.
○ It provides a set of operations for monitoring and managing the internet.
SNMP Concept
SMI
The SMI (Structure of management information) is a component used in network management.
Its main function is to define the type of data that can be stored in an object and to show how to
encode the data for the transmission over a network.
MIB
○ The MIB (Management information base) is a second component for the network
management.
○ Each agent has its own MIB, which is a collection of all the objects that the manager can
manage. MIB is categorized into eight groups: system, interface, address translation, ip,
icmp, tcp, udp, and egp. These groups are under the mib object.
SNMP
SNMP defines five types of messages: GetRequest, GetNextRequest, SetRequest, GetResponse,
and Trap.
GetRequest: The GetRequest message is sent from a manager (client) to the agent (server) to
retrieve the value of a variable.
GetNextRequest: The GetNextRequest message is sent from the manager to the agent to retrieve
the value of a variable. This type of message is used to retrieve the values of the entries in a
table. If the manager does not know the indexes of the entries, it cannot retrieve the values
directly; in such situations, the GetNextRequest message is used to retrieve the value of the
object that follows the one named in the request.
GetResponse: The GetResponse message is sent from an agent to the manager in response to the
GetRequest and GetNextRequest message. This message contains the value of a variable
requested by the manager.
SetRequest: The SetRequest message is sent from a manager to the agent to set a value in a
variable.
Trap: The Trap message is sent from an agent to the manager to report an event. For example, if
the agent is rebooted, then it informs the manager as well as sends the time of rebooting.
5)HTTP
HTTP stands for HyperText Transfer Protocol.
○ It is a protocol used to access the data on the World Wide Web (www).
○ The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
○ This protocol is known as HyperText Transfer Protocol because its efficiency allows it to
be used in a hypertext environment where there are rapid jumps from one document to
another.
○ HTTP is similar to FTP, as it also transfers files from one host to another. However,
HTTP is simpler than FTP because it uses only one connection, i.e., no separate control
connection is needed to transfer the files.
○ HTTP is used to carry the data in the form of MIME-like format.
○ HTTP is similar to SMTP as the data is transferred between client and server. The HTTP
differs from the SMTP in the way the messages are sent from the client to the server and
from server to the client. SMTP messages are stored and forwarded while HTTP
messages are delivered immediately.
Features of HTTP:
○ Connectionless protocol: HTTP is a connectionless protocol. The HTTP client initiates a
request and waits for a response from the server. When the server receives the request, it
processes the request and sends back the response to the HTTP client, after which the
client closes the connection. The connection between client and server exists only for the
duration of the current request and response.
○ Media independent: HTTP is media independent: any type of data can be sent as long
as both the client and the server know how to handle the data content. Both the client and
the server are required to specify the content type in the MIME-type header.
○ Stateless: HTTP is a stateless protocol, as the client and server know each other only
during the current request. Because of this, neither the client nor the server retains
information between different requests for web pages.
HTTP Transactions
The above figure shows the HTTP transaction between client and server. The client initiates a
transaction by sending a request message to the server. The server replies to the request message
by sending a response message.
Messages
HTTP messages are of two types: request and response. Both the message types follow the same
message format.
Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
Response Message: The response message is sent by the server to the client that consists of a
status line, headers, and sometimes a body.
○ Method: The method is the protocol used to retrieve the document from a server. For
example, HTTP.
○ Host: The host is the computer where the information is stored, and the computer is
given an alias name. Web pages are mainly stored in the computers and the computers are
given an alias name that begins with the characters "www". This field is not mandatory.
○ Port: The URL can also contain the port number of the server, but it's an optional field. If
the port number is included, then it must come between the host and path and it should be
separated from the host by a colon.
○ Path: The path is the pathname of the file where the information is stored. The path itself
contains slashes that separate the directories from the subdirectories and files.
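To see the request and response messages described above on the wire, the hedged sketch below opens one TCP connection and sends a raw HTTP/1.1 GET by hand; the host, port 80, and path are example values, and a real client would normally use a library and HTTPS instead.

import socket

host, port, path = "www.example.com", 80, "/"   # example URL components

request = (f"GET {path} HTTP/1.1\r\n"           # request line: method, path, version
           f"Host: {host}\r\n"                   # Host header names the server
           "Connection: close\r\n"
           "\r\n")                               # blank line ends the header section

with socket.create_connection((host, port)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):              # read until the server closes
        response += chunk

status_line = response.split(b"\r\n", 1)[0]
print(status_line)                               # e.g. b'HTTP/1.1 200 OK'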
The World Wide Web (WWW), often called the Web, is a system of interconnected webpages
and information that you can access using the Internet. It was created to help people share and
find information easily, using links that connect different pages together. The Web allows us to
browse websites, watch videos, shop online, and connect with others around the world through
our computers and phones.
All public websites or web pages that people may access on their local computers and other
devices through the internet are collectively known as the World Wide Web or W3. Users can get
further information by navigating to links interconnecting these pages and documents. This data
may be presented in text, picture, audio, or video formats on the internet.
What is WWW?
WWW stands for World Wide Web and is commonly known as the Web. The WWW was started
by CERN in 1989. WWW is defined as the collection of different websites around the world,
containing different information shared via local servers(or computers).
Web pages are linked together using hyperlinks, which are HTML-formatted and also referred to
as hypertext; these are the fundamental units of the Web and are accessed through the Hypertext
Transfer Protocol (HTTP). Such digital connections, or links, allow users to easily access desired
information by connecting relevant pieces of information. The benefit of hypertext is that it allows
you to pick a word or phrase from the text and click through to other pages that have more
information about it.
History of the WWW
The Web is a project created by Tim Berners-Lee in 1989 so that researchers at CERN could work
together effectively. An organization named the World Wide Web Consortium (W3C) was later
founded for the further development of the web, and it is directed by Tim Berners-Lee, often called
the father of the web. CERN, where Berners-Lee worked, is a community of more than
1700 researchers from more than 100 countries. These researchers spend a little of their time at
CERN and the rest working at their colleges and national research facilities in their home
countries, so there was a need for solid communication so that they could exchange data.
System Architecture
From the user’s point of view, the web consists of a vast, worldwide connection of documents or
web pages. Each page may contain links to other pages anywhere in the world. The pages can be
retrieved and viewed by using browsers, of which Internet Explorer, Netscape Navigator, Google
Chrome, etc. are popular ones. The browser fetches the requested page, interprets the text and
formatting commands on it, and displays the page, properly formatted, on the screen.
The basic model of how the web works are shown in the figure below. Here the browser is
displaying a web page on the client machine. When the user clicks on a line of text that is linked
to a page on the abd.com server, the browser follows the hyperlink by sending a message to the
abd.com server asking it for the page.
Working of WWW
A Web browser is used to access web pages. Web browsers can be defined as programs which
display text, data, pictures, animation and video on the Internet. Hyperlinked resources on the
World Wide Web can be accessed using software interfaces provided by Web browsers. Initially,
Web browsers were used only for surfing the Web but now they have become more universal.
The diagram below indicates how the Web operates using the client-server architecture of the
internet. When a user requests a web page or other information, the web browser on the user's
system sends a request to the server; the web server then returns the requested content to the
web browser, and finally the requested information is used by the user who made the request.
Web browsers can be used for several tasks including conducting searches, mailing, transferring
files, and much more. Some of the commonly used browsers are Internet Explorer, Opera Mini,
and Google Chrome.
Features of WWW
● WWW is open source.
● It is a distributed system spread across various websites.
● It is a Hypertext Information System.
● It is Cross-Platform.
● Uses Web Browsers to provide a single interface for many services.
● Dynamic, Interactive and Evolving.
Components of the Web
There are 3 components of the web:
● Uniform Resource Locator (URL): The URL serves as the addressing system for
resources on the web.
● Hyper Text Transfer Protocol (HTTP): HTTP specifies communication of browser
and server.
● Hyper Text Markup Language (HTML): HTML defines the structure, organisation
and content of a web page.
Email protocols
Email protocols are a collection of protocols that are used to send and receive emails properly.
The email protocols provide the ability for the client to transmit the mail to or from the intended
mail server. Email protocols are a set of commands for sharing mails between two computers.
Email protocols establish communication between the sender and receiver for the transmission of
email. Email forwarding includes components like two computers sending and receiving emails
and the mail server. There are three basic types of email protocols.
Types of Email Protocols:
Three basic types of email protocols involved for sending and receiving mails are:
● SMTP
● POP3
● IMAP
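To illustrate the sending side (SMTP), here is a hedged sketch using Python's standard smtplib and email modules; the server name, port, addresses, and credentials are placeholders, and most real servers require TLS and authentication as shown.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"          # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message via SMTP"
msg.set_content("Hello, sent with the email-protocols example.")

# Connect to the mail server's submission port, upgrade to TLS, authenticate, send.
with smtplib.SMTP("smtp.example.com", 587) as server:    # placeholder server
    server.starttls()
    server.login("alice@example.com", "app-password")    # placeholder credentials
    server.send_message(msg)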
Bluetooth is used for short-range wireless voice and data communication. It is a Wireless
Personal Area Network (WPAN) technology used for data communication over small
distances. The technology was invented by Ericsson in 1994. It operates within the
unlicensed industrial, scientific, and medical (ISM) band from 2.4 GHz to 2.485 GHz.
Bluetooth has a range of up to 10 meters. Depending upon the version, it provides data rates of
up to 1 Mbps or 3 Mbps. The spreading technique it uses is FHSS (Frequency-Hopping Spread
Spectrum). A Bluetooth network is called a piconet, and a group of interconnected piconets is
called a scatternet.
What is Bluetooth?
Bluetooth is a wireless technology that lets devices like phones, tablets, and headphones connect
to each other and share information without needing cables. Bluetooth simply follows the
principle of transmitting and receiving data using radio waves. It can be paired with another
device that also has Bluetooth, but the two must be within the communication range to
connect. When two devices start to share data, they form a network called a piconet, which can
accommodate up to eight active devices.
Key Features of Bluetooth
● The transmission capacity of Bluetooth is 720 kbps.
● Bluetooth is a wireless device.
● Bluetooth is a Low-cost and short-distance radio communications standard.
● Bluetooth is robust and flexible.
● The basic architecture unit of Bluetooth is a piconet.
Architecture of Bluetooth
The architecture of Bluetooth defines two types of networks:
Piconet
Piconet is a type of Bluetooth network that contains one primary node called the master node and
seven active secondary nodes called slave nodes. Thus, we can say that there is a total of 8 active
nodes which are present at a distance of 10 meters. The communication between the primary and
secondary nodes can be one-to-one or one-to-many. Possible communication is only between the
master and slave; slave-to-slave communication is not possible. A piconet can also have up to 255
parked nodes; these are secondary nodes that cannot take part in communication unless they are
moved to the active state.
Scatternet
It is formed by using various piconets. A slave that is present in one piconet can act as master or
we can say primary in another piconet. This kind of node can receive a message from a master in
one piconet and deliver the message to its slave in the other piconet where it is acting as a master.
This type of node is referred to as a bridge node. A station cannot be a master in two piconets.
Bluetooth Architecture
Bluetooth Protocol Stack
● Radio (RF) Layer: It specifies the details of the air interface, including frequency,
the use of frequency hopping and transmit power. It performs
modulation/demodulation of the data into RF signals. It defines the physical
characteristics of Bluetooth transceivers. It defines two types of physical links:
connection-less and connection-oriented.
● Baseband Link Layer: The baseband is the digital engine of a Bluetooth system and
is equivalent to the MAC sublayer in LANs. It performs the connection
establishment within a piconet, addressing, packet format, timing and power control.
● Link Manager Protocol Layer: It performs the management of the already
established links which includes authentication and encryption processes. It is
responsible for creating the links, monitoring their health, and terminating them
gracefully upon command or failure.
● Logical Link Control and Adaptation Protocol (L2CAP) Layer: It is also known as
the heart of the Bluetooth protocol stack. It allows the communication between upper
and lower layers of the Bluetooth protocol stack. It packages the data packets
received from upper layers into the form expected by lower layers. It also performs
segmentation and multiplexing.
● Service Discovery Protocol (SDP) Layer: It allows a device to discover the services
available on another Bluetooth-enabled device.
● RFCOMM Layer: It is a cable replacement protocol, short for Radio Frequency
Communication. It provides a serial interface to WAP and OBEX by providing
emulation of serial ports over the Logical Link Control and Adaptation Protocol (L2CAP).
The protocol is based on the ETSI standard TS 07.10.
● OBEX: It is short for Object Exchange. It is a communication protocol to exchange
objects between 2 devices.
● WAP: It is short for Wireless Application Protocol. It is used for internet access.
● TCS: It is short for Telephony Control Protocol. It provides telephony service. The
basic function of this layer is call control (setup & release) and group management for
the gateway serving multiple devices.
● Application Layer: It enables the user to interact with the application.