CN Complete Notes
Computer Networks UNIT-1 NOTES
UNIT-I
Contents
Introduction:
Physical Layer:
• A computer network is a group of computers connected to each other through wires, optical fibres, or wireless links so that the connected devices can interact with one another.
• The aim of a computer network is the sharing of resources among various devices.
• Computer networks vary from simple to complex, depending on the technology used.
There are two types of NIC (Network Interface Card): wireless NIC and wired NIC.
• Wireless NIC: All modern laptops use a wireless NIC. In a wireless NIC, the connection is made using an antenna that employs radio-wave technology.
• Wired NIC: A wired NIC uses cables to transfer data over the medium.
Hub: A hub is a central device that splits a network connection into multiple devices. When a computer requests information from another computer, it sends the request to the hub, and the hub distributes this request to all the interconnected computers.
Switches: A switch is a networking device that connects all the devices on a network so that data can be transferred between them. A switch is better than a hub because it does not broadcast the message over the network; it sends the message only to the device for which it is intended. Therefore, we can say that a switch sends the message directly from source to destination.
Cables and connectors: A cable is a transmission medium that carries the communication signals.
• Twisted pair cable: It is a high-speed cable that can transmit data at 1 Gbps or more.
• Coaxial cable: A coaxial cable resembles a TV installation cable. It is more expensive than twisted pair cable, but it provides a higher data transmission speed.
• Fibre optic cable: A fibre optic cable is a high-speed cable that transmits data using light beams. It provides a higher data transmission speed than other cables, but it is also more expensive, so it is mainly used for backbone and long-distance installations.
Router: A router is a device that connects a LAN to the internet. A router is mainly used to connect distinct networks or to connect the internet to multiple computers.
Modem: A modem connects a computer to the internet over an existing telephone line. A modem is not integrated into the computer's motherboard; it is a separate device that plugs into a slot on the motherboard or connects externally.
Uses of Computer Networks
1. Business Applications
• Doing business electronically, especially with customers and suppliers, has become a common model. This new model is called e-commerce (electronic commerce), and it has grown rapidly in recent years.
2. Home Applications
• Peer-to-peer communication
• Person-to-person communication
• Electronic commerce
• Entertainment (game playing)
3. Mobile Users
• Text messaging or texting
• Smartphones
• GPS (Global Positioning System)
• m-commerce
• NFC (Near Field Communication)
4. Social Issues: With the good comes the bad, as this new-found freedom brings with it many unsolved
social, political, and ethical issues.
Social networks, message boards, content sharing sites, and a host of other applications allow people
to share their views with like-minded individuals. As long as the subjects are restricted to technical topics
or hobbies like gardening, not too many problems will arise.
The trouble comes with topics that people actually care about, like politics, religion, or sex. Views
that are publicly posted may be deeply offensive to some people. Worse yet, they may not be politically
correct. Furthermore, opinions need not be limited to text; high-resolution color photographs and video
clips are easily shared over computer networks. Some people take a live-and-let-live view, but others feel
that posting certain material (e.g., verbal attacks on particular countries or religions, pornography, etc.)
is simply unacceptable and that such content must be censored. Different countries have different and
conflicting laws in this area. Thus, the debate rages.
Computer networks make it very easy to communicate. They also make it easy for the people who
run the network to snoop on the traffic. This sets up conflicts over issues such as employee rights versus
employer rights. Many people read and write email at work. Many employers have claimed the right to
read and possibly censor employee messages, including messages sent from a home computer outside
working hours. Not all employees agree with this, especially the latter part.
A new twist with mobile devices is location privacy. As part of the process of providing service to your
mobile device the network operators learn where you are at different times of day. This allows them to
track your movements. They may know which nightclub you frequent and which medical center you
visit.
Phishing attack: Phishing is a type of social engineering attack often used to steal user data,
including login credentials and credit card numbers. It occurs when an attacker, masquerading as a trusted
entity, dupes a victim into opening an email, instant message, or text message.
Advantages of computer networks:
File sharing: File sharing is one of the major advantages of a computer network. A computer network allows us to share files with each other.
Backup and rollback are easy: Since files are stored on a centrally located main server, it is easy to take backups from the main server.
Software and hardware sharing: We can install applications on the main server, so users can access them centrally and we do not need to install the software on every machine. Similarly, hardware can also be shared.
Security: A network provides security by ensuring that each user has the right to access only certain files and applications.
Scalability: Scalability means that we can add new components to the network, i.e., the network can be extended by adding new devices. However, adding devices can decrease the connection speed and the data transmission rate, which increases the chance of errors occurring. This problem can be overcome by using routing or switching devices.
Reliability: A computer network can use an alternative source for data communication in case of any hardware failure.
Data communication: Data communication is the process of exchanging data between two devices via some form of transmission medium such as a wire cable.
There are five major components of data communication: the message (the data to be communicated), the sender, the receiver, the transmission medium, and the protocol (the set of rules governing the communication).
Transmission modes
• The way in which data is transmitted from one device to another device is known as transmission
mode.
• The transmission mode is also known as the communication mode.
• Each communication channel has a direction associated with it, and transmission media provide
the direction. Therefore, the transmission mode is also known as a directional mode.
• The transmission mode is defined in the physical layer.
Comparison of transmission modes (simplex vs. half-duplex vs. full-duplex):
• Send/Receive: In simplex mode, a device can only send data or only receive data, not both. In half-duplex mode, both devices can send and receive data, but only one at a time. In full-duplex mode, both devices can send and receive data simultaneously.
• Performance: Half-duplex performs better than simplex. Full-duplex has the best performance of the three, as it doubles the utilization of the capacity of the communication channel.
• Examples: Simplex - radio, keyboard, monitor; half-duplex - walkie-talkie; full-duplex - telephone network.
Network Topology: Topology defines the structure of the network of how all the components are
interconnected to each other. There are two types of topology: physical and logical topology.
1. Bus Topology:
• The bus topology is designed in such a way that all the stations are connected through a single cable
known as a backbone cable.
• Each node is either connected to the backbone cable by drop cable or directly connected to the
backbone cable.
• When a node wants to send a message over the network, it puts the message on the cable. All the stations on the network will receive the message, whether or not it is addressed to them.
• The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard networks.
• The configuration of a bus topology is quite simpler as compared to other topologies.
Advantages of Bus topology:
• Low-cost cable: In bus topology, nodes are directly connected to the cable without passing through a hub. Therefore, the initial cost of installation is low.
• Moderate data speeds: Coaxial or twisted pair cables are mainly used in bus-based networks that
support upto 10 Mbps.
• Familiar technology: Bus topology is a familiar technology as the installation and troubleshooting
techniques are well known, and hardware components are easily available.
• Limited failure: A failure in one node will not have any effect on other nodes.
Disadvantages of Bus topology:
• Extensive cabling: A bus topology is quite simple, but it still requires a lot of cabling.
• Difficult troubleshooting: It requires specialized test equipment to determine cable faults. If any fault occurs in the cable, it disrupts communication for all the nodes.
• Signal interference: If two nodes send the messages simultaneously, then the signals of both the
nodes collide with each other.
• Reconfiguration difficult: Adding new devices to the network would slow down the network.
• Attenuation: Attenuation is a loss of signal strength that leads to communication issues. Repeaters are used to regenerate the signal.
2. Ring Topology:
Advantages of Ring topology:
• Network management: Faulty devices can be removed from the network without bringing the network down.
• Product availability: Many hardware and software tools for network operation and monitoring are
available.
• Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the installation cost is
very low.
• Reliable: It is a more reliable network because the communication system is not dependent on the
single host computer.
Disadvantages of Ring topology:
• Difficult troubleshooting: It requires specialized test equipment to determine cable faults. If any fault occurs in the cable, it disrupts communication for all the nodes.
• Failure: The breakdown in one station leads to the failure of the overall network.
• Reconfiguration difficult: Adding new devices to the network would slow down the network.
• Delay: Communication delay is directly proportional to the number of nodes. Adding new devices
increases the communication delay.
3. Star Topology:
• Star topology is an arrangement of the network in which every node is connected to a central hub, switch, or central computer.
• The central computer is known as the server, and the peripheral devices attached to the server are known as clients.
• Coaxial cable or twisted-pair cable with RJ-45 connectors is used to connect the computers.
• Hubs or switches are mainly used as connection devices in a physical star topology.
• Star topology is the most popular topology in network implementations.
Disadvantages of Star topology:
• A central point of failure: If the central hub or switch goes down, all the connected nodes will be unable to communicate with each other.
• Cable: Cable routing can become difficult when a significant amount of routing is required.
4. Tree topology:
• Tree topology combines the characteristics of bus topology and star topology.
• A tree topology is a type of structure in which all the computers are connected with each other in
hierarchical fashion.
• The top-most node in tree topology is known as a root node, and all other nodes are the descendants
of the root node.
• There is only one path between any two nodes for data transmission. Thus, it forms a parent-child hierarchy.
Advantages of Tree topology:
• Support for broadband transmission: Tree topology is mainly used to provide broadband transmission, i.e., signals are sent over long distances without being attenuated.
• Easily expandable: We can add the new device to the existing network. Therefore, we can say that
tree topology is easily expandable.
• Easily manageable: In tree topology, the whole network is divided into segments known as star
networks which can be easily managed and maintained.
• Error detection: Error detection and error correction are very easy in a tree topology.
• Limited failure: The breakdown in one station does not affect the entire network.
• Point-to-point wiring: It has point-to-point wiring for individual segments.
Disadvantages of Tree topology:
• Difficult troubleshooting: If any fault occurs in a node, it becomes difficult to troubleshoot the problem.
• High cost: Devices required for broadband transmission are very costly.
• Failure: A tree topology mainly relies on main bus cable and failure in main bus cable will damage
the overall network.
• Reconfiguration difficult: If new devices are added, then it becomes difficult to reconfigure.
5. Mesh topology:
• Mesh topology is an arrangement of the network in which computers are interconnected with each other through various redundant connections.
• There are multiple paths from one computer to another computer.
• It does not contain the switch, hub or any central computer which acts as a central point of
communication.
Advantages of Mesh topology:
• Reliable: Mesh topology networks are very reliable because the breakdown of any one link does not affect communication between the connected computers.
• Fast Communication: Communication is very fast between the nodes.
• Easier Reconfiguration: Adding new devices would not disrupt the communication between other
devices.
6. Hybrid Topology:
• A hybrid topology is a combination of two or more different topologies. For example, if a ring topology exists in one branch of ICICI Bank and a bus topology in another branch, connecting these two topologies results in a hybrid topology.
• Costly infrastructure: The infrastructure cost is very high as a hybrid network requires a lot of
cabling, network devices, etc.
Types of Networks: A computer network is a group of computers linked to each other that enables a computer to communicate with other computers and share resources, data, and applications.
Computer networks can be categorized by their size. A computer network is mainly of three types:
• LAN(Local Area Network)
• MAN(Metropolitan Area Network)
• WAN(Wide Area Network)
Advantages of LAN:
Disadvantage of LAN:
• Installation and reconfiguration always require technical and skilled manpower.
• Due to resource sharing, the operating speed may sometimes slow down.
Advantages of MAN:
Disadvantages of MAN:
WAN (Wide Area Network):
• A Wide Area Network is a network that extends over a large geographical area such as states or countries.
• A Wide Area Network is a much bigger network than a LAN.
• A Wide Area Network is not limited to a single location, but it spans over a large geographical area
through a telephone line, fibre optic cable or satellite links.
• The internet is one of the biggest WANs in the world.
• A Wide Area Network is widely used in the field of Business, government, and education.
Advantages of WAN:
• Get updated files: Software companies work on live servers, so programmers get the updated files within seconds.
• Exchange messages: In a WAN, messages are transmitted fast. Web applications like Facebook, WhatsApp, and Skype allow you to communicate with friends.
• Sharing of software and resources: In a WAN, we can share software and other resources such as hard drives and RAM.
• Global business: We can do the business over the internet globally.
• High bandwidth: If we use the leased lines for our company then this gives the high bandwidth.
The high bandwidth increases the data transfer rate which in turn increases the productivity of our
company.
Reference Models
A communication subsystem is a complex piece of hardware and software. Early attempts at implementing the software for such subsystems were based on a single, complex, unstructured program with many interacting components. The resulting software was very difficult to test and modify. To overcome such problems, the ISO developed a layered approach. In a layered approach, the networking concept is divided into several layers, and each layer is assigned a particular task. Therefore, we can say that networking tasks depend upon the layers.
Layered Architecture:
• The main aim of the layered architecture is to divide the design into small pieces.
• Each lower layer adds its services to the higher layer to provide a full set of services to manage
communications and run the applications.
• It provides modularity and clear interfaces, i.e., provides interaction between subsystems.
• It ensures the independence between layers by providing the services from lower to higher layer without
defining how the services are implemented. Therefore, any modification in a layer will not affect the other
layers.
• The number of layers, their functions, and the contents of each layer vary from network to network. However, the purpose of each layer is to provide a service to the layer above it while hiding the details of how the service is implemented.
• The basic elements of layered architecture are services, protocols, and interfaces.
• Service: It is a set of actions that a layer provides to the higher layer.
• Protocol: It defines a set of rules that a layer uses to exchange information with its peer entity. These rules mainly concern the contents and the order of the messages used.
• Interface: It is a way through which the message is transferred from one layer to another layer.
1. OSI Model
• OSI stands for Open Systems Interconnection. It is a reference model that describes how information from a software application in one computer moves through a physical medium to a software application in another computer.
• OSI consists of seven layers, and each layer performs a particular network function.
• OSI model was developed by the International Organization for Standardization (ISO) in 1984,
and it is now considered as an architectural model for the inter-computer communications.
• OSI model divides the whole task into seven smaller and manageable tasks. Each layer is
assigned a particular task.
• Each layer is self-contained, so that task assigned to each layer can be performed independently.
• The OSI model is divided into two groups of layers: upper layers and lower layers.
• The upper layer of the OSI model mainly deals with the application related issues, and they are
implemented only in the software. The application layer is closest to the end user. Both the end user
and the application layer interact with the software applications. An upper layer refers to the layer
just above another layer.
• The lower layer of the OSI model deals with the data transport issues. The data link layer and the
physical layer are implemented in hardware and software. The physical layer is the lowest layer of
the OSI model and is closest to the physical medium. The physical layer is mainly responsible for
placing the information on the physical medium.
The interaction between layers in the OSI model
• Physical Layer
• Data-Link Layer
• Network Layer
• Transport Layer
• Session Layer
• Presentation Layer
• Application Layer
1. Physical layer:
• The main functionality of the physical layer is to transmit the individual bits from one node
to another node.
• It is the lowest layer of the OSI model.
• It establishes, maintains and deactivates the physical connection.
• It specifies the mechanical, electrical and procedural network interface specifications.
• Line configuration: It defines how two or more devices can be connected physically.
• Data transmission: It defines the transmission mode, whether it is simplex, half-duplex, or full-duplex, between the two devices on the network.
• Topology: It defines how the network devices are arranged.
• Signals: It determines the type of signal used for transmitting the information.
2. Data-Link Layer:
• It is responsible for delivering packets without error to the network layer of the receiving machine.
• It identifies the network layer protocol address from the header.
• It also provides flow control.
• A Media access control layer is a link between the Logical Link Control layer and the
network's physical layer.
• It is used for transferring the packets over the network.
• Framing: The data link layer translates the physical layer's raw bit stream into units known as frames. The data link layer adds a header and a trailer to each frame. The header contains the hardware destination and source addresses.
• Physical addressing: The data link layer adds a header to the frame that contains the destination address. The frame is transmitted to the destination address mentioned in the header.
• Flow control: Flow control is a main function of the data link layer. It is the technique through which a constant data rate is maintained on both sides so that no data gets lost. It ensures that a transmitting station with a higher processing speed (such as a server) does not overwhelm a receiving station with a lower processing speed.
• Error control: Error control is achieved by adding a calculated value, the CRC (Cyclic Redundancy Check), to the data link layer's trailer, which is added to the message frame before it is sent to the physical layer. If an error is detected, the receiver requests retransmission of the corrupted frames.
• Access Control: When two or more devices are connected to the same communication channel,
then the data link layer protocols are used to determine which device has control over the link at a
given time.
3. Network Layer:
• It is layer 3; it manages device addressing and tracks the location of devices on the network.
• It determines the best path to move data from source to destination based on the network conditions, the priority of service, and other factors.
• The network layer is responsible for routing and forwarding the packets.
• Routers are layer 3 devices; they operate at this layer and provide routing services within an internetwork.
• The protocols used to route network traffic are known as network layer protocols. Examples are IPv4 and IPv6.
4. Transport Layer:
• The transport layer (layer 4) ensures that messages are delivered in the order in which they are sent and that there is no duplication of data.
• The main responsibility of the transport layer is to transfer the data completely.
• It receives the data from the upper layer and converts them into smaller units known as segments.
• This layer can be termed as an end-to-end layer as it provides a point-to-point connection between
source and destination to deliver the data reliably.
5. Session Layer:
• Dialog control: The session layer acts as a dialog controller that creates a dialog between two processes, i.e., it allows communication between two processes, which can be either half-duplex or full-duplex.
• Synchronization: Session layer adds some checkpoints when transmitting the data in a sequence.
If some error occurs in the middle of the transmission of data, then the transmission will take
place again from the checkpoint. This process is known as Synchronization and recovery.
6. Presentation Layer:
• A Presentation layer is mainly concerned with the syntax and semantics of the information exchanged
between the two systems.
• It acts as a data translator for a network.
• This layer is a part of the operating system that converts the data from one presentation format to
another format.
• The Presentation layer is also known as the syntax layer.
• Translation: The processes in two systems exchange information in the form of character strings, numbers, and so on. Different computers use different encoding methods; the presentation layer handles interoperability between these different encoding methods. It converts the data from a sender-dependent format into a common format, and changes the common format into a receiver-dependent format at the receiving end.
• Encryption: Encryption is needed to maintain privacy. Encryption is the process of converting the sender's information into another form and sending the resulting message over the network.
• Compression: Data compression is a process of compressing the data, i.e., it reduces the number of
bits to be transmitted. Data compression is very important in multimedia such as text, audio, video.
7. Application Layer:
• The application layer serves as a window for users and application processes to access network services.
• It handles issues such as network transparency, resource allocation, etc.
• An application layer is not an application, but it performs the application layer functions.
• This layer provides the network services to the end-users.
• File transfer, access, and management (FTAM): An application layer allows a user to access the
files in a remote computer, to retrieve the files from a computer and to manage the files in a remote
computer.
• Mail services: An application layer provides the facility for email forwarding and storage.
• Directory services: The application layer provides distributed database sources and access to global information about various objects.
SUMMARY:
TCP/IP model:
• The TCP/IP model was developed prior to the OSI model.
• The TCP/IP model is not exactly similar to the OSI model.
• The TCP/IP model consists of five layers: the application layer, transport layer, network layer, data link layer, and physical layer.
• The first four layers provide physical standards, network interface, internetworking, and transport functions that correspond to the first four layers of the OSI model, while the top three layers of the OSI model (session, presentation, and application) are represented in the TCP/IP model by a single layer called the application layer.
• TCP/IP is a hierarchical protocol made up of interactive modules, and each of them provides specific
functionality.
• Here, hierarchical means that each upper-layer protocol is supported by two or more lower-level
protocols.
1. Network Access (Link) Layer:
• The network access layer is the lowest layer of the TCP/IP model, combining the functions of the OSI physical and data link layers.
• The functions carried out by this layer are encapsulating IP datagrams into frames transmitted by the network and mapping IP addresses to physical addresses.
• The protocols used by this layer are Ethernet, Token Ring, FDDI, X.25, and Frame Relay.
2. Internet Layer
• An internet layer is the second layer of the TCP/IP model.
• An internet layer is also known as the network layer.
• The main responsibility of the internet layer is to send the packets from any network, and they arrive
at the destination irrespective of the route they take.
1. IP Protocol: IP protocol is used in this layer, and it is the most significant part of the entire TCP/IP
suite.
• IP Addressing: This protocol implements logical host addresses known as IP addresses. The IP addresses are used by the internet and higher layers to identify devices and to provide internetwork routing (see the sketch after this list).
• Host-to-host communication: It determines the path through which the data is to be transmitted.
2. ARP Protocol: ARP (Address Resolution Protocol) maps a known IP address to the corresponding physical (MAC) address on the local network.
3. ICMP Protocol: ICMP (Internet Control Message Protocol) is used by hosts and routers to report errors and send control messages, e.g., when a datagram cannot be delivered.
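As a small illustration of how a logical IP address and its network prefix are used for routing decisions, here is a minimal sketch using Python's standard ipaddress module (the addresses shown are only illustrative assumptions):

```python
import ipaddress

# A logical host address (illustrative value).
host = ipaddress.ip_address("192.168.10.25")

# The network the host belongs to, written in CIDR prefix notation.
net = ipaddress.ip_network("192.168.10.0/24")

# Routing decisions are based on the network part of the address.
print(host in net)            # True -> deliver locally, no router needed
print(net.network_address)    # 192.168.10.0
print(net.broadcast_address)  # 192.168.10.255
```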
3.Transport Layer
The transport layer is responsible for the reliability, flow control, and correction of data which is being
sent over the network.
The two protocols used in the transport layer are the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP).
4.Application Layer
• HTTP: HTTP stands for Hypertext Transfer Protocol. This protocol allows us to access data over the World Wide Web. It transfers data in the form of plain text, audio, and video. It is called a hypertext transfer protocol because it is efficient in a hypertext environment where there are rapid jumps from one document to another.
• SNMP: SNMP stands for Simple Network Management Protocol. It is a framework used for
managing the devices on the internet by using the TCP/IP protocol suite.
• SMTP: SMTP stands for Simple mail transfer protocol. The TCP/IP protocol that supports the e-
mail is known as a Simple mail transfer protocol. This protocol is used to send the data to another
e-mail address.
• DNS: DNS stands for Domain Name System. An IP address is used to uniquely identify the connection of a host to the internet, but people prefer to use names instead of addresses. The system that maps a name to an address is known as the Domain Name System (see the sketch after this list).
• TELNET: It is an abbreviation for Terminal Network. It establishes the connection between the
local computer and remote computer in such a way that the local terminal appears to be a terminal
at the remote system.
• FTP: FTP stands for File Transfer Protocol. FTP is a standard internet protocol used for
transmitting the files from one computer to another computer.
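Referring back to DNS above, here is a minimal sketch of name-to-address resolution using Python's standard socket module; the hostname is only an illustrative assumption:

```python
import socket

# Ask the resolver (ultimately DNS) to map a name to an IPv4 address.
hostname = "example.com"            # illustrative hostname
ip = socket.gethostbyname(hostname)
print(hostname, "->", ip)           # prints the resolved IPv4 address
```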
OSI vs. TCP/IP:
• OSI stands for Open Systems Interconnection; TCP/IP stands for Transmission Control Protocol / Internet Protocol.
• The OSI model was developed by ISO (the International Organization for Standardization); the TCP/IP model was developed by ARPANET (Advanced Research Projects Agency Network).
• In the OSI model, the network layer provides both connection-oriented and connectionless service; in TCP/IP, the network layer provides only connectionless service.
• In the OSI model, the transport layer guarantees the delivery of packets; in TCP/IP, the transport layer does not guarantee delivery of packets, but the model is still considered reliable.
• OSI is a generic, protocol-independent standard, acting as an interaction gateway between the network and the end user; TCP/IP depends on the standard protocols around which the computer network was created, and it is the connection protocol that links hosts over the internet.
• In OSI, the model was developed first and then protocols were created to fit the architecture's needs; in TCP/IP, the protocols were created first and then the model was built around them.
• The OSI model clearly defines services, interfaces, and protocols, and clearly describes which layer provides which service; the TCP/IP model does not clearly distinguish between services, interfaces, and protocols.
• In OSI, the protocols are well hidden and can be replaced with another suitable protocol easily; in TCP/IP, the protocols are not hidden, and a new protocol stack cannot easily be fitted into the model.
Internet:
Internet is called the network of networks. It is a global communication system that links together
thousands of individual networks. In other words, internet is a collection of interlinked computer
networks, connected by copper wires, fiber-optic cables, wireless connections, etc. As a result, a
computer can virtually connect to other computers in any network. These connections allow users to
interchange messages, to communicate in real time (getting instant messages and responses), to share
data and programs and to access limitless information.
The internet allows the exchange of information between two or more computers on a network. Thus, the internet helps in the transfer of messages through mail, chat, video and audio conferencing, etc. It has become essential for day-to-day activities: bill payment, online shopping and surfing, tutoring, working, communicating with peers, etc.
Process:
TCP/IP provides end to end transmission, i.e., each and every node on one network has the ability to
communicate with any other node on the network.
IP:
In order to communicate, we need our data to be encapsulated as Internet Protocol (IP) packets. These
IP packets travel across a number of hosts in a network through routing to reach the destination. However, IP does not support error detection and error recovery, and it is incapable of detecting loss of packets.
TCP:
TCP stands for "Transmission Control Protocol". It provides end to end transmission of data, i.e., from
source to destination. It is a very complex protocol as it supports recovery of lost packets.
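As a minimal sketch of using TCP from an application, here is a Python example that opens an end-to-end TCP connection with the standard socket module and sends a simple HTTP request over it; the host and port are illustrative assumptions:

```python
import socket

HOST, PORT = "example.com", 80   # illustrative endpoint

# TCP gives a reliable, ordered byte stream between the two endpoints;
# lost segments are retransmitted by the protocol, not by our code.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(4096)
    print(reply.decode(errors="replace"))
```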
Application Protocol:
Third layer in internet architecture is the application layer which has different protocols on which the
internet services are built. Some of the examples of internet services include email (SMTP facilitates
email feature), file transfer (FTP facilitates file transfer feature), etc.
The Internet has come a long way since the 1960s. The Internet today is not a simple hierarchical structure. It is made up of many wide- and local-area networks joined by connecting devices and switching stations. It is difficult to give an accurate representation of the Internet because it is continually changing: new networks are being added, existing networks are adding addresses, and networks of defunct companies are being removed. Today most end users who want an Internet connection use the services of Internet service providers (ISPs). There are international service providers, national service providers, regional service providers, and local service providers. The Internet today is run by private companies, not the government. Figure 1.13 shows a conceptual (not geographic) view of the Internet.
International Internet Service Providers: At the top of the hierarchy are the international service
providers that connect nations together.
National Internet Service Providers: The national Internet service providers are backbone networks created and maintained by specialized companies. There are many national ISPs operating in North America; some of the most well-known are SprintLink, PSINet, UUNet Technology, AGIS, and internetMCI. To provide connectivity between end users, these backbone networks are connected by complex switching stations (normally run by a third party) called network access points (NAPs). Some national ISP networks are also connected to one another by private switching stations called peering points. These normally operate at a high data rate (up to 600 Mbps).
Regional Internet Service Providers: Regional Internet service providers or regional ISPs are smaller ISPs that are connected to one or more national ISPs. They are at the third level of the hierarchy with a smaller data rate.
Local Internet Service Providers: Local Internet service providers provide direct service to the end users. The local ISPs can be connected to regional ISPs or directly to national ISPs. Most end users are connected to the local ISPs. Note that in this sense, a local ISP can be a company that just provides Internet services, a corporation with a network that supplies services to its own employees, or a nonprofit organization, such as a college or a university, that runs its own network. Each of these local ISPs can be connected to a regional or national service provider.
Physical layer:
Transmission media
• Transmission media is a communication channel that carries the information from the sender to
the receiver. Data is transmitted through the electromagnetic signals.
• The main functionality of the transmission media is to carry the information in the form of bits
through LAN(Local Area Network).
Bandwidth: All other factors remaining constant, the greater the bandwidth of a medium, the higher the data transmission rate of a signal.
Transmission impairment: Transmission impairment occurs when the received signal is not identical to the transmitted one. The quality of the signal is degraded by transmission impairment.
Interference: An interference is defined as the process of disrupting a signal when it travels over a
communication medium on the addition of some unwanted signal.
Attenuation: Attenuation means the loss of energy, i.e., the strength of the signal decreases with
increasing the distance which causes the loss of energy.
Distortion: Distortion occurs when the shape of the signal changes. This type of distortion arises in composite signals made up of different frequencies: each frequency component has its own propagation speed, so the components arrive at different times, which leads to delay distortion.
Noise: When data travels over a transmission medium, some unwanted signal is added to it, which creates noise.
Guided Media
It is defined as the physical medium through which the signals are transmitted. It is also known as
Bounded media.
• Twisted pair
• Coaxial Cable
• Fiber Optic Cable
Twisted pair:
Twisted pair is a physical media made up of a pair of cables twisted with each other. A twisted pair cable
is cheap as compared to other transmission media. Installation of the twisted pair cable is easy, and it is
a lightweight cable. The frequency range for twisted pair cable is from 0 to 3.5 KHz.
A twisted pair consists of two insulated copper wires arranged in a regular spiral pattern.
The degree of reduction in noise interference is determined by the number of turns per foot. Increasing
the number of turns per foot decreases noise interference.
An unshielded twisted pair is widely used in telecommunication. Following are the categories of the
unshielded twisted pair cable:
• Category 1: Category 1 is used for telephone lines that have low-speed data.
• Category 2: It can support up to 4Mbps.
• Category 3: It can support up to 16Mbps.
• Category 4: It can support up to 20Mbps. Therefore, it can be used for long-distance
communication.
• Category 5: It can support up to 200Mbps.
Disadvantage:
• This cable can only be used for shorter distances because of attenuation.
Jacket: The protective coating, consisting of plastic, is known as the jacket. The main purpose of the jacket is to preserve the fibre strength, absorb shock, and provide extra fibre protection.
Comparison among Twisted Pair Cables, Co-axial Cables, and Fiber Optic Cables
Unguided Media
An unguided transmission transmits the electromagnetic waves without using any physical medium. Therefore
it is also known as wireless transmission.
In unguided media, air is the media through which the electromagnetic energy can flow easily. This
type of communication is often referred to as wireless communication.
1. Radio Waves
2. Microwaves
3. Infrared
• Unguided signals can travel from the source to destination in several ways: Ground wave
propagation, Sky wave propagation, and Space wave or line-of-sight(LOS) propagation, as
shown in Figure
Ground propagation:
• Ground wave propagation is a type of radio propagation which is also known as a surface wave.
• These waves propagate over the earth’s surface in low and medium frequencies.
• Ground waves follow the curvature of the earth and are mainly used for communication over relatively short distances.
Sky propagation:
• In sky wave propagation, radio waves are radiated upward into the ionosphere and reflected back to earth, which allows communication over long distances.
Radio waves:
• Radio waves are electromagnetic waves that are transmitted in all directions of free space.
• Radio waves are omnidirectional, i.e., the signals are propagated in all directions.
• The frequency range of radio waves is from 3 kHz to 1 GHz.
• In the case of radio waves, the sending and receiving antennas do not need to be aligned, i.e., the wave sent by the sending antenna can be received by any receiving antenna.
• Radio waves are useful for multicasting, where there is one sender and many receivers.
• FM radio, television, and cordless phones are examples of radio wave applications.
Microwaves:
Example : Cellular phone, Satellite networks and in Wireless LANs (WiFi), GPS
Characteristics of Microwave:
• Frequency range: The frequency range of terrestrial microwave is from 4-6 GHz to 21-23 GHz.
• Bandwidth: It supports the bandwidth from 1 to 10 Mbps.
• Short distance: It is inexpensive for short distance.
• Long distance: It is expensive as it requires a higher tower for a longer distance.
• Attenuation: Attenuation means loss of signal. It is affected by environmental conditions and
antenna size.
Advantages of Microwave:
Infrared
• An infrared transmission is a wireless technology used for communication over short ranges.
• The frequency of infrared is in the range of 300 GHz to 400 THz.
• It is used for short-range communication such as data transfer between two cell phones, TV remote operation, or data transfer between a computer and a cell phone that are in the same closed area.
Characteristics of Infrared:
• It supports high bandwidth, and hence the data rate will be very high.
• Infrared waves cannot penetrate walls. Therefore, infrared communication in one room cannot be interfered with from nearby rooms.
• An infrared communication provides better security with minimum interference.
• Infrared communication is unreliable outside the building because the sun rays will interfere
with the infrared waves.
Switching:
• When a user accesses the internet or another computer network outside their immediate location, messages
are sent through the network of transmission media. This technique of transferring the information from
one computer network to another network is known as switching.
• Switching in a computer network is achieved by using switches. A switch is a small hardware device that is used to join multiple computers together within one local area network (LAN).
• Network switches operate at layer 2 (Data link layer) in the OSI model.
• Switches are used to forward the packets based on MAC addresses.
• A switch is used to transfer the data only to the device that has been addressed. It verifies the destination address to route the packet appropriately (see the sketch below).
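As a rough illustration of this forwarding idea (a toy model, not an actual switch implementation), the Python sketch below keeps a MAC-address table learned from source addresses and forwards a frame only out of the port where the destination was last seen, flooding to all other ports when the destination is unknown:

```python
mac_table = {}   # MAC address -> port number (learned dynamically)

def handle_frame(src_mac, dst_mac, in_port, ports):
    # Learn: remember which port the source address was seen on.
    mac_table[src_mac] = in_port
    # Forward: send only to the known port, otherwise flood.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    return [p for p in ports if p != in_port]

ports = [1, 2, 3, 4]
print(handle_frame("AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB", 1, ports))  # flood: [2, 3, 4]
print(handle_frame("BB:BB:BB:BB:BB:BB", "AA:AA:AA:AA:AA:AA", 3, ports))  # learned: [1]
```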
Advantages of Switching:
Disadvantages of Switching:
Switching techniques
In large networks, there can be multiple paths from sender to receiver. The switching technique will
decide the best route for data transmission.
Switching technique is used to connect the systems for making one-to-one communication.
1. Circuit Switching
• Circuit switching is a switching technique that establishes a dedicated path between sender and receiver.
• In the circuit switching technique, once the connection is established, the dedicated path remains in existence until the connection is terminated.
• Circuit switching in a network operates in a similar way to the telephone network.
• Circuit switching is used in the public telephone network. It is used for voice transmission.
• Data can be transferred only at a fixed rate in circuit switching technology.
Communication through circuit switching has three phases:
• Connection setup
• Data transfer
• Connection teardown
• The Connection setup phase: creating dedicated channels between the switches.
• Data Transfer Phase: After the establishment of the dedicated circuit (channels), the two parties
can transfer data.
• Connection teardown phase: When one of the parties needs to disconnect, a signal is sent to each switch to release the resources.
2. Message Switching:
• In message switching, the whole message is treated as a data unit and is transferred from node to node; each intermediate node stores the complete message and then forwards it (store and forward).
Advantages of Message Switching:
• Data channels are shared among the communicating devices, which improves the efficiency of using the available bandwidth.
• Traffic congestion can be reduced because the message is temporarily stored in the nodes.
• Message priority can be used to manage the network.
• The size of the message which is sent over the network can be varied. Therefore, it supports
the data of unlimited size.
Disadvantages of Message Switching:
• The message switches must be equipped with sufficient storage to enable them to store the messages until they are forwarded.
• The Long delay can occur due to the storing and forwarding facility provided by the message
switching technique.
3. Packet Switching:
• Packet switching is a switching technique in which the message is sent in one go, but it is divided into smaller pieces that are sent individually.
• The message splits into smaller pieces known as packets and packets are given a unique number
to identify their order at the receiving end.
• Every packet contains some information in its headers such as source address, destination address
and sequence number.
• Packets travel across the network, taking the shortest possible path.
• All the packets are reassembled at the receiving end in the correct order (see the sketch below).
• If any packet is missing or corrupted, a message is sent asking the sender to resend it.
• If all the packets arrive correctly, an acknowledgment message is sent.
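As a rough, protocol-independent illustration of splitting a message into numbered packets and putting them back in order at the receiver, consider this Python sketch:

```python
def packetize(message: bytes, size: int):
    """Split a message into (sequence number, data) packets of at most `size` bytes."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the original message."""
    return b"".join(data for _, data in sorted(packets))

msg = b"packets may arrive out of order"
pkts = packetize(msg, 8)
pkts.reverse()                       # simulate out-of-order arrival
assert reassemble(pkts) == msg       # the receiver restores the original order
```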
Datagram approach vs. Virtual circuit approach:
• In the datagram approach, each node takes routing decisions to forward the packets; in the virtual circuit approach, the nodes do not take routing decisions (the route is fixed when the circuit is set up).
• In the datagram approach, congestion is less likely because the packets can travel in different directions; in the virtual circuit approach, congestion can occur when a node on the path is busy and does not allow other packets to pass through.
• The datagram approach is more flexible, as every packet is treated as an independent entity; the virtual circuit approach is not very flexible.
Advantages of Packet Switching:
• Reliable: If any node is busy, the packets can be rerouted. This ensures that the packet switching technique provides reliable communication.
• Efficient: Packet switching is an efficient technique. It does not require any path to be established prior to transmission, and many users can share the same communication channel simultaneously, so the available bandwidth is used very efficiently.
Comparison (continued):
• Sequence order: In circuit switching and message switching, the message arrives in sequence; in packet switching, packets may not arrive in sequence at the destination.
• Bandwidth use: Message switching and packet switching use the available bandwidth to its maximum extent.
Important Questions
1.a ) Explain about different types of network Topologies used in computer Networks
b) Explain about uses of Computer Networks
2. a) Explain in detail about layering scenario.
b) Explain the functionality of each layer in OSI reference model and list out differences between
TCP/IP and OSI model
3. a) Explain TCP/IP Protocol Suit with neat sketch
b) Explain the advantages and disadvantages of TCP/IP Reference Model
5. a) Write the advantages of optical fiber over twisted-pair and coaxial cables.
b) Explain about various transmission media in physical layer with a neat sketch.
COMPUTER NETWORKS UNIT-2 DATA LINK LAYER
Contents
1.DATA LINK LAYER
• Design issues
• Error detection& correction
• Elementary data link layer protocols
• Sliding window protocols
Introduction:
• In the OSI model, the data link layer is the 2nd layer from the bottom (the 6th from the top).
• The communication channels that connect adjacent nodes are known as links, and in order to
move the datagram from source to the destination, the datagram must be moved across an
individual link.
• The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
• The Data link layer protocol defines the format of the packet exchanged across the nodes as well as
the actions such as Error detection, retransmission, flow control, and random access.
• The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
• An important characteristic of a Data Link Layer is that datagram can be handled by different link
layer protocols on different links in a path. For example, the datagram is handled by Ethernet on
the first link, PPP on the second link.
The data link layer takes the packets it gets from the network layer and encapsulates them into frames
for transmission. Each frame contains a frame header, a payload field for holding the packet, and a
frame trailer
Parts of a Frame:
A frame has the following parts:
• Frame header: It contains the source and destination addresses of the frame.
• Payload field: It contains the packet (message) to be delivered.
• Trailer: It contains the error detection and error correction bits.
Framing & link access: Data link layer protocols encapsulate each network layer datagram within a link layer frame before transmission across the link. A frame consists of a data field, in which the network layer datagram is inserted, and a number of header fields. The protocol specifies the structure of the frame as well as a channel access protocol by which frames are to be transmitted over the link.
Reliable delivery: The data link layer can provide a reliable delivery service, i.e., transmit the network layer datagram without error. A reliable delivery service is accomplished with retransmissions and acknowledgements. A data link layer mainly provides reliable delivery over links with high error rates (such as wireless links), where an error can be corrected locally, on the link at which it occurs, rather than forcing an end-to-end retransmission of the data.
Flow control: A receiving node can receive the frames at a faster rate than it can process the frame.
Without flow control, the receiver's buffer can overflow, and frames can get lost. To overcome this
problem, the data link layer uses the flow control to prevent the sending node on one side of the link
from overwhelming the receiving node on another side of the link.
Error detection: Errors can be introduced by signal attenuation and noise. Data Link Layer protocol
provides a mechanism to detect one or more errors. This is achieved by adding error detection bits in
the frame and then receiving node can perform an error check.
Error correction: Error correction is similar to error detection, except that the receiving node not only detects the errors but also determines where in the frame the errors have occurred.
Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the data at the same
time. In a Half-Duplex mode, only one node can transmit the data at the same time.
The data link layer can be designed to offer various services. The actual services offered vary from system to system; the three commonly provided possibilities are unacknowledged connectionless service, acknowledged connectionless service, and acknowledged connection-oriented service.
• Acknowledged connectionless service: When this service is offered, there are still no logical connections used, but each frame sent is individually acknowledged.
• In this way, the sender knows whether a frame has arrived correctly. If it has not arrived within
a specified time interval, it can be sent again. This service is useful over unreliable channels,
such as wireless systems.
• If individual frames are acknowledged and retransmitted, entire packets get through much
faster.
Acknowledged connection-oriented service: Here, the source and destination machines establish a connection before any data are transferred. Each frame sent over the connection is numbered, and the data link layer guarantees that each frame sent is indeed received.
Furthermore, it guarantees that each frame is received exactly once and that all frames are received in
the right order.
2. FRAMING
The usual approach is for the data link layer to break the bit stream up into discrete frames and
compute the checksum for each frame (framing).
When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum
is different from the one contained in the frame, the data link layer knows that an error has occurred
and takes steps to deal with it
• For example, by discarding the bad frame and possibly also sending back an error report.
Four framing methods are commonly used:
1. Character count.
2. Flag bytes with byte stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.
The first framing method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow and hence where the end of the frame is. This technique is shown in the figure: (a) four frames of sizes 5, 5, 8, and 8 characters, respectively, without errors; (b) the same stream with one error.
• The trouble with this algorithm is that the count can be garbled by a transmission error.
• For example, if the character count of 5 in the second frame of Fig. (b) becomes a 7, the
destination will get out of synchronization and will be unable to locate the start of the next frame.
Even if the checksum is incorrect, so the destination knows that the frame is bad, it still has no way of telling where the next frame starts.
• Sending a frame back to the source asking for a retransmission does not help either, since the destination does not know how many characters to skip over to get to the start of the retransmission. For this reason, the character count method is rarely used anymore (a small sketch of the idea follows).
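A minimal Python sketch of character-count framing, illustrating why a corrupted count is so damaging (the length field here counts itself, an assumption made for simplicity):

```python
def split_frames(stream: bytes):
    """Character-count framing: the first byte of each frame is its total length."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                  # length field (counts itself too)
        frames.append(stream[i:i + count])
        i += count                         # a garbled count desynchronizes everything after it
    return frames

# Two frames of 5 bytes each: [5, 1, 2, 3, 4] and [5, 6, 7, 8, 9]
stream = bytes([5, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(split_frames(stream))
```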
2. Flag bytes with byte stuffing:
• In the past, the starting and ending bytes were different, but in recent years most protocols have used the same byte, called a flag byte, as both the starting and ending delimiter, shown in the figure below as FLAG.
• In this way, if the receiver ever loses synchronization, it can just search for the flag byte to find
the end of the current frame. Two consecutive flag bytes indicate the end of one frame and
start of the next one.
• Each frame starts and ends with a FLAG byte. Thus adjacent frames are separated by two flag
bytes.
• A serious problem occurs with this method is when binary data is transmitted, It is possible that
FLAG is actually a part of the data.
• Solution: At the sender, an escape byte (ESC) character is inserted just before any FLAG byte present in the data. The data link layer at the receiver end removes the ESC from the data before sending it to the network layer. This technique is called byte stuffing or character stuffing.
• Thus, a framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it.
• Now if an ESC is present in the data then an extra ESC is inserted before it in the data. This
extra ESC is removed at the receiver.
• The major disadvantage of this framing method is that it is closely tied to the use of 8-bit characters (a sketch of byte stuffing follows).
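A minimal sketch of byte stuffing in Python; the FLAG and ESC byte values are illustrative assumptions, not taken from any particular protocol:

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative delimiter and escape values

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray([FLAG])          # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # escape any flag/escape byte inside the data
        out.append(b)
    out.append(FLAG)                 # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                   # drop the escape, keep the next byte as data
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(data)) == data
```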
3. Starting and ending flags, with bit stuffing:
• Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream.
• This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.
• When the receiver sees five consecutive incoming 1 bits followed by a 0 bit, it automatically de-stuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing (a sketch follows).
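A minimal sketch of bit stuffing and de-stuffing, using strings of '0'/'1' characters for readability:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")          # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                   # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0111111101111101"
assert bit_unstuff(bit_stuff(data)) == data
```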
3.Error Detection:
When data is transmitted from one device to another device, the system does not guarantee whether the
data received by the device is identical to the data transmitted by another device. An Error is a
situation when the message received at the receiver end is not identical to the message transmitted.
Types of Errors:
Single-bit error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
In the figure, the transmitted message is corrupted by a single-bit error, i.e., a 0 bit is changed to 1.
Single-bit errors are less likely in serial data transmission; they mainly occur in parallel data transmission.
Burst error:
An error in which two or more bits are changed from 0 to 1 or from 1 to 0 is known as a burst error. The length of a burst error is measured from the first corrupted bit to the last corrupted bit.
The duration of the noise causing a burst error is longer than that causing a single-bit error.
Burst errors are most likely to occur in serial data transmission.
The number of affected bits depends on the duration of the noise and the data rate.
2.Checksum Checker:
A checksum is verified at the receiving side. The receiver subdivides the incoming data into equal
segments of n bits each; all these segments are added together, and the sum is complemented.
If the complement of the sum is zero, the data is accepted; otherwise the data is rejected.
Example
• Suppose that the sender wants to send 4 frames each of 8 bits, where the frames are 11001100,
10101010, 11110000 and 11000011.
• The sender adds the bits using 1s complement arithmetic. While adding two numbers using 1s
complement arithmetic, if there is a carry over, it is added to the sum.
• After adding all the 4 frames, the sender complements the sum to get the checksum, 11010011,
and sends it along with the data frames.
• The receiver performs 1s complement arithmetic sum of all the frames including the checksum.
• The result is complemented and found to be 0. Hence, the receiver assumes that no error has
occurred.
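A short sketch that reproduces the 1s-complement checksum computation from the example above (8-bit words, wrap-around carry, final complement).

```python
def ones_complement_sum(words, bits=8):
    """Add the words, wrapping any carry out of the top bit back into the sum."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        while total > mask:
            total = (total & mask) + (total >> bits)
    return total

def checksum(words, bits=8):
    """Complement of the 1s-complement sum."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)

frames = [0b11001100, 0b10101010, 0b11110000, 0b11000011]
cks = checksum(frames)                       # sender side
print(format(cks, '08b'))                    # 11010011, as in the example
# receiver side: sum of frames plus checksum, complemented, must be 0
assert checksum(frames + [cks]) == 0
```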
• In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the
number of bits in a predetermined divisor (the divisor is n+1 bits long).
• Secondly, the newly extended data is divided by the divisor using a process known as binary
(modulo-2) division. The remainder generated from this division is known as the CRC remainder.
• Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This newly
generated unit is sent to the receiver.
• The receiver receives the data followed by the CRC remainder. The receiver treats this whole unit
as a single unit and divides it by the same divisor that was used to find the CRC remainder.
If the result of this division is zero, the unit has no error and the data is accepted.
If the result of this division is not zero, the data contains an error and is therefore discarded.
CRC Generator:
• A CRC generator uses modulo-2 division. Firstly, three zeros are appended at the end of the data,
because the divisor is 4 bits long and the number of 0s to be appended is always one less than the
length of the divisor.
• The data (here 11100) with the appended zeros becomes 11100000, and this string is divided by the
divisor 1001.
• The remainder generated from the binary division is the CRC remainder. Here the CRC remainder
is 111.
• The CRC remainder replaces the appended string of 0s at the end of the data unit, and the final
string, 11100111, is sent across the network.
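A short sketch of CRC generation and checking by modulo-2 (XOR) division, reproducing the example above (data 11100, divisor 1001, remainder 111).

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 long division; returns the remainder (len(divisor)-1 bits)."""
    bits = list(dividend)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == '1':                          # only subtract when leading bit is 1
            for j, dbit in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(dbit))
    return ''.join(bits[-(len(divisor) - 1):])

data, divisor = "11100", "1001"
rem = mod2_div(data + "000", divisor)               # append 3 zeros, then divide
print(rem)                                          # 111
codeword = data + rem                               # 11100111 is transmitted
assert mod2_div(codeword, divisor) == "000"         # receiver: zero remainder -> accept
```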
CRC Checker:
• In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
Error Correction:
Error Correction codes are used to detect and correct the errors when data is transmitted from the
sender to the receiver.
• Backward error correction: Once the error is discovered, the receiver requests the sender
to retransmit the entire data unit.
• Forward error correction: In this case, the receiver uses the error-correcting code which
automatically corrects the errors.
A single additional bit can detect an error, but cannot correct it.
To correct an error, one has to know its exact position. For example, to correct a single-bit error in a
7-bit unit, the error-correction code must determine which one of the seven bits is in error.
To achieve this, we have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the number of data bits. The number of
redundant bits r can be calculated using the relation:
2^r >= d + r + 1
The value of r is the smallest integer that satisfies this relation. For example, if the value of d is 4,
then the smallest value of r that satisfies the relation is 3 (2^3 = 8 >= 4 + 3 + 1).
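A tiny sketch, in Python, of choosing r as the smallest value satisfying the relation above.

```python
def redundant_bits(d: int) -> int:
    """Smallest r with 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # 3, matching the example (2**3 = 8 >= 4 + 3 + 1)
print(redundant_bits(7))   # 4
```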
To determine the position of the bit in error, a technique developed by R. W. Hamming, known as the
Hamming code, can be used. It can be applied to a data unit of any length and uses the relationship
between data bits and redundant bits described above.
Hamming Code:
Parity bit: A bit appended to the original binary data so that the total number of 1s becomes even
or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the value of the parity bit
is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is 1.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of parity bit is 1.
If the total number of 1s is odd, then the value of parity bit is 0.
• The d data bits are combined with the r redundant bits to form a (d+r)-bit unit.
• The location of each of the (d+r) digits is assigned a decimal position value.
• The r redundant bits are placed at the positions that are powers of 2, i.e., positions 1, 2, 4, ..., 2^(r-1).
• At the receiving end, the parity bits are recalculated. The decimal value of the parity bits
determines the position of an error.
The number of redundant bits is 3. The three bits are represented by r1, r2, and r4. The positions of the
redundant bits correspond to powers of 2; therefore, their positions are 1, 2, and 4 (2^0, 2^1, 2^2).
The r1 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the first position.
We observe from the above figure that the bit positions that include 1 in the first position are 1, 3, 5, 7.
Now, we perform the even-parity check at these bit positions. The total number of 1 at these bit
positions corresponding to r1 is even, therefore, the value of the r1 bit is 0.
Determining r2 bit
The r2 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the second position.
We observe from the above figure that the bit positions that include 1 in the second position are 2, 3, 6,
7. Now, we perform the even-parity check at these bit positions. The total number of 1 at these bit
positions corresponding to r2 is odd; therefore, the value of the r2 bit is 1.
Determining r4 bit
The r4 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the third position.
We observe from the above figure that the bit positions that include 1 in the third position are 4, 5, 6, 7.
Now, we perform the even-parity check at these bit positions. The total number of 1 at these bit
positions corresponding to r4 is even, therefore, the value of the r4 bit is 0.
R1 bit:
We observe from the above figure that the binary representation of r1 is 1100. Now, we perform the
even-parity check, the total number of 1s appearing in the r1 bit is an even number. Therefore, the
value of r1 is 0.
R2 bit:
The bit positions of r2 bit are 2,3,6,7.
We observe from the above figure that the binary representation of r2 is 1001. Now, we perform the
even-parity check, the total number of 1s appearing in the r2 bit is an even number. Therefore, the
value of r2 is 0.
R4 bit:
We observe from the above figure that the binary representation of r4 is 1011. Now, we perform the
even-parity check, the total number of 1s appearing in the r4 bit is an odd number. Therefore, the value
of r4 is 1.
The binary representation of the redundant bits, i.e., r4r2r1, is 100, and its corresponding decimal
value is 4. Therefore, the error is in the 4th bit position. That bit must be changed from 1 to 0 to
correct the error.
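A minimal sketch of a (7,4) Hamming code with even parity, following the position scheme above (parity bits at positions 1, 2, 4; data bits at positions 3, 5, 6, 7). The data word used here is arbitrary.

```python
def hamming_encode(d3, d5, d6, d7):
    """Return the 7-bit codeword as a dict indexed by bit position 1..7."""
    r1 = d3 ^ d5 ^ d7          # covers positions with a 1 in bit 0: 1, 3, 5, 7
    r2 = d3 ^ d6 ^ d7          # covers positions with a 1 in bit 1: 2, 3, 6, 7
    r4 = d5 ^ d6 ^ d7          # covers positions with a 1 in bit 2: 4, 5, 6, 7
    return {1: r1, 2: r2, 3: d3, 4: r4, 5: d5, 6: d6, 7: d7}

def hamming_check(code):
    """Recompute the parities; the value r4 r2 r1 gives the error position (0 = none)."""
    c1 = code[1] ^ code[3] ^ code[5] ^ code[7]
    c2 = code[2] ^ code[3] ^ code[6] ^ code[7]
    c4 = code[4] ^ code[5] ^ code[6] ^ code[7]
    return c4 * 4 + c2 * 2 + c1

word = hamming_encode(1, 0, 1, 1)
assert hamming_check(word) == 0
word[4] ^= 1                         # flip bit 4 in transit
print(hamming_check(word))           # 4 -> the corrupted position
```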
• This (the simplest, noiseless-channel protocol) is an unrealistic protocol, because it handles neither flow control nor error correction.
• The problem here is how to prevent the sender from flooding the receiver.
• Stop – and – Wait protocol is for noiseless channel too. It provides unidirectional data
transmission without any error control facilities. However, it provides for flow control so that a
fast sender does not drown a slow receiver
• The receiver sends an acknowledgement frame back to the sender telling it that the last
received frame has been processed and passed to the host; permission to send the next frame is
granted.
• The sender, after having sent a frame, must wait for the acknowledge frame from the receiver
before sending another frame.
Drawbacks:
Stop & Wait ARQ is essentially a sliding window protocol (with window size 1) for flow and error
control, and it overcomes the limitations of Stop & Wait; we can say that it is the improved or
modified version of the Stop & Wait protocol.
The working of Stop & Wait ARQ is almost the same as Stop & Wait; the only difference is that it
includes some additional components:
• When the frame arrives at the receiver site, it is checked and if it is corrupted, it is silently
discarded.
• Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was
no way to identify a frame.
• When the receiver receives a data frame that is out of order, this means that frames were lost or
duplicated. The received frame could be the correct one, a duplicate, or a frame out of order; the
solution is to number the frames.
• The lost frames need to be resent in this protocol. If the receiver does not respond when there is an
error, how can the sender know which frame to resend?
• To remedy this problem, the sender keeps a copy of the sent frame. At the same time, it starts a
timer. If the timer expires and there is no ACK for the sent frame, the frame is resent, the copy is
held, and the timer is restarted.
• Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and
retransmitting of the frame when the timer expires.
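A minimal sketch of the Stop-and-Wait ARQ sender loop. The send_frame and wait_for_ack callables are assumed stand-ins for the real channel and timer, and the toy usage below runs them over a perfect channel.

```python
def stop_and_wait_send(frames, send_frame, wait_for_ack):
    """Alternating-bit sender: retransmit until the expected ACK arrives."""
    seq = 0
    for payload in frames:
        while True:
            send_frame(seq, payload)         # the copy is kept by staying in this loop
            ack = wait_for_ack()             # returns None if the timer expires
            if ack == 1 - seq:               # receiver asks for the next frame
                break
            # timeout, damaged frame, or lost ACK: fall through and retransmit
        seq = 1 - seq                        # alternate sequence numbers 0 and 1

# toy usage over a perfect channel
log = []
stop_and_wait_send(["a", "b"],
                   send_frame=lambda s, p: log.append((s, p)),
                   wait_for_ack=lambda: 1 - log[-1][0])
print(log)    # [(0, 'a'), (1, 'b')]
```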
Operation:
The sender transmits the frame, when frame arrives at the receiver it checks for damage and
acknowledges to the sender accordingly. While transmitting a frame there can be 4 situations.
1. Normal operation
2. Lost or damaged frame
3. Lost acknowledgement
4. Delayed acknowledgement
a) Normal operation:
In normal operation the sender sends frame 0 and waits for acknowledgement ACK 1. After receiving
ACK 1, the sender sends the next frame, frame 1, and waits for its acknowledgement ACK 0. This
operation is repeated, as shown in the figure.
c) Lost acknowledgement:
When an acknowledgement is lost, the sender does not know whether the frame is received by
receiver. After the timer expires, the sender re-transmits the same frame. On the other hand, receiver
has already received this frame earlier hence the second copy of the frame is discarded. Fig. shows lost
ACK.
d) Delayed acknowledgement:
Suppose the sender sends the data and it has also been received by the receiver. The receiver then
sends the acknowledgment but the acknowledgment is received after the timeout period on the sender's
side. As the acknowledgment is received late, so acknowledgment can be wrongly considered as the
acknowledgment of some other data packet.
Stop-and-Wait vs Stop-and-Wait ARQ:
• Stop-and-Wait assumes that the communication channel is perfect and noise free; Stop-and-Wait ARQ
assumes that the channel is imperfect and noisy.
• In Stop-and-Wait, a data packet sent by the sender can never get corrupted; in Stop-and-Wait ARQ,
a data packet sent by the sender may get corrupted.
• Stop-and-Wait has no concept of a time-out timer; in Stop-and-Wait ARQ, the sender starts a
time-out timer after sending each data packet.
Limitation of Stop and Wait ARQ: The major limitation of Stop and Wait ARQ is its very low
efficiency. To increase efficiency, protocols such as Go-Back-N and Selective Repeat are used.
• Sliding window protocol allows the sender to send multiple frames before needing the
acknowledgements.
• It is more efficient.
Implementations:-
Various implementations of sliding window protocol are-
1. Go back N
2. Selective Repeat
In the stop-and-wait protocol, the sender can send only one frame at a time and cannot send the next
frame without receiving the acknowledgment of the previously sent frame, whereas, in the case of
sliding window protocol, the multiple frames can be sent at a time.
Go-back N ARQ (Automatic Repeat Request) protocol is a practical implementation of the sliding
window protocol. In Go-Back-N ARQ; N is the sender's window size. Suppose we say that Go-Back-3,
which means that the three frames can be sent at a time before expecting the acknowledgment from the
receiver.
It uses the principle of protocol pipelining in which the multiple frames can be sent before receiving
the acknowledgment of the first frame. If we have five frames and the concept is Go-Back-3, which
means that the three frames can be sent, i.e., frame no 1, frame no 2, frame no 3 can be sent before
expecting the acknowledgment of frame no 1.
In Go-Back-N ARQ, the frames are numbered sequentially as Go-Back-N ARQ sends the multiple
frames at a time that requires the numbering approach to distinguish the frame from another frame, and
these numbers are known as the sequential numbers.
The number of frames that can be sent at a time totally depends on the size of the sender's window. So,
we can say that 'N' is the number of frames that can be sent at a time before receiving the
acknowledgment from the receiver.
For example, if the number of bits in the sequence number is 2, the sequence numbers cycle through 00, 01, 10, 11.
Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be sent. These
frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of the frames.
Mainly, the sequence number is decided by the sender's window size. But, for the better understanding,
we took the running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's consider the window size as 4,
which means that the four frames can be sent at a time before expecting the acknowledgment of the
first frame.
Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0,1,2,3, and now the
sender is expected to receive the acknowledgment of the 0th frame.
Let's assume that the receiver has sent the acknowledgment for the 0 frame, and the receiver has
successfully received it.
The sender will then send the next frame, i.e., 4, and the window slides containing four frames
(1,2,3,4).
The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will slide
having four frames (2,3,4,5).
Now, let's assume that the receiver is not acknowledging frame no 2: either the frame is lost or the
acknowledgment is lost. Instead of sending frame no 6, the sender goes back to 2, which is the first
frame of the current window, and retransmits all the frames in the current window, i.e., 2, 3, 4, 5.
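A minimal sketch of the Go-Back-N sender's sliding window. The send callback and the stream of cumulative ACKs (with None modelling a timeout) are invented for illustration.

```python
def go_back_n(frames, N, send, acks):
    """Go-Back-N sender: 'acks' yields cumulative ACKs (next frame expected);
    None stands in for a timeout."""
    base, next_seq = 0, 0
    while base < len(frames):
        # send every frame that fits in the window [base, base+N)
        while next_seq < base + N and next_seq < len(frames):
            send(next_seq, frames[next_seq])
            next_seq += 1
        ack = next(acks)
        if ack is not None and ack > base:
            base = ack                 # slide the window forward
        else:
            next_seq = base            # go back: resend the whole outstanding window

sent = []
go_back_n(["f0", "f1", "f2", "f3", "f4"], 3,
          send=lambda seq, payload: sent.append(seq),
          acks=iter([1, None, 3, 4, 4, 5]))
print(sent)   # [0, 1, 2, 3, 1, 2, 3, 4, 4] -> frames 1-3 resent after the timeout
```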
• In Go-Back-N, N determines the sender's window size, and the size of the receiver's window is
always 1.
• It does not consider the corrupted frames and simply discards them.
• It does not accept the frames which are out of order and discards them.
• If the sender does not receive the acknowledgment, it leads to the retransmission of all the current
window frames.
The example of Go-Back-N ARQ is shown below in the figure.
Comparison of Stop-and-Wait ARQ, Go-Back-N and Selective Repeat:

Efficiency:
Stop-and-Wait ARQ: 1/(1+2a); Go-Back-N: N/(1+2a); Selective Repeat: N/(1+2a).
Go-Back-N and Selective Repeat give better efficiency than Stop-and-Wait ARQ.

Window Size:
Stop-and-Wait ARQ: sender window size = 1, receiver window size = 1; Go-Back-N: sender window
size = N, receiver window size = 1; Selective Repeat: sender window size = N, receiver window size = N.
The buffer requirement in Selective Repeat is very large. If the system does not have lots of memory,
it is better to choose Go-Back-N.

Bandwidth Requirement:
Stop-and-Wait ARQ: low; Go-Back-N: high, because even if a single packet is lost the entire window
has to be retransmitted, so if the error rate is high it wastes a lot of bandwidth; Selective Repeat: moderate.
Selective Repeat is better than Go-Back-N in terms of bandwidth requirement.

CPU Usage:
Stop-and-Wait ARQ: low; Go-Back-N: moderate; Selective Repeat: high, due to the searching and
sorting required at the sender and receiver side.
Go-Back-N is better than Selective Repeat in terms of CPU usage.

Level of Difficulty in Implementation:
Stop-and-Wait ARQ: low; Go-Back-N: moderate; Selective Repeat: complex, as it requires extra
logic and sorting and searching.
Go-Back-N is better than Selective Repeat in terms of implementation difficulty.

Acknowledgements:
Stop-and-Wait ARQ: independent acknowledgement for each packet; Go-Back-N: cumulative
acknowledgements (but may use independent acknowledgements as well); Selective Repeat:
independent acknowledgement for each packet.
Sending cumulative acknowledgements reduces the traffic in the network, but if one is lost, then the
ACKs for all the corresponding packets are lost.

Type of Transmission:
Stop-and-Wait ARQ: half duplex; Go-Back-N: full duplex; Selective Repeat: full duplex.
Go-Back-N and Selective Repeat are better in terms of channel usage.
If the receiver receives a corrupt frame, it does not simply discard it; it sends a negative
acknowledgment to the sender. The sender retransmits that frame as soon as it receives the negative
acknowledgment, without waiting for any time-out. The design of the Selective Repeat ARQ protocol
is shown below.
The example of the Selective Repeat ARQ protocol is shown below in the figure.
Go-Back-N vs Selective Repeat:
• In Go-Back-N, if a frame is corrupted or lost, all subsequent frames have to be sent again; in
Selective Repeat, only the corrupted or lost frame is sent again.
• Go-Back-N wastes a lot of bandwidth if the error rate is high; Selective Repeat loses much less bandwidth.
• Go-Back-N does not require sorting; in Selective Repeat, sorting is done to get the frames in the
correct order.
Multiple Access Protocols: Multiple access protocols are a set of protocols operating in the
Medium Access Control sublayer (MAC sublayer) of the Open Systems Interconnection (OSI) model.
These protocols allow a number of nodes or users to access a shared network channel. Several data
streams originating from several nodes are transferred through the multi-point transmission channel.
1. Random Access Protocols: In these protocols, all stations have equal priority to send data over the
channel. In a random access protocol, no station depends on or is controlled by another station.
Depending on the channel's state (idle or busy), each station transmits its data frame. However, if
more than one station sends data over the channel at the same time, there may be a collision or data
conflict. Due to the collision, the data frames may be lost or changed and, hence, are not received
correctly by the receiver.
Given below are the protocols that lie under the category of Random Access protocol:
1. ALOHA
2. CSMA (Carrier sense multiple access)
3. CSMA/CD (Carrier sense multiple access with collision detection)
4. CSMA/CA (Carrier sense multiple access with collision avoidance)
3.Channelization Protocols:
Channelization is another method used for multiple accesses in which the available bandwidth of the
link is shared in the time, frequency, or through the code in between the different stations.
1.1 ALOHA: It was designed for wireless LANs (Local Area Networks) but can also be used on any
shared medium to transmit data. Using this method, any station can transmit data across the network
whenever a data frame is available for transmission.
ALOHA Rules:
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through multiple stations.
4. ALOHA relies on acknowledgments of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.
Pure ALOHA: Whenever data is available for sending over a channel at stations, we use Pure Aloha.
In pure Aloha, when each station transmits data to a channel without checking whether the channel is
idle or not, the chances of collision may occur, and the data frame can be lost. When any station
transmits a data frame onto the channel, pure ALOHA waits for the receiver's acknowledgment. If the
acknowledgment does not arrive from the receiver within the specified time, the station waits for a
random amount of time, called the back-off time (Tb), and assumes the frame has been lost or
destroyed. It then retransmits the frame until all the data are successfully delivered to the receiver.
As we can see in the figure above, there are four stations for accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the same
time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end. At the
same time, other frames are lost or destroyed. Whenever two frames fall on a shared channel
simultaneously, collisions can occur and both will suffer damage. Even if the new frame's first bit
enters the channel just before the last bit of a nearly finished frame, both frames are completely
destroyed, and both stations must retransmit their data frames.
Slotted ALOHA:
Slotted ALOHA was designed to improve on the efficiency of pure ALOHA, because pure ALOHA
has a very high probability of frame collision. In slotted ALOHA, the shared channel is divided into
fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can
only be sent at the beginning of a slot, and only one frame may be sent in each slot. If a station misses
the beginning of a slot, it must wait until the beginning of the next slot. A possibility of collision still
remains if two or more stations try to send a frame at the beginning of the same time slot.
CSMA (Carrier Sense Multiple Access) requires that each station first listen to the medium (sense the carrier) before sending. In other words, CSMA is based on the principle "sense before transmit" or "listen before talk."
What should a station do if the channel is busy? What should a station do if the channel is idle? Four
methods have been devised to answer these questions:
• 1-persistent method
• non-persistent method
• P-persistent method
• O-persistent method
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the
channel is idle, it immediately sends the data. Otherwise it keeps monitoring the channel and
transmits the frame unconditionally as soon as the channel becomes idle.
Non-Persistent: In this mode, each node senses the channel before transmitting; if the channel is
idle, it immediately sends the data. Otherwise, the station waits for a random time (it does not sense
continuously), senses again, and transmits when the channel is found idle.
P-Persistent: This is a combination of the 1-persistent and non-persistent modes. Each node senses
the channel and, if the channel is idle, it sends a frame with probability p. With probability q = 1 - p,
it defers to the next time slot and repeats the process.
O-Persistent: In this method, a transmission order (priority) is assigned to the stations in advance.
If the channel is idle, each station waits for its assigned turn before transmitting.
CSMA/CD is a carrier sense multiple access / collision detection network protocol for transmitting
data frames. It operates in the medium access control (MAC) layer. A station first senses the shared
channel before broadcasting a frame; if the channel is idle, it transmits the frame and monitors whether
the transmission was successful. If the frame is successfully received, the station can send the next
frame. If a collision is detected, the station sends a jam (stop) signal onto the shared channel to
terminate the data transmission, and then waits for a random time before sending the frame again.
The sender has to keep checking whether the transmission link/medium is idle. For this it continuously
senses transmissions from other nodes. The sender sends dummy data on the link; if it does not receive
any collision signal, this means the link is idle at the moment. If it senses that the carrier is free and
there are no collisions, it sends the data; otherwise it refrains from sending.
The sender then transmits its data on the link. CSMA/CD does not use an acknowledgement system.
• It checks for the successful and unsuccessful transmissions through collision signals. During
transmission, if collision signal is received by the node, transmission is stopped.
• The station then transmits a jam signal onto the link and waits for random time interval before
it resends the frame. After some random time, it again attempts to transfer the data and repeats
above process.
Step 4: If no collision was detected in propagation, the sender completes its frame transmission and
resets the counters.
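A minimal sketch of the CSMA/CD transmit loop with truncated binary exponential backoff. The channel_idle, transmit, and collision_detected callables are assumed stand-ins for the physical layer, and the 51.2 µs slot time is the classic 10-Mbps Ethernet value.

```python
import random

def csma_cd_send(frame, channel_idle, transmit, collision_detected,
                 slot_time=51.2e-6, max_attempts=16):
    attempts = 0
    while attempts < max_attempts:
        while not channel_idle():
            pass                                   # 1-persistent carrier sensing
        transmit(frame)
        if not collision_detected():
            return True                            # success: frame sent, reset counters
        attempts += 1
        k = min(attempts, 10)                      # truncated binary exponential backoff
        wait_slots = random.randint(0, 2 ** k - 1)
        # a real adapter would now wait wait_slots * slot_time before retrying
    return False                                   # give up after too many collisions

# toy usage with an always-idle, collision-free channel
ok = csma_cd_send(b"data", channel_idle=lambda: True,
                  transmit=lambda f: None, collision_detected=lambda: False)
print(ok)   # True
```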
CSMA/CA is a carrier sense multiple access / collision avoidance network protocol for the
transmission of data frames. It works in the medium access control layer. When a station sends a data
frame onto the channel, it listens to the channel to check whether the transmission is clear. If the
station hears only a single signal (its own), the data frame has been successfully transmitted to the
receiver. But if it hears two signals (its own and one from another station), the frames have collided
on the shared channel. Thus, the sender detects a collision from the signal it receives back while
transmitting.
Following are the methods used in the CSMA/ CA to avoid the collision:
Interframe space: In this method, the station waits for the channel to become idle, and when it finds
the channel idle, it does not send the data immediately. Instead, it waits for a period of time called the
interframe space (IFS). The IFS duration is also often used to define the priority of a station.
Contention window: In the contention-window method, the total waiting time is divided into slots.
When the station/sender is ready to transmit the data frame, it chooses a random number of slots as its
wait time. If the channel is sensed busy again, the station does not restart the entire process; it only
pauses the timer and resumes it when the channel becomes idle, sending the data when the timer expires.
Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on the
shared channel if a positive acknowledgment is not received before its timer expires.
Initially all nodes (A, B ……. G, H) are permitted to compete for the channel. If a node is successful in
acquiring the channel, it transmits its frame. In case of collision, the nodes are divided into two groups
(A, B, C, D in one group and E, F, G, H in another group). Nodes belonging to only one of them are
permitted for competing. This process continues until successful transmission occurs.
STANDARD ETHERNET
The original Ethernet was created in 1976 at Xerox’s Palo Alto Research Center (PARC). Since then, it
has gone through four generations. We briefly discuss the Standard (or traditional) Ethernet in this section.
Ethernet is the most widely used LAN technology used today. Ethernet operates in the data link layer
and the physical layer. It is a family of networking technologies that are defined in the IEEE 802.2 and
802.3 standards. Ethernet supports data bandwidths of:
• 10 Mb/s
• 100 Mb/s
• 1000 Mb/s (1 Gb/s)
• 10,000 Mb/s (10 Gb/s)
• 40,000 Mb/s (40 Gb/s)
• 100,000 Mb/s (100 Gb/s)
MAC Sublayer
In Standard Ethernet, the MAC sub layer governs the operation of the access method. It also frames
data received from the upper layer and passes them to the physical layer.
Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol data
unit (PDU), upper-layer data, and the CRC.
Ethernet does not provide any mechanism for acknowledging received frames, making it what is
known as an unreliable medium. Acknowledgments must be implemented at the higher layers. The
format of the MAC frame is shown in Figure.
Preamble: Alerts the receiving system to the coming frame and enables it to synchronize its input
timing. The preamble is actually added at the physical layer and is not (formally) part of the frame.
Start frame delimiter (SFD): The second field (1 byte: 10101011) signals the beginning of the frame.
The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits
are 11 and alert the receiver that the next field is the destination address.
Destination address (DA): The DA field is 6 bytes and contains the physical address of the
destination station or stations to receive the packet.
Source address (SA): The SA field is also 6 bytes and contains the physical address of the sender of
the packet.
Length or type: The original Ethernet used this field as a type field to define the upper-layer protocol
carried in the frame; the IEEE standard uses it as a length field to define the number of bytes in the
data field. Both uses are common today.
Data: This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a
maximum of 1500 bytes.
CRC: The last field contains error detection information, in this case a CRC-32.
Frame Length
• Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame, as
shown in below Figure
Addressing:
• The Ethernet address is 6 bytes (48 bits), normally written in hexadecimal notation, with a
colon between the bytes.
Unicast, Multicast, and Broadcast Addresses: A source address is always a unicast address-the
frame comes from only one station. The destination address, however, can be unicast, multicast, or
broadcast. Below Figure shows how to distinguish a unicast address from a multicast address. If the
least significant bit of the first byte in a destination address is 0, the address is unicast; otherwise, it is
multicast. The broadcast destination address is a special case of the multicast address in which all bits
are 1s.
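A small sketch that classifies a destination address written in the colon-separated hexadecimal notation described above; the sample addresses are illustrative.

```python
def classify(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"                 # all 48 bits are 1s
    if octets[0] & 0x01:                   # least significant bit of the first byte
        return "multicast"
    return "unicast"

print(classify("4A:30:10:21:10:1A"))       # unicast   (0x4A -> LSB is 0)
print(classify("47:20:1B:2E:08:EE"))       # multicast (0x47 -> LSB is 1)
print(classify("FF:FF:FF:FF:FF:FF"))       # broadcast
```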
The Standard Ethernet defines several physical layer implementations; four of the most common, are
shown in Figure
10Base5: Thick Ethernet: The first implementation is called 10Base5, thick Ethernet, or Thicknet. The
nickname derives from the size of the cable, which is roughly the size of a garden hose and too stiff to
bend with your hands. 10Base5 was the first Ethernet specification to use a bus topology with an
external transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable.
The transceiver is responsible for transmitting, receiving, and detecting collisions. The transceiver is
connected to the station via a transceiver cable that provides separate paths for sending and receiving.
This means that collision can only happen in the coaxial cable. The maximum length of the coaxial
cable must not exceed 500 m, otherwise, there is excessive degradation of the signal. If a length of
more than 500 m is needed, up to five segments, each a maximum of 500-meter, can be connected
using repeaters.
10Base2: Thin Ethernet: The second implementation is called 10Base2, thin Ethernet, or Cheapernet.
10Base2 also uses a bus topology, but the cable is much thinner and more flexible. The cable can be
bent to pass very close to the stations. In this case, the transceiver is normally part of the network
interface card (NIC), which is installed inside the station.
10Base-T: Twisted-Pair Ethernet:
• It uses a physical star topology. The stations are connected to a hub via two pairs of twisted
cable, as shown in Figure.
• The maximum length of the twisted cable here is defined as 100 m, to minimize the effect of
attenuation in the twisted cable.
10Base-F: Fiber Ethernet: Although there are several types of optical fiber 10-Mbps Ethernet, the most common is called 10Base-F.
• 10Base-F uses a star topology to connect stations to a hub. The stations are connected to the
hub using two fiber-optic cables, as shown in Figure
FAST ETHERNET:
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fiber Channel. IEEE
created Fast Ethernet under the name 802.3u. Fast Ethernet is backward-compatible with Standard
Ethernet, but it can transmit data 10 times faster at a rate of 100 Mbps.
GIGABIT ETHERNET
Uses of bridges:
• A bridge is a network device that connects multiple LANs (local area networks) together to
form a larger LAN.
• The process of aggregating networks is called network bridging. A bridge connects the
different components so that they appear as parts of a single network.
• By joining multiple LANs, bridges help in multiplying the network capacity of a single LAN.
• Since they operate at data link layer, they transmit data as data frames. On receiving a data
frame, the bridge consults a database to decide whether to pass, transmit or discard the frame.
➢ If the frame's destination MAC (media access control) address is on the same network
segment as the source, the bridge filters (discards) the frame, since the destination has
already received it directly.
➢ If the frame's destination MAC address is on another connected segment, the bridge
forwards the frame toward it.
➢ Key features of a bridge are mentioned below:
Learning Bridges:
Bridge is a device that joins networks to create a much larger network. A learning bridge, also called
an adaptive bridge, "learns" which network addresses are on one side of the bridge and which are
on the other, so it knows how to forward the packets it receives.
A better solution to the static table is a dynamic table that maps addresses to ports automatically. To
make a table dynamic, we need a bridge that gradually learns from the frame movements. To do this,
the bridge inspects both the destination and the source addresses. The destination address is used for
the forwarding decision (table lookup); the source address is used for adding entries to the table and for
updating purposes. Let us elaborate on this process by using Figure
1. When station A sends a frame to station D, the bridge does not have an entry for either D or A. The
frame goes out from all three ports; the frame floods the network. However, by looking at the source
address, the bridge learns that station A must be located on the LAN connected to port 1. This means
that frames destined for A, in the future, must be sent out through port 1. The bridge adds this entry to
its table. The table has its first entry now.
2. When station E sends a frame to station A, the bridge has an entry for A, so it forwards the frame
only to port 1. There is no flooding. In addition, it uses the source address of the frame, E, to add a
second entry to the table.
3. When station B sends a frame to C, the bridge has no entry for C, so once again it floods the
network and adds one more entry to the table.
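A minimal sketch of the learning/forwarding logic described in steps 1-3 above. The port numbers and the forward/flood callbacks are illustrative assumptions (the text does not give E's port number, so port 3 is assumed here).

```python
table = {}   # maps a learned source MAC address -> the port it was seen on

def on_frame(src, dst, in_port, forward, flood):
    table[src] = in_port                     # learn from the source address
    out_port = table.get(dst)
    if out_port is None:
        flood(in_port)                       # unknown destination: flood all other ports
    elif out_port != in_port:
        forward(out_port)                    # known destination on another port
    # else: destination is on the same port; filter (drop) the frame

events = []
on_frame("A", "D", 1, forward=lambda p: events.append(("fwd", p)),
                      flood=lambda p: events.append(("flood", p)))
on_frame("E", "A", 3, forward=lambda p: events.append(("fwd", p)),
                      flood=lambda p: events.append(("flood", p)))
print(table)    # {'A': 1, 'E': 3}
print(events)   # [('flood', 1), ('fwd', 1)]
```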
Loop Problem: Transparent bridges work fine as long as there are no redundant bridges in the system.
Systems administrators, however, like to have redundant bridges (more than one bridge between a pair
of LANs) to make the system more reliable: if a bridge fails, another bridge takes over until the failed
one is repaired or replaced. Redundancy, however, creates loops in the system, which can cause frames
to circulate and be duplicated endlessly.
Solution of Loop Problem: To solve the looping problem, the IEEE specification requires that bridges
use the spanning tree algorithm to create a loop-free topology.
We have shown the physical system and its representation in graph theory. We have shown both LANs
and bridges as nodes. The connecting arcs show the connection of a LAN to a bridge and vice versa.
• To find the spanning tree, we need to assign a cost (metric) to each arc. The interpretation of
the cost is left up to the systems administrator.
• It may be the path with minimum hops (nodes), the path with minimum delay, or the path with
maximum bandwidth.
• If two ports have the same shortest value, the systems administrator just chooses one. We have
chosen the minimum hops.
• Every bridge has a built-in ID (normally the serial number, which is unique). Each bridge
broadcasts this ID so that all bridges know which one has the smallest ID. The bridge with the
smallest ID is selected as the root bridge (root of the tree). We assume that bridge B1 has the
smallest ID. It is, therefore, selected as the root bridge.
• The algorithm tries to find the shortest path (a path with the shortest cost) from the root bridge
to every other bridge or LAN. The shortest path can be found by examining the total cost from
the root bridge to the destination. Figure shows the shortest paths.
• The combination of the shortest paths creates the shortest tree, which is also shown in Figure.
• Based on the spanning tree, we mark the ports that are part of the spanning tree, the forwarding
ports, which forward a frame that the bridge receives. We also mark those ports that are not
part of the spanning tree, the blocking ports, which block the frames received by the bridge.
Figure 15.10 shows the physical systems of LANs with forwarding ports (solid lines) and
blocking ports (broken lines).
Note that there is only one single path from any LAN to any other LAN in the spanning tree system.
This means there is only one single path from one LAN to any other LAN. No loops are created. You
can prove to yourself that there is only one path from LAN 1 to LAN 2, LAN 3, or LAN 4. Similarly,
there is only one path from LAN 2 to LAN 1, LAN 3, and LAN 4. The same is true for LAN 3 and
LAN 4.
In this section, we divide connecting devices into five different categories based on the layer in which
they operate in a network, as shown in Figure 15.1.
1. Repeaters:
• A repeater operates at the physical layer. Its job is to regenerate the signal over the same
network before the signal becomes too weak or corrupted.
• An important point to be noted about repeaters is that they do not amplify the signal. When the
signal becomes weak, they copy the signal bit by bit and regenerate it at the original strength. It
is a 2-port device.
• A repeater receives a signal and, before it becomes too weak or corrupted, regenerates the
original bit pattern. The repeater then sends the refreshed signal.
• A repeater does not actually connect two LANs; it connects two segments of the same LAN.
The segments connected are still part of one single LAN. A repeater is not a device that can
connect two LANs of different protocols
• The repeater acts as a two-port node, but operates only in the physical layer. When it receives a
frame from any of the ports, it regenerates and forwards it to the other port. A repeater
forwards every frame; it has no filtering capability.
• A repeater connects different segments of a LAN
• A repeater forwards every bit it receives
• A repeater is a regenerator, not an amplifier
• It can be used to create a single extended LAN
2. Hubs
• A hub is basically a multiport repeater. A hub connects multiple wires coming from different
branches, for example, the connector in star topology which connects different stations.
• Hubs cannot filter data, so data packets are sent to all connected devices.
• Hub is a generic term, but commonly refers to a multiport repeater. It can be used to create
multiple levels of hierarchy of stations.
3. Bridge:
• A bridge is a repeater with the added functionality of filtering content by reading the MAC
addresses of the source and destination.
• It is also used for interconnecting two LANs working on the same protocol.
• It has a single input and a single output port, thus making it a 2-port device.
• A bridge operates in both the physical and the data link layer. As a physical layer device, it
regenerates the signal it receives. As a data link layer device, it can check the physical (MAC)
addresses (source and destination) contained in the frame.
4. Switch:
A switch is a multi-port bridge with a buffer and a design that can boost its efficiency (large
number of ports imply less traffic) and performance. Switch is data link layer device. Switch can
perform error checking before forwarding data that makes it very efficient as it does not forward
packets that have errors and forward good packets selectively to correct port only. In other words,
switch divides collision domain of hosts, but broadcast domain remains same.
• A switch is a device that connects other devices together. Multiple data cables are plugged into
a switch to enable communication between different networked devices. A switch is a data link
layer device.
A switch is essentially a fast bridge having additional sophistication that allows faster processing of
frames. Some of important functionalities are:
Cut-through: A switch forwards a frame immediately after receiving the destination address.
As a consequence, the switch forwards the frame without collision and error detection.
Collision-free: In this case, the switch forwards the frame after receiving 64 bytes, which
allows detection of collision. However, error detection is not possible because switch is yet to
receive the entire frame.
Fully buffered: In this case, the switch forwards the frame only after receiving the entire
frame. The switch can therefore detect both collisions and errors; only error-free frames are forwarded.
5. Routers: Router is a device like a switch that routes data packets based on their IP addresses.
Router is mainly a Network Layer device. Routers normally connect LANs and WANs together and
have a dynamically updating routing table based on which they make decisions on routing the data
packets.
6. Gateways:
Hub vs Switch vs Router:
• A hub works at the Physical Layer of the OSI model; a switch works at the Data Link Layer; a
router works at the Network Layer.
• A hub sends data in the form of binary bits; a switch sends data in the form of frames; a router
sends data in the form of packets.
• With a hub, only one device can send data at a time; with a switch or a router, multiple devices
can send data at the same time.
Important Questions:
1. Briefly explain the ALOHA, CSMA, CSMA/CD and CSMA/CA protocols and compare their
performance.
2. Explain about Bridges, learning bridges, Spanning tree bridges, Repeaters and Hubs.
3. (a) Define cyclic redundancy code. Discuss in detail about cyclic redundancy check of error
checking.
(b) Explain the CRC error detection technique using the generator polynomial x^4 + x^3 + 1 and data
11100011.
4. Explain the working of sliding window protocol and also discuss about the operation of 1-bit sliding
window protocol.
UNIT-3
NETWORK LAYER
Syllabus
Network Layer Design issues
store and forward packet switching
connection less and connection oriented networks
Routing algorithms
Optimality principle
Shortest path
Flooding
Distance Vector Routing
Count to Infinity Problem
Link State Routing
Path Vector Routing
Hierarchical Routing
Congestion control algorithms
IP Addresses
CIDR
Sub Netting
Super Netting
IPv4
Packet Fragmentation
IPv6 protocol
Transition from IPv4 to IPv6
ARP
RARP
Introduction:
What is packet?
• All data sent over the Internet is broken down into smaller chunks called "packets."
• A packet has two parts: the header, which contains the sender's and receiver's IP addresses, and
the body, which is the actual data being sent.
• The idea behind virtual circuits is to avoid having to choose a new route for every packet
sent.
• When a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers.
• When the connection is released, the virtual circuit is also terminated. With connection-
oriented service, each packet carries an identifier telling which virtual circuit it belongs to.
Routing algorithms:
In order to transfer the packets from source to the destination, the network layer must
determine the best route through which packets can be transmitted.
Whether the network layer provides datagram service or virtual circuit service, the main job of
the network layer is to provide the best route. The routing protocol performs this job.
The routing protocol is a routing algorithm that provides the best path from the source to the
destination. The best path is the path that has the "least-cost path" from source to the
destination.
Routing is the process of forwarding the packets from source to the destination but the best
route to send the packets is determined by the routing algorithm.
1. Non-Adaptive Algorithms –
These are the algorithms which do not change their routing decisions once they have been selected.
This is also known as static routing as route to be taken is computed in advance and downloaded to
routers when router is booted.
2. Adaptive Algorithms -
These are the algorithms which change their routing decisions whenever network topology or
traffic load changes. The changes in routing decisions are reflected in the topology as well as
traffic of the network.
• Flooding
• Hierarchical Routing
Now, suppose a better route from J to K is found, say along J-M-N-K. Consequently, we
also need to update the optimal route from I to K to I-G-J-M-N-K, since the previous route
ceases to be optimal in this situation. This new optimal path is shown by the orange lines in the
following figure.
Dijkstra’s Algorithm
An algorithm that is used for finding the shortest distance, or path, from starting node to target
node in a weighted graph is known as Dijkstra’s Algorithm.
Dijkstra's algorithm makes use of weights of the edges for finding the path that minimizes the
total distance (weight) among the source node and all other nodes. This algorithm is also known as
the single-source shortest path algorithm.
It is important to note that Dijkstra’s algorithm is only applicable when all weights are positive
because, during the execution, the weights of the edges are added to find the shortest path.
5. Examine all adjacent nodes and find the one with the smallest label; make it the working node.
6. Steps 4 and 5 are repeated until the destination node is reached.
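A minimal sketch of Dijkstra's algorithm following the steps above; the small weighted graph used here is made up for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from 'source' to every node of a weighted graph."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # tentative node with the smallest label
        if d > dist[u]:
            continue                        # stale heap entry, skip it
        for v, w in graph[u].items():       # examine all adjacent nodes
            if d + w < dist[v]:
                dist[v] = d + w             # relabel with the smaller distance
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 3, "D": 1},
         "C": {"A": 5, "B": 3, "D": 6}, "D": {"B": 1, "C": 6}}
print(dijkstra(graph, "A"))    # {'A': 0, 'B': 2, 'C': 5, 'D': 3}
```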
Example :
Each router maintains a Distance Vector table containing the distance between itself and ALL
possible destination nodes. Distances, based on a chosen metric, are computed using information
from the neighbors’ distance vectors.
Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related by the
Bellman-Ford equation:
dx(y) = min over all neighbours v of x of { c(x,v) + dv(y) }
After traveling from x to a neighbour v, if we then take the least-cost path from v to y, the total path
cost is c(x,v) + dv(y). The least cost from x to y is the minimum of c(x,v) + dv(y) taken over all
neighbours v.
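A minimal sketch of one Bellman-Ford update at a node x; the link costs and neighbour vectors below are toy values chosen for illustration.

```python
def dv_update(c, neighbour_vectors, destinations):
    """d_x(y) = min over neighbours v of c(x,v) + d_v(y)."""
    dx = {}
    for y in destinations:
        dx[y] = min(c[v] + neighbour_vectors[v][y] for v in c)
    return dx

# toy numbers: x has neighbours v1 (link cost 1) and v2 (link cost 4)
c = {"v1": 1, "v2": 4}
vectors = {"v1": {"y": 5, "z": 2}, "v2": {"y": 1, "z": 3}}
print(dv_update(c, vectors, ["y", "z"]))   # {'y': 5, 'z': 3}
```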
Step-01:
Each router prepares its routing table using its local knowledge. Each router knows:
• All the routers present in the network
• The distance to its neighboring routers
Step-02:
Each router exchanges its distance vector with its neighboring routers.
Each router prepares a new routing table using the distance vectors it has obtained from its
neighbors.
This step is repeated for (n-2) times if there are n routers in the network.
After this, routing tables converge / become stable.
Example2: Consider-
There is a network consisting of 4 routers.
The weights are mentioned on the edges.
Weights could be distances or costs or delays.
At Router A-
Destination  Distance  Next Hop
A            0         A
B            2         B
C            5         B
D            1         D

At Router B-
Destination  Distance  Next Hop
A            2         A
B            0         B
C            3         C
D            3         A

At Router C-
Destination  Distance  Next Hop
A            5         B
B            3         B
C            0         C
D            6         B

At Router D-
Destination  Distance  Next Hop
A            1         A
B            3         A
C            6         A
D            0         D
Example3:
Part (a) shows a subnet. The first four columns of part (b) show the delay vectors received from
the neighbors of router J. Suppose that J has measured or estimated its delay to its neighbors, A, I,
H, and K as 8, 10, 12, and 6 msec, respectively.
Advantages:
1. Distance vector routing protocol is easy to implement in small networks. Debugging is very
easy in the distance vector routing protocol.
2. This protocol has a very limited redundancy in a small network.
Disadvantage:
1. A broken link between routers must be reported to every other router in the network
immediately, but distance vector routing takes considerable time to propagate the update. This
problem is also known as count-to-infinity.
2. The time required by every router in a network to produce an accurate routing table is
called convergence time. In the large and complex network, this time is excessive.
3. Every change in the routing table is propagated to the neighboring routers periodically, which
creates traffic on the network.
4. Flooding Algorithm:
Flooding is a non-adaptive routing technique following this simple method: when a data packet
arrives at a router, it is sent to all the outgoing links except the one it has arrived on.
For example, let us consider the network in the figure, having six routers that are connected
through transmission lines.
6. Path Vector Routing:
A path vector protocol does not rely on the cost of reaching a given destination to determine whether
each available path is loop free. Instead, path vector protocols rely on analysis of the path itself to
learn whether it is loop free.
A path vector protocol guarantees loop-free paths through the network by recording each hop that the
routing advertisement traverses through the network.
In this case, router A advertises reachability to the 10.1.1.0/24 network to router B. When router B
receives this information, it adds itself to the path, and advertises it to router C. Router C adds
itself to the path, and advertises to router D that the 10.1.1.0/24 network is reachable in this
direction
7. Hierarchical Routing:
As the number of routers becomes large, the overhead involved in computing, storing, and
communicating the routing table information (e.g., link state updates or least cost path changes)
becomes prohibitive.
Also an organization should be able to run and administer its network as it wishes (e.g., to run
whatever routing algorithm it chooses), while still being able to connect its network to other
"outside" networks.
Clearly, something must be done to reduce the complexity of route computation in networks as
large as the public Internet.
The routers are divided into what we will call regions, with each router knowing all the details
about how to route packets to destinations within its own region, but knowing nothing about the
internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the
regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out
of names for aggregations.
The full routing table for router 1A has 17 entries, as shown in (b).
• When routing is done hierarchically, as in (c), there are entries for all the local routers as
before, but all other regions have been condensed into a single router, so all traffic for
region 2 goes via the 1B -2A line, but the rest of the remote traffic goes via the 1C -3B line.
Congestion Control Algorithms:
The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it and
must ultimately determine what to do with the excess packets.
When the number of packets dumped into the subnet by the hosts is within its carrying
capacity, they are all delivered and the number delivered is proportional to the number sent.
• If packets arriving on several input lines all need the same output line, a queue will build up.
• The reason is that by the time packets get to the front of the queue, they have already timed
out, and duplicates have been sent.
• Congestion control has to do with making sure that the subnet is able to carry the offered
traffic.
• It is a global issue, involving the behavior of all the hosts, all the routers, the store-and-forward
mechanism within the routers, and other factors.
Flow Control:
• Flow control relates to the point-to-point traffic between a given sender and a given
receiver.
• Its job is to make sure that a faster sender cannot continuously transmit data faster than the
receiver can absorb it.
• Flow control involves a direct feedback from the receiver to the sender.
Algorithms:
Many problems in complex systems, such as computer networks, can be viewed from a control
theory point of view. The solutions can be either open loop or closed loop:
• Open loop solutions attempt to solve the problem by good design, in essence, making sure
it does not occur in the first place.
a) Monitor the system to detect when and where congestion occurs:
• Various metrics can be used to monitor the subnet for congestion, such as the average packet
delay and the standard deviation of packet delay.
b) Transfer the information about congestion from the point where it is detected to places
where action can be taken:
• The router, detecting the congestion, sends a “warning” packet to the traffic source or
sources.
• Other possibility is to reserve a bit or field in every packet for routers to fill in whenever
congestion gets above some threshold level.
• Another approach is to have hosts or routers send probe packets out periodically to
explicitly ask about congestion.
c) Adjust system operation to correct the problem using the appropriate congestion control.
Explicit: Packets are sent back from the point of congestion to warn the source.
Implicit: The source deduces the existence of congestion by making local observations, such as
the time needed for acknowledgements to come back.
1. Retransmission Policy:
• It deals with how fast a sender times out and what it transmits upon timeout.
• A jumpy sender that times out quickly and retransmits all outstanding packets using go-back-N
will put a heavier load on the system than a sender that uses selective repeat.
3. Acknowledgement Policy:
• If each packet is acknowledged immediately, the acknowledgement packets generate extra
load. However, if acknowledgements are saved up to piggyback onto reverse traffic, extra
timeouts and retransmissions may result.
A router may have one queue per input line, one queue per output line, or both.
4. Routing Policy:
• A good routing algorithm spreads the traffic over all the lines.
5. Packet Lifetime Management Policy:
• It deals with how long a packet may live before being discarded.
5. Timeout Determination:
• Determining the timeout interval is harder because the transit time across the network is less predictable than the transit time over a wire between two routers.
• If the timeout is too short, unnecessary extra packets will be sent; if it is too long, congestion will be reduced, but the response time will suffer whenever a packet is lost.
1. Admission control
• The idea is simple, once congestion has been signaled, no more virtual circuits are set up
until the problem has gone away.
• An alternative approach is to allow new virtual circuits but carefully route all new virtual
circuits around problem areas. For example, consider the subnet of Fig
• Each router can easily monitor the utilization of its output lines and other resources.
• Whenever utilization moves above a threshold, the output line enters a "warning" state.
• Each newly arriving packet is checked to see if its output line is in the warning state.
2. Negotiating for an Agreement between the Host and Subnet when a Virtual Circuit is set
up:
• This agreement normally specifies the volume and shape of the traffic, quality of service
(QoS) required, and other parameters.
• To keep its part of the agreement, the subnet will reserve resources along the path when the circuit is set up.
• These resources can include table and buffer space in the routers and bandwidth on the lines.
• For example, if six virtual circuits that might each use 1 Mbps all pass through the same physical 6-Mbps line, the line has to be marked as full, even though it may rarely happen that all six virtual circuits are transmitting at the same time.
These include:
1. Warning bit
2. Choke packets
3. Load shedding
4. Random early detection (RED)
5. Traffic shaping
• The first three deal with congestion detection and recovery. The last two deal with congestion avoidance.
Warning Bit:
1. A special bit in the packet header is set by the router to warn the source when congestion is
detected.
2. The bit is copied and piggy-backed on the ACK and sent to the sender.
3. The sender monitors the number of ACK packets it receives with the warning bit set and
adjusts its transmission rate accordingly.
Choke Packets
A choke packet is a control packet generated at a congested node and transmitted to restrict
traffic flow.
The source, on receiving the choke packet must reduce its transmission rate by a certain
percentage.
An example of a choke packet is the ICMP Source Quench Packet.
1. Over long distances or at high speeds, choke packets are not very effective because the reaction is slow.
2. A variation is the hop-by-hop choke packet, in which the choke packet takes effect at every hop it passes through.
3. This requires each hop to reduce its transmission even before the choke packet arrives at the source.
Load Shedding
1. When routers are being flooded by packets that they cannot handle, they simply throw the excess packets away.
2. Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer.
3. For a file transfer, for example, an old packet cannot be discarded, since this would cause a gap in the received data.
4. For real-time voice or video, it is probably better to throw away old data and keep new packets.
Random Early Detection (RED):
1. This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full (a short sketch of the drop decision follows this list).
2. Each time a packet arrives, the RED algorithm computes the average queue length, avg.
3. If avg is lower than some lower threshold, congestion is assumed to be minimal or non-
existent and the packet is queued.
4. If avg is greater than some upper threshold, congestion is assumed to be serious and the
packet is discarded.
5. If avg is between the two thresholds, this might indicate the onset of congestion. The
probability of congestion is then calculated.
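To make the drop decision concrete, here is a minimal RED sketch in Python. It assumes an exponentially weighted moving average for avg and a drop probability that grows linearly between the two thresholds; the weight and threshold values are illustrative choices, not taken from the notes.

```python
import random

class RED:
    """Random Early Detection: drop packets probabilistically before the queue fills."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th = min_th      # lower threshold on the average queue length
        self.max_th = max_th      # upper threshold on the average queue length
        self.max_p = max_p        # maximum drop probability at the upper threshold
        self.weight = weight      # EWMA weight used to update avg
        self.avg = 0.0

    def on_packet_arrival(self, current_queue_len):
        # Update the exponentially weighted moving average of the queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return "enqueue"                     # congestion minimal or non-existent
        if self.avg > self.max_th:
            return "drop"                        # congestion assumed to be serious
        # Between the thresholds: onset of congestion, drop with increasing probability.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "drop" if random.random() < p else "enqueue"

red = RED()
for qlen in [2, 6, 10, 14, 18]:
    print(qlen, red.on_packet_arrival(qlen))
```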
Traffic Shaping
1. Another method of congestion control is to “shape” the traffic before it enters the network.
2. Traffic shaping controls the rate at which packets are sent (not just how many). Used in
ATM and Integrated Services networks.
3. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape).
Leaky Bucket Algorithm:
1. The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input. It does nothing when the input is idle.
2. The host injects one packet per clock tick onto the network. This results in a uniform flow
of packets, smoothing out bursts and reducing congestion.
3. When packets are all the same size (as in ATM cells), sending one packet per tick works well. For variable-length packets, though, it is better to allow a fixed number of bytes per tick.
4. E.g., a limit of 1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets per tick, as illustrated in the sketch below.
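A small sketch of the byte-counting leaky bucket just described; the 1024-byte tick budget comes from the example above, while the queued packet sizes are made up for illustration.

```python
from collections import deque

def leaky_bucket(packet_sizes, bytes_per_tick=1024, ticks=5):
    """Byte-counting leaky bucket: release at most bytes_per_tick bytes every tick."""
    queue = deque(packet_sizes)          # packet sizes (in bytes) waiting to be sent
    for tick in range(1, ticks + 1):
        budget = bytes_per_tick          # the byte counter is reset every tick
        sent = []
        # Send queued packets as long as they fit in the remaining byte budget.
        while queue and queue[0] <= budget:
            pkt = queue.popleft()
            budget -= pkt
            sent.append(pkt)
        print(f"tick {tick}: sent {sent}, still queued {list(queue)}")

# A burst arrives all at once; the bucket smooths it to at most 1024 bytes per tick.
leaky_bucket([1024, 512, 512, 256, 256, 256, 256, 1024])
```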
The leaky bucket algorithm enforces the output pattern at the average rate, no matter how bursty the traffic is. In order to deal with bursty traffic we need a more flexible algorithm, so that data is not lost. One such algorithm is the token bucket algorithm.
Steps of this algorithm can be described as follows:
1. In regular intervals tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example,
In figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure (B) we see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be generated (see the sketch below).
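The same scenario can be sketched in Python; the capacity of three tokens and the five waiting packets mirror figures (A) and (B), and the one-token-per-tick refill rate is an illustrative assumption.

```python
class TokenBucket:
    """Tokens accumulate at a fixed rate; a packet may be sent only by consuming a token."""
    def __init__(self, capacity=3, tokens_per_tick=1):
        self.capacity = capacity
        self.tokens = capacity
        self.rate = tokens_per_tick

    def tick(self):
        # At regular intervals tokens are added, up to the bucket's maximum capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        # A ready packet removes one token and is sent; with no tokens it must wait.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3)
waiting = 5                                  # five packets waiting, three tokens available
for tick in range(4):
    while waiting and bucket.try_send():
        waiting -= 1
    print(f"after tick {tick}: packets still waiting = {waiting}")
    bucket.tick()
```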
IP addresses:
An IP stands for Internet Protocol. An IP address is assigned to each device connected to a network. Each
device uses an IP address for communication. It also behaves as an identifier as this address is used to
identify the device on a network. It defines the technical format of the packets.
An IP address consists of two parts, i.e., the first one is a network address, and the other one is a
host address.
An IP address is an address having information about how to reach a specific host, especially outside the LAN. An IPv4 address is a unique 32-bit address, giving an address space of 2^32 addresses.
Generally, there are two notations in which an IP address is written: dotted decimal notation and hexadecimal notation.
IPv4:
IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4 was the primary
version brought into action for production within the ARPANET in 1983.
IPv4 is version 4 of IP. It is the currently deployed and most commonly used version of the IP address. It is a 32-bit address written as four numbers separated by dots, i.e., periods. This address is unique for each device.
The above example represents the IP address in which each group of numbers separated by periods
is called an Octet. Each number in an octet is in the range from 0-255. This address can produce
4,294,967,296 possible unique addresses.
In today's computer network world, computers do not understand the IP addresses in the standard
numeric format as the computers understand the numbers in binary form only. The binary number
can be either 1 or 0. The IPv4 consists of four sets, and these sets represent the octet. The bits in
each octet represent a number.
Each bit in an octet can be either 1 or 0. If the bit is 1, then the number it represents counts toward the octet's value, and if the bit is 0, then the number it represents does not count.
Now, we will see how to obtain the binary representation of the above IP address, i.e., 66.94.29.13
To obtain 66, we put 1 under 64 and 2 as the sum of 64 and 2 is equal to 66 (64+2=66), and the
remaining bits will be zero, as shown above. Therefore, the binary bit version of 66 is 01000010.
To obtain 94, we put 1 under 64, 16, 8, 4, and 2 as the sum of these numbers is equal to 94, and the
remaining bits will be zero. Therefore, the binary bit version of 94 is 01011110.
To obtain 29, we put 1 under 16, 8, 4, and 1 as the sum of these numbers is equal to 29, and the
remaining bits will be zero. Therefore, the binary bit version of 29 is 00011101.
To obtain 13, we put 1 under 8, 4, and 1 as the sum of these numbers is equal to 13, and the
remaining bits will be zero. Therefore, the binary bit version of 13 is 00001101.
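The octet-by-octet conversion above can be checked with a couple of lines of Python:

```python
def ip_to_binary(ip):
    """Convert a dotted-decimal IPv4 address to its 8-bit-per-octet binary form."""
    return ".".join(format(int(octet), "08b") for octet in ip.split("."))

print(ip_to_binary("66.94.29.13"))   # 01000010.01011110.00011101.00001101
```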
Parts of IPv4:
Network part:
The network part indicates the unique number that is assigned to the network. The network part also identifies the class of the network that is assigned.
Host Part:
The host part uniquely identifies the machine on your network. This part of the IPv4 address is assigned to every host.
For each host on the network, the network part is the same; however, the host part must vary.
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for multicast and experimental purposes respectively. The order of bits in the first octet determines the class of the IP address.
The number of networks and the number of hosts per class can be derived from this formula: number of networks = 2^(number of network ID bits) and number of hosts per network = 2^(number of host ID bits) − 2.
When calculating hosts' IP addresses, 2 IP addresses are decreased because they cannot be
assigned to hosts, i.e. the first IP of a network is network number and the last IP is reserved for
Broadcast IP.
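A quick check of this formula against the class definitions given below (for Class A, networks 0 and 127 are excluded, which is why it has 126 rather than 128 networks):

```python
# (usable network count, host ID bits); Class A excludes network 0 and 127 (loopback).
classes = {"A": (2 ** 7 - 2, 24), "B": (2 ** 14, 16), "C": (2 ** 21, 8)}

for name, (networks, host_bits) in classes.items():
    hosts = 2 ** host_bits - 2          # subtract the network and broadcast addresses
    print(f"Class {name}: {networks} networks, {hosts} hosts per network")
```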
Class A Address:
IP address belonging to class A are assigned to the networks that contain a large number of
hosts.
The network ID is 8 bits long.
The host ID is 24 bits long.
The first bit of the first octet is always set to 0 (zero). Thus the first octet ranges from 1 – 127, i.e.
Class A addresses only include IP starting from 1.x.x.x to 126.x.x.x only. The IP range 127.x.x.x
is reserved for loopback IP addresses.
The default subnet mask for Class A IP address is 255.0.0.0, which implies that Class A addressing can have 126 networks (2^7 − 2) and 16777214 hosts (2^24 − 2).
Class A IP address format is thus: 0NNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH
IP addresses belonging to class A ranges from 1.x.x.x – 126.x.x.x
Class B Address:
IP address belonging to class B is assigned to the network that ranges from medium-sized to
large-sized networks.
The network ID is 16 bits long.
The host ID is 16 bits long.
The higher order bits of the first octet of IP addresses of class B are always set to 10. The
remaining 14 bits are used to determine network ID.
The 16 bits of host ID is used to determine the host in any network. Class B IP Addresses range
from 128.0.x.x to 191.255.x.x .The default sub-net mask for class B is 255.255.x.x. Class B has a
total of:
2^14 = 16384 network addresses
2^16 − 2 = 65534 host addresses
IP addresses belonging to class B ranges from 128.0.x.x – 191.255.x.x.
Class C Address:
IP address belonging to class C is assigned to small-sized networks.
The network ID is 24 bits long.
The host ID is 8 bits long.
The higher order bits of the first octet of IP addresses of class C are always set to 110.
Class D Address:
IP address belonging to class D is reserved for multi-casting. The higher order bits of the first
octet of IP addresses belonging to class D are always set to 1110. The remaining bits are for the
address that interested hosts recognize.
Class D has IP address range from 224.0.0.0 to 239.255.255.255. Class D is reserved for
Multicasting. In multicasting data is not destined for a particular host, that is why there is no need
to extract host address from the IP address, and Class D does not have any subnet mask.
Class E Address:
This IP class is reserved for experimental and research purposes only (R&D or study). IP addresses in this class range from 240.0.0.0 to 255.255.255.254. The higher order bits of the first octet of class E are always set to 1111. Like Class D, this class too is not equipped with any subnet mask.
The network ID cannot start with 127 because 127 belongs to class A address and is
reserved for internal loop-back functions.
All bits of network ID set to 1 are reserved for use as an IP broadcast address and
therefore, cannot be used.
All bits of network ID set to 0 are used to denote a specific host on the local network and
are not routed and therefore, aren’t used.
• Classless Inter-Domain Routing (CIDR) is a group of IP addresses that are allocated to the
customer when they demand a fixed number of IP addresses.
• In CIDR there is no wastage of IP addresses as compared to classful addressing because only
the numbers of IP addresses that are demanded by the customer are allocated to the customer.
• The group of IP addresses allocated together is called a Block in Classless Inter-Domain Routing (CIDR).
• CIDR follows CIDR notation, also called slash notation. The representation of CIDR notation is x.y.z.w/n, where x.y.z.w is the IP address and n is the mask length, i.e., the number of bits that are used for the network ID.
• In order to reduce the wastage of IP addresses, the concept of Classless Inter-Domain Routing was introduced. Nowadays IANA uses this technique to provide IP addresses: whenever a user asks for a certain number of IP addresses, IANA assigns exactly that many IP addresses to the user.
Representation: It is also a 32-bit address, which includes a special number representing the number of bits that are present in the block ID:
a.b.c.d/n
Example: 20.10.50.100/20
The rules for a valid CIDR block are: the IP addresses must be contiguous, the block size must be a power of 2, and the first IP address of the block must be evenly divisible by the size of the block. In simple words, the least significant part (the host ID) should always start with zeroes; since all the least significant bits of the host ID are zero, the remaining bits can be used as the block ID part.
Example:
Check whether 100.1.2.32 to 100.1.2.47 is a valid IP address block or not?
1. All the IP addresses are contiguous.
2. Total number of IP addresses in the block = 16 = 2^4.
3. 1st IP address: 100.1.2.00100000
Since the host ID contains the last 4 bits and all of those least significant 4 bits are zero, the first IP address is evenly divisible by the size of the block.
All the three rules are followed by this Block. Hence, it is a valid IP address block.
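The three checks above can be automated with Python's standard ipaddress module; the helper below is a sketch for the 100.1.2.32 to 100.1.2.47 example (contiguity is implicit when the block is described by its first and last address):

```python
import ipaddress

def is_valid_cidr_block(first_ip, last_ip):
    """Check that the block size is a power of two and that the first address
    is evenly divisible by the block size (i.e. the host ID bits are all zero)."""
    first = int(ipaddress.IPv4Address(first_ip))
    last = int(ipaddress.IPv4Address(last_ip))
    size = last - first + 1
    power_of_two = size & (size - 1) == 0          # rule: block size is 2^n
    aligned = first % size == 0                    # rule: first IP divisible by size
    return power_of_two and aligned

print(is_valid_cidr_block("100.1.2.32", "100.1.2.47"))   # True -> a valid /28 block
```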
The advantage of using CIDR notation is that it reduces the number of entries in the routing table
and also it manages the IP address space.
Disadvantages:
The disadvantages of using CIDR Notation are as follows −
Using CIDR, it is more complex to determine a route. With classful addresses, there are separate routing tables for Class A, Class B, and Class C, so a router can go directly to the right table by looking at the prefix of the IP address. With CIDR, there are no separate tables; all entries are placed in a single table, so it is more difficult to find a route.
Subnetting:
Let's take an example. Due to maintenance there is a scheduled power cut. If the town is divided into sectors, the electricity department can make a local announcement for the affected sector rather than making an announcement for the whole town.
Computer networks follow the same concept. In computer networking, Subnetting is used to divide a large IP network into smaller IP networks known as subnets.
A default class A, B and C network provides 16777214, 65534, 254 hosts respectively. Having so
many hosts in a single network always creates several issues such as broadcast, collision,
congestion, etc.
Let’s take a simple example. In a company there are four departments; sales, production,
development and management. In each department there are 50 users. Company used a private
class C IP network. Without any Subnetting, all computers will work in a single large network.
Computers use broadcast messages to access and provide information in network. A broadcast
message is an announcement message in computer network which is received by all hosts in
network.
In this example, since all computers belong to the same network, they will receive all broadcast messages regardless of whether those messages are relevant to them or not.
Just as the town is divided into sectors, this network can also be divided into subnets. Once the network is divided into subnets, computers will receive only the broadcasts which belong to their subnet.
Since the company has four departments, it can divide its network into four subnets, as sketched below. The following figure shows the same network after Subnetting.
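A short sketch of this scenario, assuming the company uses the private Class C network 192.168.1.0/24 (an illustrative choice, not stated in the notes) and splits it into four equal subnets, one per department:

```python
import ipaddress

# Four departments of 50 users each fit comfortably in /26 subnets (62 usable hosts each).
network = ipaddress.ip_network("192.168.1.0/24")
for dept, subnet in zip(["Sales", "Production", "Development", "Management"],
                        network.subnets(prefixlen_diff=2)):
    print(f"{dept:12s} {subnet}  usable hosts: {subnet.num_addresses - 2}")
```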
Advantage of Subnetting
• Subnetting reduces network traffic by allowing only the broadcast traffic which is relevant
to the subnet.
• By reducing unnecessary traffic, Subnetting improves overall performance of the network.
• By keeping a subnet's traffic within that subnet, Subnetting increases the security of the network.
Disadvantage of Subnetting
• Different subnets need an intermediate device known as router to communicate with each
other.
• Subnetting adds complexity to the network. An experienced network administrator is required to manage the subnetted network.
Supernetting:
Supernetting is the opposite of Subnetting. In subnetting, a single big network is divided into multiple smaller subnetworks. In supernetting, multiple networks are combined into a bigger network termed a supernetwork or supernet.
Supernetting is mainly used in route summarization, where routes to multiple networks with similar network prefixes are combined into a single routing entry, with the routing entry pointing to a supernetwork encompassing all the networks. This in turn significantly reduces the size of routing tables and also the size of routing updates exchanged by routing protocols.
More specifically,
When multiple networks are combined to form a bigger network, it is termed supernetting.
Supernetting is used in route aggregation to reduce the size of routing tables and routing table updates.
There are some points which should be kept in mind while supernetting:
1. All the networks should be contiguous.
2. The block size of every network should be equal and must be in the form of 2^n.
3. The first IP address should be exactly divisible by the whole size of the supernet.
Example: Suppose four small Class C networks:
200.1.0.0,
200.1.1.0,
200.1.2.0,
200.1.3.0
Build a bigger network that has a single network ID.
Explanation – Before supernetting, the routing table will look like:
200.1.0.0 255.255.255.0 A
200.1.1.0 255.255.255.0 B
200.1.2.0 255.255.255.0 C
200.1.3.0 255.255.255.0 D
2. Equal size of all networks: As all networks are of Class C, all of them have a size of 256, which is equal to 2^8.
3. First IP address exactly divisible by total size: When a binary number is divided by 2^n, the last n bits are the remainder. Hence, in order to prove that the first IP address is exactly divisible by the whole size of the supernet, you can check whether the last n bits are 0 or not.
In the given example the first IP is 200.1.0.0 and the whole size of the supernet is 4 * 2^8 = 2^10. If the last 10 bits of the first IP address are zero, then the IP is divisible by the block size.
The last 10 bits of the first IP address are indeed zero, so the third condition is also satisfied. Therefore, you can join all these 4 networks and make a supernet; the new supernet ID will be 200.1.0.0 (see the sketch below).
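Python's ipaddress module can perform the same aggregation; the sketch below collapses the four Class C networks into the single supernet 200.1.0.0/22 (a /22 prefix corresponds to the 10 zero host bits checked above):

```python
import ipaddress

networks = [ipaddress.ip_network(f"200.1.{i}.0/24") for i in range(4)]

# collapse_addresses merges contiguous, properly aligned blocks into one supernet.
supernet = list(ipaddress.collapse_addresses(networks))
print(supernet)            # [IPv4Network('200.1.0.0/22')] -> a single routing entry
```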
Drawback of IPv4:
Currently, the population of the world is about 7.6 billion. Every user has more than one device connected to the internet, and private companies also rely on the internet. As we know, IPv4 provides roughly 4 billion addresses, which are not enough for every device connected to the internet on the planet. This gave rise to the development of the next generation of IP addresses, i.e., IPv6.
IPv6:
IPv4 produces 4 billion addresses, and the developers think that these addresses are enough, but
they were wrong. IPv6 is the next generation of IP addresses. The main difference between IPv4
and IPv6 is the address size of IP addresses. The IPv4 is a 32-bit address, whereas IPv6 is a 128-bit
hexadecimal address. IPv6 provides a large address space, and it contains a simple header as
compared to IPv4.
This hexadecimal address contains both numbers and alphabets. Due to the usage of both numbers and alphabets, IPv6 is capable of producing over 340 undecillion (3.4 × 10^38) addresses.
IPv6 is a 128-bit hexadecimal address made up of 8 sets of 16 bits each, and these 8 sets are
separated by a colon. In IPv6, each hexadecimal character represents 4 bits. So, we need to convert
4 bits to a hexadecimal number at a time.
Address format:
The above diagram shows the address format of IPv4 and IPv6. An IPv4 is a 32-bit decimal
address. It contains 4 octets or fields separated by 'dot', and each field is 8-bit in size. The number
that each field contains should be in the range of 0-255. Where as an IPv6 is a 128-bit hexadecimal
address. It contains 8 fields separated by a colon, and each field is 16-bit in size.
An IPv6 address consists of eight groups of four hexadecimal digits. Here’s an example IPv6
address:
3001:0da8:75a3:0000:0000:8a2e:0370:7334
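As a quick illustration of the notation, Python's ipaddress module can show the same address in its full (exploded) and abbreviated (compressed) forms; the abbreviation of consecutive zero groups with :: is standard IPv6 notation, not something introduced in these notes:

```python
import ipaddress

addr = ipaddress.IPv6Address("3001:0da8:75a3:0000:0000:8a2e:0370:7334")
print(addr.exploded)     # 3001:0da8:75a3:0000:0000:8a2e:0370:7334
print(addr.compressed)   # 3001:da8:75a3::8a2e:370:7334
```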
Advantages of IPv6
The next-generation IP, or IPv6, has some advantages over IPv4 that can be summarized as
follows:
Larger address space: An IPv6 address is 128 bits long, compared with the 32-bit address of IPv4; this is a huge (2^96) increase in the address space.
Better header format: IPv6 uses a new header format in which options are separated from the
base header and inserted, when needed, between the base header and the upper-layer data. This
simplifies and speeds up the routing process because most of the options do not need to be
checked by routers.
New options: IPv6 has new options to allow for additional Functionalities.
Allowance for extension: IPv6 is designed to allow the extension of the protocol if required by
new technologies or applications.
Support for resource allocation: In IPv6, the type-of- service field has been removed, but a
mechanism (called flow label) has been added to enable the source to request special handling of
the packet. This mechanism can be used to support traffic such as real-time audio and video.
Support for more security: The encryption and authentication options in IPv6 provide
confidentiality and integrity of the packet.
Disadvantages of IPv6
Conversion: Due to widespread present usage of IPv4 it will take a long period to
completely shift to IPv6.
Communication: IPv4 and IPv6 machines cannot communicate directly with each other.
They need an intermediate technology to make that possible.
The IPv6 fixed header is 40 bytes long and contains the following information.
1 Version (4-bits): It represents the version of Internet Protocol, i.e. 0110 for IPv6.
2 Traffic Class (8-bits): These 8 bits are divided into two parts. The most significant 6 bits are
used for Type of Service to let the Router Known what services should be provided to this
packet. The least significant 2 bits are used for Explicit Congestion Notification (ECN).
3 Flow Label (20-bits): This label is used to maintain the sequential flow of the packets
belonging to a communication. The source labels the sequence to help the router identify
that a particular packet belongs to a specific flow of information. This field helps avoid re-
ordering of data packets. It is designed for streaming/real-time media.
4 Payload Length (16-bits): This field is used to tell the routers how much information a
particular packet contains in its payload. Payload is composed of Extension Headers and
Upper Layer data. With 16 bits, up to 65535 bytes can be indicated; but if the Extension
Headers contain Hop-by-Hop Extension Header, then the payload may exceed 65535 bytes
and this field is set to 0.
5 Next Header (8-bits): This field is used to indicate either the type of Extension Header, or if
the Extension Header is not present then it indicates the Upper Layer PDU. The values for
the type of Upper Layer PDU are same as IPv4’s.
6 Hop Limit (8-bits): This field is used to stop packet to loop in the network infinitely. This is
same as TTL in IPv4. The value of Hop Limit field is decremented by 1 as it passes a link
(router/hop). When the field reaches 0 the packet is discarded.
7 Source Address (128-bits): This field indicates the address of the originator of the packet.
8 Destination Address (128-bits): This field indicates the address of the intended recipient of the packet.
IPv4 vs IPv6:
Classes: IPv4 has 5 different classes of IP address, namely Class A, Class B, Class C, Class D, and Class E. IPv6 does not contain classes of IP addresses.
Security features: In IPv4, security depends on the application; the address format was not developed keeping the security feature in mind. In IPv6, IPSec has been developed for security purposes.
Packet flow identification: IPv4 does not provide any mechanism for packet flow identification. IPv6 uses the flow label field in the header for packet flow identification.
Checksum field: The checksum field is available in IPv4. The checksum field is not available in IPv6.
Encryption and Authentication: IPv4 does not provide encryption and authentication. IPv6 provides encryption and authentication.
Because of the huge number of systems on the Internet, the transition from IPv4 to IPv6 cannot
happen suddenly. It takes a considerable amount of time before every system in the Internet can
move from IPv4 to IPv6.
The transition must be smooth to prevent any problems between IPv4 and
IPv6 systems.
1. Dual Stack:
This approach allows IPv4 and IPv6 to coexist in the same hosts and routers, supporting interoperability between IPv4 and IPv6.
IPv6 nodes which provide a complete IPv4 and IPv6 implementations are called
“IPv6/IPv4 nodes” or “dual stack nodes”. IPv6/IPv4 nodes have the ability to send and
receive both IPv4 and IPv6 packets.
In other words, a station must run IPv4 and IPv6 simultaneously until all the Internet uses
IPv6.
2. Tunneling Mechanism:
Tunneling is a strategy used when two computers using IPv6 want to communicate with each other
and the packet must pass through a region that uses IPv4.
To pass through this region, the packet must have an IPv4 address. So the IPv6 packet is encapsulated in an IPv4 packet when it enters the region, and it leaves its capsule when it exits the region. It seems as if the IPv6 packet goes through a tunnel at one end and emerges at the other end. To make it clear that the IPv4 packet is carrying an IPv6 packet as data, the protocol value of the IPv4 header is set to 41.
Packet Fragmentation:
Packet Fragmentation is a process of dividing the datagram into fragments during its
transmission.
It is done at the network layer by intermediary devices such as routers; reassembly of the fragments is done only at the destination host.
Fragmentation is done by the network layer when the maximum size of datagram is greater than
maximum size of data that can be held a frame i.e., its Maximum Transmission Unit (MTU). The
network layer divides the datagram received from transport layer into fragments so that data flow
is not disrupted.
Since there are 16 bits for total length in the IP header, the maximum size of an IP datagram = 2^16 − 1 = 65,535 bytes. The source side usually does not require fragmentation because of wise (good) segmentation by the transport layer; i.e., instead of doing segmentation at the transport layer and fragmentation at the network layer, the transport layer looks at the datagram data limit and the frame data limit and does segmentation in such a way that the resulting data can easily fit in a frame without the need for fragmentation.
For example, if a router connects a LAN to a WAN, it receives a frame in the LAN format and sends a frame in the WAN format.
Each data link layer protocol has its own frame format, and in most protocols that format limits the payload size.
When a datagram is encapsulated in a frame, the total size of the datagram must be less than this maximum size, which is defined by the restrictions imposed by the hardware and software used in the network.
To make the IPv4 protocol independent of the physical network, the designers decided to make the maximum length of the IPv4 datagram equal to 65,535 bytes.
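The following sketch shows how the fragmentation described above could be computed for one datagram. The 1500-byte MTU and 20-byte header are assumed example values (typical of Ethernet, but not given in the notes); fragment offsets are expressed in 8-byte units, as in the IPv4 header.

```python
def fragment(datagram_payload_len, mtu=1500, header_len=20):
    """Split an IPv4 payload into fragments that fit the given MTU.
    Every fragment payload except possibly the last must be a multiple of 8 bytes,
    because the fragment offset field counts in 8-byte units."""
    max_payload = (mtu - header_len) // 8 * 8      # largest multiple of 8 that fits
    fragments, offset = [], 0
    while offset < datagram_payload_len:
        size = min(max_payload, datagram_payload_len - offset)
        more = offset + size < datagram_payload_len
        fragments.append({"offset_units": offset // 8, "payload": size, "MF": more})
        offset += size
    return fragments

# e.g. a 4000-byte payload sent over a link with a 1500-byte MTU
for frag in fragment(4000):
    print(frag)
```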
Address Resolution Protocol (ARP):
Logical address to physical address translation can be done dynamically with ARP. ARP can find the physical address of a node when its internet address is known; i.e., ARP provides a dynamic mapping from an IP address to the corresponding hardware address.
When one host wants to communicate with another host on the network, it needs to resolve the IP
address of each host to the host's hardware address.
When a host tries to interact with another host, an ARP request is initiated. If the IP address
is for the local network, the source host checks its ARP cache to find out the hardware
address of the destination computer.
If the corresponding hardware address is not found in the cache, ARP broadcasts the request to all the local hosts.
All hosts receive the broadcast and check their own IP address. If no match is discovered,
the request is ignored.
The destination host that finds the matching IP address sends an ARP reply to the source
host along with its hardware address, thus establishing the communication. The ARP cache
is then updated with the hardware address of the destination host
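A simplified sketch of the resolution steps above. The cache dictionary and the broadcast function are illustrative placeholders, not a real ARP implementation:

```python
arp_cache = {}                     # IP address -> hardware (MAC) address

def broadcast_arp_request(target_ip):
    """Placeholder for broadcasting 'who has target_ip?' to every host on the LAN."""
    # In a real network, only the host whose IP matches replies with its MAC address.
    return simulated_hosts.get(target_ip)

def resolve(target_ip):
    # 1. Check the ARP cache first.
    if target_ip in arp_cache:
        return arp_cache[target_ip]
    # 2. Otherwise broadcast an ARP request; non-matching hosts ignore it.
    mac = broadcast_arp_request(target_ip)
    if mac is not None:
        arp_cache[target_ip] = mac      # 3. Update the cache with the ARP reply.
    return mac

simulated_hosts = {"192.168.1.20": "aa:bb:cc:dd:ee:ff"}
print(resolve("192.168.1.20"))          # resolved via broadcast, then cached
print(resolve("192.168.1.20"))          # answered directly from the ARP cache
```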
ARP Packet:
Hardware address space: It specifies the type of hardware such as Ethernet or Packet
Radio net.
Protocol address space: It specifies the type of protocol, same as the Ether type field in
the IEEE 802 header (IP or ARP).
Hardware Address Length: It determines the length (in bytes) of the hardware addresses
in this packet. For IEEE 802.3 and IEEE 802.5, this is 6.
Protocol Address Length: It specifies the length (in bytes) of the protocol addresses in this packet. For IP, this is 4 bytes.
Operation Code: It specifies whether this is an ARP request (1) or reply (2).
Source/target hardware address: It contains the physical network hardware addresses.
For IEEE 802.3, these are 48-bit addresses.
For the ARP request packet, the target hardware address is the only undefined field in the
packet.
RARP Packet:
Full Form: The term ARP is an abbreviation for Address Resolution Protocol. The term RARP is an abbreviation for Reverse Address Resolution Protocol.
Basics: ARP retrieves the receiver's physical address in a network. RARP retrieves a computer's logical (IP) address from its available server.
Broadcast Address: The nodes use ARP broadcasts in the LAN with the help of the MAC address. RARP uses IP addresses for broadcasting.
Table Maintained By: The ARP table is maintained by the local host. The RARP table is maintained by the RARP server.
Usage: The router or the host uses ARP to find another router's or host's physical address in the LAN. RARP is used by thin clients that have limited facilities.
Reply Information: The primary use of the ARP reply is to update the ARP table. The primary use of the RARP reply is to configure the local host's IP address.
Mapping: ARP maps the node's IP address (32-bit logical address) to the MAC/physical address (48-bit physical address). RARP maps the 48-bit MAC/physical address to the 32-bit logical IP address.
12.a) Distinguish ARP and RARP Protocols and their services. 14M
b) Discuss the different IP addressing methods.
M.Ramanjaneyulu
Associate Professor
Contents
Transport Layer:
➢ Addressing
➢ Connection establishment
➢ Connection release
➢ Error Control & Flow Control
➢ Crash Recovery.
• UDP, Introduction to TCP, The TCP Service Model, The TCP Segment
Header
Introduction:
• The main role of the transport layer is to provide the communication services directly to the
application processes running on different hosts.
• The transport layer provides a logical communication between application processes running
on different hosts. Although the application processes on different hosts are not physically
connected, application processes use the logical communication provided by the transport layer
to send the messages to each other.
• The transport layer protocols are implemented in the end systems but not in the network
routers.
• Segmentation and Reassembly: - This layer accepts the message from the (session) layer,
breaks the message into smaller units.
• Flow control: - Flow control at this layer is performed end to end rather than across a single
link.
• Error control: - Error control at this layer is performed process-to-process rather than across
a single link.
PROCESS-TO-PROCESS DELIVERY
➢ The data link layer is responsible for delivery of frames between two neighboring nodes over
a link. This is called node-to-node delivery.
➢ The network layer is responsible for delivery of datagrams between two hosts. This is called
host-to-host delivery.
➢ Communication on the Internet is not defined as the exchange of data between two nodes or
between two hosts. Real communication takes place between two processes (application
programs). We need process-to-process delivery.
➢ The transport layer is responsible for process-to-process delivery- the delivery of a packet, part
of a message, from one process to another.
Services provided by the Transport Layer:
The services provided by the transport layer are similar to those of the data link layer. The data
link layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks. The data link layer controls the
physical layer while the transport layer controls all the lower layers.
The services provided by the transport layer protocols can be divided into five
categories:
➢ End-to-end delivery
➢ Addressing
➢ Reliable delivery
➢ Flow control
➢ Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it ensures the end-
to-end delivery of an entire message from a source to the destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged packets.
➢ Error control
➢ Sequence control
➢ Loss control
➢ Duplication control
Flow Control:
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded with too much data, it discards packets and asks for their retransmission, which increases network congestion and thus reduces system performance. The transport layer is responsible for flow control. It uses the sliding window protocol, which makes data transmission more efficient and also controls the flow of data so that the receiver does not become overwhelmed. The sliding window protocol at this layer is byte oriented rather than frame oriented.
• The hardware and/or software within the transport layer that does the work is called the
transport entity.
• The (logical) relationship of the network, transport, and application layers is illustrated in
below figure
TPDU (Transport Protocol Data Unit) is a term used for messages sent from transport
entity to transport entity. Thus, TPDUs (exchanged by the transport layer) are contained in packets
(exchanged by the network layer). In turn, packets are contained in frames (exchanged by the data
link layer). When a frame arrives, the data link layer processes the frame header and passes the
contents of the frame payload field up to the network entity. The network entity processes the
packet header and passes the contents of the packet payload up to the transport entity. This nesting
is illustrated in below figure
• The connection-oriented and the connectionless transport services:
Just as there are two types of network service, connection-oriented and connectionless, there are also two types of transport service. The transport service is similar to the network service in many ways.
The transport code runs entirely on the users' machines, but the network layer mostly runs on the
routers, which are operated by the carrier (at least for a wide area network). What happens if the
network layer offers inadequate service? Suppose that it frequently loses packets? What happens
if routers crash from time to time?
Problems occur, that's what. The users have no real control over the network layer, so they cannot
solve the problem of poor service by using better routers or putting more error handling in the data
link layer. The only possibility is to put on top of the network layer another layer that improves
the quality of the service.
In essence, the existence of the transport layer makes it possible for the transport service to be
more reliable than the underlying network service. Lost packets and mangled data can be detected
and compensated for by the transport layer. Furthermore, the transport service primitives can be
implemented as calls to library procedures in order to make them independent of the network
service primitives.
Thanks to the transport layer, application programmers can write code according to a standard set
of primitives and have these programs work on a wide variety of networks, without having to
worry about dealing with different subnet interfaces and unreliable transmission.
For this reason, many people have traditionally made a distinction between layers 1 through 4 on
the one hand and layer(s) above 4 on the other. The bottom four layers can be seen as the
transport service provider, whereas the upper layer(s) are the transport service user.
Figure. The primitives for a simple transport service.
Eg: Consider an application with a server and a number of remote clients.
1. The server executes a “LISTEN” primitive by calling a library procedure that makes a system call to block the server until a client turns up.
2. When a client wants to talk to the server, it executes a “CONNECT” primitive, with a “CONNECTION REQUEST” TPDU sent to the server.
3. When it arrives, the transport entity unblocks the server and sends a “CONNECTION ACCEPTED” TPDU back to the client.
4. When it arrives, the client is unblocked and the connection is established. Data can now be exchanged using “SEND” and “RECEIVE” primitives.
When a connection is no longer needed, it must be released to free up table space within the two transport entities, which is done with the “DISCONNECT” primitive by sending a “DISCONNECTION REQUEST” TPDU. A minimal socket sketch of these primitives follows.
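These primitives map naturally onto the Berkeley socket API. The minimal sketch below (using Python's standard socket module, with an arbitrarily chosen port number) shows a server blocking in LISTEN, a client connecting, data being exchanged, and both sides disconnecting:

```python
import socket
import threading
import time

PORT = 50007                                    # arbitrary unprivileged port

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen()                            # LISTEN: wait for a client to turn up
        conn, _ = srv.accept()                  # connection accepted
        with conn:
            data = conn.recv(1024)              # RECEIVE
            conn.sendall(b"reply: " + data)     # SEND

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                                 # give the server time to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))            # CONNECT: sends the connection request
    cli.sendall(b"hello")                       # SEND
    print(cli.recv(1024))                       # RECEIVE
# leaving the with-block closes the socket, i.e. DISCONNECT
```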
• Connection Establishment
• Connection Release
• Multiplexing
• Crash Recovery
Addressing:
• Whenever we need to deliver something to one specific destination among many, we need an
address. At the data link layer, we need a MAC address to choose one node among several nodes
if the connection is not point-to-point.
• A frame in the data link layer needs a destination MAC address for delivery and a source address
for the next node's reply.
• At the network layer, we need an IP address to choose one host among millions.
• A datagram in the network layer needs a destination IP address for delivery and a source IP address
for the destination's reply.
• At the transport layer, we need a transport layer address, called a port number, to choose among
multiple processes running on the destination host. The destination port number is needed for
delivery; the source port number is needed for the reply.
• In the Internet model, the port numbers are 16-bit integers between 0 and 65,535. The client
program defines itself with a port number, chosen randomly by the transport layer software running
on the client host. This is the ephemeral port number
When an application (e.g., a user) process wishes to set up a connection to a remote application
process, it must specify which one to connect to. The method normally used is to define transport
addresses to which processes can listen for connection requests. In the Internet, these endpoints
are called ports.
There are two types of access points.
TSAP (Transport Service Access Point) refers to a specific endpoint in the transport layer.
The analogous endpoints in the network layer (i.e., network layer addresses) are not
surprisingly called
NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.
Connection Establishment:
Establishing a connection sounds easy, but it is actually surprisingly tricky. At first glance, it would
seem sufficient for one transport entity to just send a CONNECTION REQUEST TPDU to the
destination and wait for a CONNECTION ACCEPTED reply. But a problem occurs when the network can lose, store, and duplicate packets. To solve this, Tomlinson (1975) introduced the three-way handshake, shown in figure (a).
Figure. Three protocol scenarios for establishing a connection using a three-way handshake.CR
denotes CONNECTION REQUEST. a) Normal operation, b) Old CONNECTION REQUEST
appearing out of nowhere. c) Duplicate CONNECTION REQUEST and duplicate ACK.
➢ This establishment protocol involves one peer checking with the other that the connection request
is indeed current. Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST segment containing it to host 2. Host 2 replies with an ACK segment acknowledging x and announcing its own initial sequence number, y.
➢ Finally, host 1 acknowledges host 2's choice of an initial sequence number in the first data segment that it sends.
In figure (b) the first segment is a delayed duplicate CONNECTION REQUEST from an old
connection.
➢ This segment arrives at host 2 without host 1's knowledge. Host 2 reacts to this segment by sending host 1 an ACK segment, in effect asking for verification that host 1 was indeed trying to set up a new connection.
➢ When host 1 rejects host 2’s attempt to establish a connection, host 2 realizes that it was tricked
by a delayed duplicate and abandons the connection. In this way, a delayed duplicate does no
damage.
➢ The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating
around in the subnet.
In figure (c), as in the previous example, host 2 gets a delayed duplicate CONNECTION REQUEST and replies to it.
➢ At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence number
for host 2 to host 1 traffic, knowing full well that no segments containing sequence number y or
acknowledgements to y are still in existence.
➢ When the second delayed segment arrives at host 2, the fact that z has been acknowledged rather than y tells host 2 that this, too, is an old duplicate.
➢ The important thing to realize here is that there is no combination of old segments that can cause the protocol to fail and have a connection set up by accident when no one wants it.
Connection Release:
➢ Asymmetric release
➢ Symmetric release
• There are two styles of terminating a connection: asymmetric release and symmetric release.
• Asymmetric release is the way the telephone system works: when one party hangs up, the
connection is broken.
• Symmetric release treats the connection as two separate unidirectional connections and requires
each one to be released separately.
• Asymmetric release is abrupt and may result in data loss. Consider the scenario of Fig. After the
connection is established, host 1 sends a segment that arrives properly at host2. Then host 1 sends
another segment. Unfortunately, host 2 issues a DISCONNECT before the second segment arrives.
The result is that the connection is released and data are lost.
• Clearly, a more sophisticated release protocol is needed to avoid data loss. One way is to use
symmetric release, in which each direction is released independently of the other one.
• Here, a host can continue to receive data even after it has sent a DISCONNECT segment.
• Symmetric release does the job when each process has a fixed amount of data to send and clearly
knows when it has sent it.
• One can envision a protocol in which host 1 says “I am done. Are you done too?” If host 2 responds “I am done too. Goodbye,” the connection can be safely released.
Release Connection Using a Three-way Handshake:
Fig-(a): One of the users sends a DISCONNECTION REQUEST (DR) TPDU in order to initiate connection release. When it arrives, the recipient sends back a DR-TPDU too, and starts a timer. When this DR arrives, the original sender sends back an ACK-TPDU and releases the connection. Finally, when the ACK-TPDU arrives, the receiver also releases the connection.
Fig-(b): The initial process is done in the same way as in fig-(a). If the final ACK-TPDU is lost, the situation is saved by the timer: when the timer expires, the connection is released.
Fig-(c): If the second DR is lost, the user initiating the disconnection will not receive the expected response, will time out, and starts all over again.
Fig-(d): Same as in fig-(c), except that all repeated attempts to retransmit the DR are assumed to have failed due to lost TPDUs. After N attempts, the sender just gives up and releases the connection.
Multiplexing: At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a many-to-one relationship
and requires multiplexing. The protocol accepts messages from different processes, differentiated
by their assigned port numbers. After adding the header, the transport layer passes the packet to
the network layer.
Demultiplexing: At the receiver site, the relationship is one-to- many and requires demultiplexing.
The transport layer receives datagrams from the network layer. After error checking and dropping
of the header, the transport layer delivers each message to the appropriate process based on the
port number
When multiple transport connections use one network connection, it is called upward multiplexing. When one transport connection uses multiple network connections, it is called downward multiplexing.
Crash Recovery:
If hosts and routers are subject to crashes, recovery from these crashes becomes an issue. If the
transport entity is entirely within the hosts, recovery from network and router crashes is
straightforward. A more troublesome problem is how to recover from host crashes.
The host must decide whether to retransmit the most recent TPDU after recovery from a crash.
E.g., consider a client communicating with a server; then the server crashes, as shown in the figure.
No matter how the transport entity is programmed, there are always situations where the protocol
fails to recover properly, because the acknowledgement and the write can’t be done at the same
time.
After recovering, the server sends a broadcast TPDU to all hosts, announcing that it has just crashed and requesting that its clients inform it about the status of all open connections.
Now it seems that if a TPDU is outstanding, the client should retransmit it, but there can be different hidden situations:
1. If the server first sends the ACK and crashes before it can pass the TPDU to the next layer, the client will get the ACK and will not retransmit, so the TPDU is lost by the server.
2. If the server first passes the packet to the next layer and then crashes before it can send the ACK, the client thinks the TPDU is lost and retransmits it, even though the server has already received it.
The server (receiving host) can be programmed in two ways: 1. ACK first, 2. write first.
Three events are possible at the server: sending the ACK (A), writing the packet to the next layer (W), and crashing (C).
These three events can occur in six different orders: AC(W), AWC, C(AW), C(WA), WAC, WC(A).
The client, in turn, can be programmed in four ways: 1. always retransmit the last TPDU, 2. never retransmit, 3. retransmit only in state S0, 4. retransmit only in state S1. No combination of server and client strategies works correctly in every case.
Conclusion: when a crash occurs in layer N, recovery can only be done by layer N + 1.
Internet Transport protocols: User Datagram Protocol (UDP) and
the TCP are the basic transport-level protocols for making connections between Internet hosts.
Both TCP and UDP allow programs to send messages to and receive messages from applications
on other hosts. When an application sends a request to the Transport layer to send a
message, UDP and TCP break the information into packets, add a packet header including the
destination address, and send the information to the Network layer for further processing.
Both TCP and UDP use protocol ports on the host to identify the specific destination of the
message.
• To avoid the connection-oriented overhead of TCP, certain applications which require fast speed and low overhead use UDP.
UDP Header: The UDP header is 8 bytes long and contains the following fields.
1. Source Port: -
• Source Port is a 16-bit field.
• It identifies the port of the sending application.
2. Destination Port: -
• Destination Port is a 16-bit field.
• It identifies the port of the receiving application.
3. Length: -
• Length is a 16-bit field.
• It identifies the combined length of UDP Header and Encapsulated data.
4. Checksum-
• Checksum is a 16-bit field used for error control.
• It is calculated on UDP Header, encapsulated data and IP pseudo header.
• Checksum calculation is not mandatory in UDP.
Optional Use of the Checksum: The calculation of the checksum and its inclusion in a user
datagram are optional. If the checksum is not calculated, the field is filled with 1s. Note that a
calculated checksum can never be all 1s, because that would imply the sum is all 0s, which is impossible since it would require the values of all the fields to be 0s. A short sketch of the header layout follows.
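To make the field layout concrete, the sketch below packs an 8-byte UDP header with Python's struct module; the port numbers and payload are arbitrary, and the checksum is simply left at zero here rather than being computed:

```python
import struct

def udp_header(src_port, dst_port, payload: bytes, checksum=0):
    """Pack the 8-byte UDP header: source port, destination port, length, checksum.
    Each field is 16 bits, in network byte order; length covers header plus data."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = udp_header(50000, 53, b"example-query")   # e.g. a datagram to a DNS-style port
print(header.hex(), "length field =", int.from_bytes(header[4:6], "big"))
```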
Operation of UDP
1. Connectionless Services:
• The User Datagram Protocol offers connectionless service, which simply means that each user datagram sent by UDP is an independent datagram. There is no relationship between different datagrams, even if they are coming from the same source process and going to the same destination program.
• User datagrams are not numbered, there is no connection establishment and no connection
termination.
• Each datagram mainly travels through different paths.
• The user datagram protocol is a very simple and unreliable transport protocol. It does not provide any flow control mechanism, and hence there is no window mechanism, so the receiver may be overwhelmed by incoming messages.
• No error control mechanism is provided by UDP except the checksum, so the sender does not know whether a message has been lost or duplicated.
• Because of this lack of flow control and error control, a process that uses UDP should provide these mechanisms itself.
In order to send the message from one process to another, the user datagram protocol encapsulates
and decapsulates the message in the form of an IP datagram.
Applications of UDP:
Given below are some applications of the User datagram protocol:
• UDP is used by those applications that require one response for one request.
• It is used by broadcasting and multicasting applications.
• Management processes such as SNMP make use of UDP.
• Route updating protocols like Routing Information Protocol (RIP) make use of User
Datagram Protocol.
• Processes that provide their own error and flow control mechanisms can make use of UDP. One application of this kind is the Trivial File Transfer Protocol (TFTP).
Transmission Control Protocol (TCP)
Introduction:
• It was specifically designed to provide a reliable end-to end byte stream over an
unreliable network.
• It was designed to adapt dynamically to properties of the inter network and to be robust in
the face of many kinds of failures.
• Each machine supporting TCP has a TCP transport entity, which accepts user data
streams from local processes, breaks them up into pieces not exceeding 64kbytes and
sends each piece as a separate IP datagram.
• When these datagrams arrive at a machine, they are given to TCP entity, which
reconstructs the original byte streams.
• It is up to TCP to time out and retransmit them as needed, and also to reassemble the datagrams into messages in the proper sequence.
• Connections are identified by the socket identifiers at both ends, that is, (socket1, socket2).
No virtual circuit numbers or other identifiers are used.
Port numbers below 1024 are called well known ports and these are reserved for standard
services.
• All TCP connections are full duplex and point-to-point. TCP does not support multicasting or
broadcasting
o A TCP connection is a byte stream, message boundaries are not preserved end to end.
o Ex: If the sending process does four 512-byte writes to a TCP stream, these data may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks, or one 2048-byte chunk; there is no way for the receiver to detect the unit(s) in which the data were written.
o For instance, all 2048 bytes of data might be delivered to the application in a single READ call.
• When an application passes data to TCP, TCP may send it immediately or buffer it (in order to
collect a larger amount to send at once), at its discretion.
• However, sometimes the application really wants the data to be sent immediately.
• For example, suppose a user of an interactive game wants to send a stream of updates. It is
essential that the updates be sent immediately, not buffered until there is a collection of them.
To force data out, TCP has the notion of a PUSH flag that is carried on packets. The original
intent was to let applications tell TCP implementations via the PUSH flag not to delay the
transmission. However, applications cannot literally set the PUSH flag when they send data.
• For Internet archaeologists, we will also mention one interesting feature of TCP service that
remains in the protocol but is rarely used: urgent data.
• When an application has high priority data that should be processed immediately, for example,
if an interactive user hits the CTRL-C key to break off a remote computation that has already
begun, the sending application can put some control information in the data stream and give it
to TCP along with the URGENT flag.
• This event causes TCP to stop accumulating data and transmit everything it has for that
connection immediately.
Characteristics Of TCP
01.TCP is a reliable Protocol
• It guarantees the delivery of data packets to its correct destination.
• After receiving the data packet, receiver sends an acknowledgement to the sender.
• It tells the sender whether data packet has reached its destination safely or not.
• TCP employs retransmission to compensate for packet loss.
02.TCP is a connection-oriented Protocol
This is because-
• TCP establishes an end to end connection between the source and destination.
• The connection is established before exchanging the data.
• The connection is maintained until the application programs at each end finishes exchanging the
data.
03.TCP handles both congestion and Flow Control
• TCP handles congestion and flow control by controlling the window size.
• TCP reacts to congestion by reducing the sender window size.
• Port numbers indicate which end to end sockets are communicating.
• Port numbers are contained in the TCP header and IP Addresses are contained in the IP header.
• TCP segments are encapsulated into an IP datagram.
• So, TCP header immediately follows the IP header during transmission.
07.TCP can use both selective & cumulative acknowledgements
• TCP uses a combination of Selective Repeat and Go back N protocols.
• In TCP, sender window size = receiver window size.
• In TCP, out of order packets are accepted by the receiver.
• When receiver receives an out of order packet, it accepts that packet but sends an
acknowledgement for the expected packet.
• Receiver may choose to send independent acknowledgements or cumulative
acknowledgement.
• To sum up, TCP is a combination of 75% SR protocol and 25% Go back N protocol.
08.TCP is a Byte stream protocol
• Application layer sends data to the transport layer without any limitation.
• TCP divides the data into chunks where each chunk is a collection of bytes.
• Then, it creates a TCP segment by adding a TCP header to the data chunk.
• TCP segment = TCP header + Data chunk.
09.TCP Provides error checking & recovery mechanism
TCP provides error checking and recovery using three simple techniques-
1. Checksum
2. Acknowledgement
3. Retransmission
Let us discuss each field of TCP header one by one.
1. Source Port-
• Source Port is a 16-bit field.
• It identifies the port of the sending application.
2. Destination Port-
• Destination Port is a 16-bit field.
• It identifies the port of the receiving application.
3. Sequence Number-
• Sequence number is a 32-bit field.
• TCP assigns a unique sequence number to each byte of data contained in the TCP segment.
• This field contains the sequence number of the first data byte.
4. Acknowledgement Number-
• Acknowledgment number is a 32-bit field.
• It contains sequence number of the data byte that receiver expects to receive next from the
sender.
• It is always sequence number of the last received data byte incremented by 1.
5. Header Length-
• Header length is a 4 bit field.
• It contains the length of TCP header.
• It helps in knowing from where the actual data begins.
25
Minimum and Maximum Header length-
26
7. URG Bit-
URG bit indicates whether the segment contains urgent data.
When URG bit is set to 1, the urgent pointer field in the header is considered valid.
8. ACK Bit-
ACK bit indicates whether the acknowledgement number field is valid.
When ACK bit is set to 1, the acknowledgement number contained in the TCP header is valid.
11. SYN Bit-
SYN bit is used to synchronize the sequence numbers.
When SYN bit is set to 1,
• It indicates the receiver that the sequence number contained in the TCP header is the initial
sequence number.
• Request segment sent for connection establishment during Three way handshake contains
SYN bit set to 1.
12. FIN Bit-
FIN bit is used to terminate the TCP connection.
When FIN bit is set to 1,
• It indicates the receiver that the sender wants to terminate the connection.
• FIN segment sent for TCP Connection Termination contains FIN bit set to 1.
13. Window Size-
• Window size is a 16-bit field.
• It contains the size of the receive window of the sender of the segment.
• It advertises how much data (in bytes) the sender of the segment can receive without
acknowledgement.
• Thus, window size is used for Flow Control.
14. Checksum-
• Checksum is a 16-bit field used for error control.
• It verifies the integrity of data in the TCP payload.
• Sender computes the checksum over the TCP header, the data, and a pseudo-header taken from the IP layer, and places it in the checksum field before sending the data.
• Receiver discards any segment that fails the checksum verification.
15. Urgent Pointer-
• Urgent pointer is a 16-bit field.
• It indicates how much data in the current segment counting from the first data byte is
urgent.
• Urgent pointer added to the sequence number indicates the end of urgent data byte.
• This field is considered valid and evaluated only if the URG bit is set to 1.
16. Options-
• Options field is used for several purposes.
• The size of options field varies from 0 bytes to 40 bytes.
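As an illustration of the field layout described above (this sketch is not part of the original notes; the helper name and sample are assumptions), the fixed 20-byte header can be unpacked with Python's struct module:
import struct

def parse_tcp_header(segment: bytes):
    # ! = network byte order; H = 16-bit unsigned, I = 32-bit unsigned
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4    # header length field is counted in 4-byte words
    flags = offset_flags & 0x3F              # lowest 6 bits: URG, ACK, PSH, RST, SYN, FIN
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": flags, "window": window,
            "checksum": checksum, "urgent_ptr": urgent_ptr}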
TCP Connection Establishment (3 Way Handshaking):
Three Way Handshake is a process used for establishing a TCP connection.
Consider-
• Client wants to establish a connection with the server.
• Before Three Way Handshake, both client and server are in closed state.
Step-01: SYN-
For establishing a connection, the client sends a request segment to the server. The request segment contains-
1. Initial Sequence Number-
• Client chooses an initial sequence number and sends it in the sequence number field.
2. SYN Bit Set To 1-
Client sets SYN bit to 1 which indicates the server-
• This segment contains the initial sequence number used by the client.
• It has been sent for synchronizing the sequence numbers.
3. Maximum Segment Size (MSS)-
• Client sends its MSS to the server.
• It specifies the size of the largest data chunk (segment) that the client can receive from the server.
• It is contained in the Options field.
4. Receiving Window Size-
• Client sends its receiving window size to the server.
• It dictates the limit of unacknowledged data that can be sent to the client.
• It is contained in the window size field.
Step-02: SYN + ACK-
After receiving the request segment,
• Server responds to the client by sending the reply segment.
• It informs the client of the parameters at the server side.
1. Initial Sequence Number-
• Server chooses its own initial sequence number and sends it in the sequence number field.
2. SYN Bit Set To 1-
Server sets SYN bit to 1 which indicates the client-
• This segment contains the initial sequence number used by the server.
• It has been sent for synchronizing the sequence numbers.
3. Maximum Segment Size (MSS)-
• Server sends its MSS to the client.
• It specifies the size of the largest data chunk (segment) that the server can receive from the client.
• It is contained in the Options field.
4. Receiving Window Size-
• Server sends its receiving window size to the client.
• It dictates the limit of unacknowledged data that can be sent to the server.
• It is contained in the window size field.
5. Acknowledgement Number-
• Server sends the client's initial sequence number incremented by 1 as the acknowledgement number.
• It dictates the sequence number of the next data byte that server expects to receive from the
client.
6. ACK Bit Set To 1-
• Server sets ACK bit to 1.
• It indicates the client that the acknowledgement number field in the current segment is valid.
Step-03: ACK-
After receiving the reply segment,
• Client acknowledges the response of the server.
• It acknowledges the server by sending a pure acknowledgement, i.e., a segment with the ACK bit set to 1 and acknowledgement number = server's initial sequence number + 1.
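As a small, hedged illustration (the host name and port are placeholders, not from the notes): an application never sends the SYN, SYN+ACK and ACK segments itself; the operating system's TCP stack performs the three-way handshake when the application asks for a connection.
import socket

# connect() blocks until the three-way handshake (SYN, SYN+ACK, ACK) has completed
client = socket.create_connection(("example.com", 80), timeout=5)
print("connected to", client.getpeername())
client.close()   # starts the FIN-based connection release described later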
TCP Data transfer Phase:
• After the connection is established, bidirectional data transfer can take place. The client and
server can both send data and acknowledgements. Data traveling in the same direction as
an acknowledgement is carried on the same segment; the acknowledgement is piggybacked
with the data.
• As an example, after the connection is established, suppose the client sends 2000 bytes of data in two
segments, the server then sends 2000 bytes in one segment, and the client sends one more
segment. The first three segments carry both data and acknowledgements, but the last
segment carries only an acknowledgement because there is no more data to be sent.
• Note the values of the sequence and acknowledgment numbers. The data segments sent by
the client have the PSH (push) flag set so that the server TCP knows to deliver data to the
server process as soon as they are received.
PUSHING DATA:
• Delayed transmission and delayed delivery of data may not be acceptable by the
application program.
• TCP can handle such a situation. The application program at the sending site can request a
push operation. This means that the sending TCP must not wait for the window to be filled.
It must create a segment and send it immediately.
• The sending TCP must also set the push bit (PSH) to let the receiving TCP know that the
segment includes data that must be delivered to the receiving application program as soon
as possible and not to wait for more data to come.
• The PSH flag in the TCP header informs the receiving host that the data should be pushed
up to the receiving application immediately.
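A minimal sketch of how an application typically influences delayed transmission (the host name is a placeholder): the PSH bit itself is set by the TCP stack, but disabling Nagle's algorithm with TCP_NODELAY asks the stack to transmit small writes immediately instead of buffering them.
import socket

sock = socket.create_connection(("example.com", 80))
# send small writes immediately rather than waiting to fill a larger segment
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.close()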
The URG Flag
The URG flag is used to inform a receiving station that certain data within a segment is urgent
and should be prioritized. If the URG flag is set, the receiving station evaluates the urgent
pointer, a 16-bit field in the TCP header. This pointer indicates how much of the data in the
segment, counting from the first byte, is urgent.
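For completeness, a hedged sketch of how urgent data is exposed to applications (host and port are assumptions): the Berkeley socket API calls it out-of-band data, and sending with MSG_OOB makes the TCP stack set the URG bit and the urgent pointer in the outgoing segment.
import socket

sock = socket.create_connection(("example.com", 7))   # assumed server, for illustration only
sock.sendall(b"ordinary data")
sock.send(b"!", socket.MSG_OOB)   # one byte of urgent (out-of-band) data
sock.close()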
TCP Connection Termination or Connection Release (FIN Segment)
A TCP connection is terminated using FIN segments in which the FIN bit is set to 1. Since each direction of the connection is closed independently, the release normally takes four segments (FIN, ACK, FIN, ACK) and is therefore often called a four-way handshake.
Consider-
• There is a well-established TCP connection between the client and server.
• Client wants to terminate the connection.
The following steps are followed in terminating the connection-
Step-01:
For terminating the connection,
• Client sends a FIN segment to the server with FIN bit set to 1.
• Client enters the FIN_WAIT_1 state.
• Client waits for an acknowledgement from the server.
Step-02:
After receiving the FIN segment,
• Server frees up its buffers.
• Server sends an acknowledgement to the client.
• Server enters the CLOSE_WAIT state.
Step-03:
After receiving the acknowledgement, client enters the FIN_WAIT_2 state.
Now,
• The connection from client to server is terminated i.e. one way connection is closed.
• Client cannot send any more data to the server since the server has released its buffers.
• Pure acknowledgements can still be sent from the client to server.
• The connection from server to client is still open i.e. one way connection is still open.
• Server can send both data and acknowledgements to the client.
Step-04:
Now, suppose server wants to close the connection with the client.
For terminating the connection,
• Server sends a FIN segment to the client with FIN bit set to 1.
• Server waits for an acknowledgement from the client.
Step-05:
After receiving the FIN segment,
• Client frees up its buffers.
• Client sends an acknowledgement to the server (not mandatory).
• Client enters the TIME_WAIT state.
TIME_WAIT State-
• The TIME_WAIT state allows the client to resend the final acknowledgement if it gets lost.
• The time spent by the client in TIME_WAIT state depends on the implementation.
• The typical values are 30 seconds, 1 minute and 2 minutes.
• After the wait, the connection gets formally closed.
TCP Flow Control-
• TCP uses a sliding window to handle flow control. The sliding window protocol used by
TCP, however, is something between the Go-Back-N and Selective Repeat sliding window.
• The sliding window protocol in TCP looks like the Go-Back-N protocol because it does
not use NAKs; it looks like Selective Repeat because the receiver holds the out-of-order
segments until the missing ones arrive.
• There are two big differences between this sliding window and the one we used at the data
link layer.
➢ The sliding window of TCP is byte-oriented; the one we discussed in the data link layer is
frame-oriented.
➢ The TCP's sliding window is of variable size; the one we discussed in the data link layer
was of fixed size
• The sending system cannot send more bytes than space that is available in the receive buffer
on the receiving system. TCP on the sending system must wait to send more data until all
bytes in the current send buffer are acknowledged by TCP on the receiving system.
• On the receiving system, TCP stores received data in a receive buffer. TCP acknowledges
receipt of the data, and advertises (communicates) a new receive window to the sending
system. The receive window represents the number of bytes that are available in the receive
buffer. If the receive buffer is full, the receiving system advertises a receive window size
of zero, and the sending system must wait to send more data.
• After the receiving application retrieves data from the receive buffer, the receiving system
can then advertise a receive window size that is equal to the amount of data that was read.
Then, TCP on the sending system can resume sending data.
Sliding window:
• The window is opened, closed, or shrunk. These three activities are in the control of the
receiver (and depend on congestion in the network), not the sender.
• The sender must obey the commands of the receiver in this matter.
Opening a window means moving the right wall to the right. This allows more new bytes in
the buffer that are eligible for sending.
Closing the window means moving the left wall to the right. This means that some bytes have
been acknowledged and the sender need not worry about them anymore.
Shrinking the window means moving the right wall to the left. The size of the window at one
end is determined by the lesser of two values: receiver window (rwnd) or congestion window
(cwnd).
The receiver window is the value advertised by the opposite end in a segment containing
acknowledgment. It is the number of bytes the other end can accept before its buffer overflows
and data are discarded.
TCP provides reliability using error control. Error control includes mechanisms for detecting
corrupted segments, lost segments, out-of-order segments, and duplicated segments. Error control
also includes a mechanism for correcting errors after they are detected. Error detection and
correction in TCP is achieved through the use of three simple tools: checksum, acknowledgment,
and time-out.
Checksum
Each segment includes a checksum field which is used to check for a corrupted segment. If the
segment is corrupted, it is discarded by the destination TCP and is considered as lost. TCP uses a
16-bit checksum that is mandatory in every segment
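A minimal sketch of the 16-bit Internet checksum idea (not the notes' own code): the 16-bit words of the segment are added with one's-complement arithmetic and the result is complemented; real TCP also includes a pseudo-header taken from the IP layer in the calculation.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back into 16 bits
    return ~total & 0xFFFF

print(hex(internet_checksum(b"example segment bytes")))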
Acknowledgment
TCP uses acknowledgments to confirm the receipt of data segments. Control segments that carry
no data but consume a sequence number are also acknowledged. ACK segments are never
acknowledged.
Retransmission
The heart of the error control mechanism is the retransmission of segments. When a segment is
corrupted, lost, or delayed, it is retransmitted. In modern implementations, a segment is
retransmitted on two occasions: when a retransmission timer expires or when the sender receives
three duplicate ACKs.
Retransmission after RTO: A recent implementation of TCP maintains one retransmission time-out (RTO) timer
for all outstanding (sent, but not acknowledged) segments. When the timer expires, the earliest
outstanding segment is retransmitted, even though the lack of a received ACK can be due to a delayed
segment, a delayed ACK, or a lost acknowledgement.
Out-of-Order Segments
• When a segment is delayed, lost, or discarded, the segments following that segment arrive
out of order. Originally, TCP was designed to discard all out-of-order segments
• Most implementations today do not discard the out-of-order segments. They store them
temporarily and flag them as out-of- order segments until the missing segment arrives. Note,
however, that the out-of-order segments are not delivered to the process. TCP guarantees
that data are delivered to the process in order.
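The buffering behaviour described above can be sketched as follows (class and variable names are illustrative assumptions): out-of-order data is held back, and only contiguous bytes starting at the expected sequence number are handed to the process.
class InOrderReceiver:
    def __init__(self, initial_seq):
        self.expected = initial_seq   # sequence number of the next byte expected
        self.buffer = {}              # out-of-order chunks keyed by their sequence number

    def receive(self, seq, data):
        self.buffer[seq] = data
        delivered = b""
        while self.expected in self.buffer:          # deliver contiguous bytes only
            chunk = self.buffer.pop(self.expected)
            delivered += chunk
            self.expected += len(chunk)
        return delivered, self.expected              # in-order data + cumulative ACK value

rx = InOrderReceiver(1000)
print(rx.receive(1100, b"late"))      # out of order: nothing delivered, ACK stays at 1000
print(rx.receive(1000, b"x" * 100))   # gap filled: both chunks delivered, ACK becomes 1104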
The size of the sender window is determined by the following two factors-
1. Receiver window size
2. Congestion window size
1. Receiver Window-
• Sender should not send data greater than receiver window size.
• Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
• So, sender should always send data less than or equal to receiver window size.
• Receiver dictates its window size to the sender through TCP Header.
2. Congestion Window-
• Sender should not send data greater than congestion window size.
• Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
• So, sender should always send data less than or equal to congestion window size.
• Different variants of TCP use different approaches to calculate the size of congestion window.
• Congestion window is known only to the sender and is not sent over the links.
So, always-
Sender window size = Minimum (Receiver window size, Congestion window size)
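Expressed as a tiny sketch (the function name is an assumption), the rule above is simply the minimum of the two windows:
def sender_window(receiver_window: int, congestion_window: int) -> int:
    return min(receiver_window, congestion_window)

print(sender_window(receiver_window=64000, congestion_window=16000))   # -> 16000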
TCP Congestion Control-
TCP handles congestion using the following three phases-
1. Slow Start Phase-
• Initially, sender sets congestion window size = Maximum Segment Size (1 MSS).
• After receiving each acknowledgment, sender increases the congestion window size by 1
MSS.
• In this phase, the size of the congestion window increases exponentially (it roughly doubles every round trip time).
• Slow start continues until the congestion window size reaches the slow start threshold.
2. Congestion Avoidance Phase-
After reaching the threshold,
• Sender increases the congestion window size linearly to avoid congestion.
• On receiving acknowledgements for one complete window of segments (roughly one round trip time), sender increments the congestion window size by 1 MSS.
The formula followed is-
Congestion window size = Congestion window size + 1 MSS (per round trip time)
• This phase continues until the congestion window size becomes equal to the receiver
window size.
3. Congestion Detection Phase-
Case-01: Detection On Time Out-
Time Out Timer expires before receiving the acknowledgement for a segment.
• This case suggests the stronger possibility of congestion in the network.
• There are chances that a segment has been dropped in the network.
Reaction-
In this case, sender reacts by-
• Setting the slow start threshold to half of the current congestion window size.
• Decreasing the congestion window size to 1 MSS.
• Resuming the slow start phase.
Case-02: Detection on Receiving 3 Duplicate Acknowledgements-
Sender receives 3 duplicate acknowledgements for a segment.
• This case suggests the weaker possibility of congestion in the network.
• There are chances that a segment has been dropped but few segments sent later may have
reached.
Reaction-
In this case, sender reacts by-
• Setting the slow start threshold to half of the current congestion window size.
• Decreasing the congestion window size to slow start threshold.
• Resuming the congestion avoidance phase.
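The three phases above can be summarized in a simplified simulation (a sketch in MSS units, not an exact TCP implementation; all names are assumptions):
def on_ack_round(cwnd, ssthresh, rwnd):
    """Grow cwnd once per round trip of successful acknowledgements."""
    if cwnd < ssthresh:
        cwnd *= 2          # slow start: exponential growth
    else:
        cwnd += 1          # congestion avoidance: linear growth
    return min(cwnd, rwnd) # never exceed the receiver window

def on_timeout(cwnd):
    ssthresh = max(cwnd // 2, 1)
    return 1, ssthresh     # cwnd back to 1 MSS, resume slow start

def on_three_duplicate_acks(cwnd):
    ssthresh = max(cwnd // 2, 1)
    return ssthresh, ssthresh   # cwnd drops to the threshold, resume congestion avoidance

cwnd, ssthresh, rwnd = 1, 8, 32
for _ in range(6):
    cwnd = on_ack_round(cwnd, ssthresh, rwnd)
    print(cwnd)                 # prints 2, 4, 8, 9, 10, 11
cwnd, ssthresh = on_timeout(cwnd)
print(cwnd, ssthresh)           # prints 1 5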
Differences between TCP & UDP:
Definition: TCP establishes a virtual circuit before transmitting the data, whereas UDP transmits the data
directly to the destination computer without verifying whether the receiver is ready to receive it or
not.
Important Questions
1. Explain in detail TCP connection management. Also draw the header part of UDP protocol.
Explain the components. In what application UDP is used and why?
2. Discuss in detail about crash recovery. Also explain how TCP connections are released using the
four way handshakes
3. Explain how TCP connections are established using the three way handshakes
4. Elucidate the elements of a Transport protocol?
COMPUTER NETWORKS UNIT-5
UNIT-V
APPLICATION LAYER
M.Ramanjaneyulu
Associate Professor
Contents
• Introduction
• Providing services
• Application layer paradigms:
➢ Client server model
➢ HTTP
➢ E-mail
➢ WWW
➢ TELNET
➢ DNS
Introduction:
• The application layer is responsible for providing services to the user.
• The application layer enables the user, whether human or software, to access the network.
It provides user interfaces and support for services such as electronic mail, file access
and transfer, access to system resources, surfing the World Wide Web, and network
management.
Application Layer Services
1. Mail Services: This layer provides the basis for E-mail forwarding and storage.
2. Network Virtual Terminal: It allows a user to log on to a remote host. The application
creates software emulation of a terminal at the remote host. User's computer talks to the
software terminal which in turn talks to the host and vice versa. Then the remote host
believes it is communicating with one of its own terminals and allows user to log on.
3. Directory Services: This layer provides access for global information about various services.
4. File Transfer, Access and Management (FTAM): It is a standard mechanism to access and
manage files. Users can access files on a remote computer, manage them, and also
retrieve files from a remote computer.
Application-Layer Paradigms :
• It should be clear that to use the Internet, we need two application programs to interact with
each other:
• one running on a computer somewhere in the world, the other running on another computer
somewhere else in the world.
• The two programs need to send messages to each other through the Internet infrastructure.
Two remote application processes can communicate mainly in two different fashions:
➢ Client-Server paradigm
➢ Peer-to-Peer paradigm
Peer-to-Peer Model: Both remote processes execute at the same level and exchange data
using some shared resource.
Client-Server: One remote process acts as a Client and requests some resource from
another application process acting as Server.
In the client-server model, any process can act as a server or a client. It is not the type of machine,
the size of the machine, or its computing power which makes it a server; it is the ability of
serving requests that makes a machine a server.
Client (Browser):
• A variety of vendors offer commercial browsers that interpret and display a Web
document, and all use nearly the same architecture.
• Each browser usually consists of three parts: a controller, client protocol, and interpreters.
• The controller receives input from the keyboard or the mouse and uses the client
programs to access the document.
• After the document has been accessed, the controller uses
one of the interpreters to display the document on the screen. The interpreter can
be HTML, Java, or JavaScript, depending on the type of document
• The client protocol can be one of the protocols described
previously such as FTP or HTTP.
Server:
The Web page is stored at the server each time a client request arrives, the corresponding
document is sent to the client. To improve efficiency, servers normally store requested files
in a cache in memory; memory is faster to access than disk. A server can also become more
efficient through multithreading or multiprocessing. In this case, a server can answer more than
one request at a time.
Peer-to-Peer (P2P) architecture:
•Two or more computers are connected and are able to share resources without having a
dedicated server
• Every end device can function as a client or server on a ‘per request’ basis
• Resources are decentralized (information can be located anywhere)
• Running applications in hybrid mode allows for a centralized directory of files even
though the files themselves may be on multiple machines
• Unlike the pure client-server model, in a P2P network a device can act as both the client and the server within the same
communication
DRAWBACKS:
• Difficult to enforce security and policies
• User accounts and access rights have to be set individually on each peer device
WWW:
• The World Wide Web was invented by a British scientist, Tim Berners-Lee in 1989.
• World Wide Web, which is also known as a Web, is a collection of websites or web
pages stored in web servers and connected to local computers through the internet.
• These websites contain text pages, digital images, audios, videos, etc.
• Users can access the content of these sites from any part of the world over the internet
using their devices such as computers, laptops, cell phones, etc.
• The WWW, along with internet, enables the retrieval and display of text and media to
your device.
• The building blocks of the Web are web pages which are formatted in HTML and
connected by links called "hypertext" or hyperlinks and accessed by HTTP.
• These links are electronic connections that link related pieces of information so that users
can access the desired information quickly.
• Hypertext offers the advantage to select a word or phrase from text and thus to access
other pages that provide additional information related to that word or phrase.
• A web page is given an online address called a Uniform Resource Locator (URL).
• A particular collection of web pages that belong to a specific URL is called a website,
e.g., www.facebook.com, www.google.com, etc.
• World Wide Web is like a huge electronic book whose pages are stored on multiple
servers across the world.
Features of WWW:
• Hypertext Information System
• Cross-Platform
• Distributed
• Open Standards and Open Source
• Uses Web Browsers to provide a single interface for many services
• Dynamic, Interactive and Evolving.
• “Web 2.0”
The URL can optionally contain the port number of the server. If the port is included, it is
inserted between the host and the path, and it is separated from the host by a colon.
The documents in the WWW can be grouped into three broad categories: static, dynamic, and
active. The category is based on the time at which the contents of the document are
determined.
1. Static documents
Static documents are fixed-content documents that are created and stored in a server. The
client can get only a copy of the document. When the client accesses the document, a copy of
document is sent. The user can then use a browsing program to display the document.
Advantages: simple, reliable, efficient
Disadvantages: inflexible-it can be inconvenient and costly to change static documents.
2. Dynamic Documents
A dynamic document is created by a Web server whenever a browser requests the document.
When a request arrives, the Web server runs an application program or a script that creates the
dynamic document. The server returns the output of the program or script as a response to the
browser that requested the document.
3. Active Documents
For many applications, we need a program or a script to be run at the client site. These are
called active documents.
Hypertext Transfer Protocol (HTTP):
• HTTP is short for Hyper Text Transfer Protocol.
• It is an application layer protocol.
• It is mainly used for the retrieval of data from websites throughout the internet.
• It works on the top of TCP/IP suite of protocols.
Whenever a client requests some information from a website server (say, by clicking on a hyperlink),
the browser sends a request message to the HTTP server for the requested objects.
Then-
• HTTP opens a connection between the client and server through TCP.
• HTTP sends a request to the server which collects the requested data.
• HTTP sends the response with the objects back to the client.
• HTTP closes the connection.
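A minimal sketch of the four steps above using Python's standard http.client module (the host and path are placeholders, not values from the notes):
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=5)   # TCP connection is opened
conn.request("GET", "/index.html")                                 # request message is sent
response = conn.getresponse()                                      # status line, headers and body
print(response.status, response.reason)
body = response.read()
conn.close()                                                       # connection is closed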
HTTP Connections-
Non-Persistent HTTP Connections:
• A new separate TCP connection is used for each object.
• HTTP 1.0 supports non-persistent connections by default.
Persistent HTTP Connections:
• A single TCP connection is used for sending multiple objects one after the other.
• HTTP 1.1 supports persistent connections by default.
Request Line and Status line :
The first line in the Request message is known as the request line, while the first line in the
Response message is known as the Status line.
Header :
The header is used to exchange the additional information between the client and the server. The
header mainly consists of one or more header lines. Each header line has a header name, a colon,
space, and a header value.
Body:
It can be present in the request message or in the response message. The body part mainly
contains the document to be sent or received.
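As an illustration (the host and document are hypothetical), a request message consists of a request line, header lines, a blank line and an optional body, while a response message begins with a status line:
GET /index.html HTTP/1.1
Host: www.example.com
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1270
(blank line, followed by the body of the document)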
Need of Email:
By making use of Email, we can send any message at any time to anyone.
• We can send the same message to several people at the same time.
• It is a very fast and efficient way of transferring information.
• The email system is very fast as compared to the Postal system.
• Information can be easily forwarded to coworkers without retyping it.
User Agent (UA):
It is a program that is mainly used to send and receive email. It is also known as an email
reader. The User Agent is used to compose, send and receive emails.
Message Transfer Agent:
The actual process of transferring the email is done through the Message Transfer Agent(MTA).
In the first and second stages of email delivery, we make use of SMTP.
Architecture of Email:
1. First Scenario
When the sender and the receiver of an E-mail are on the same system, then there is the need for
only two user agents.
2. Second Scenario:
In this scenario, the sender and receiver of an e-mail are users on two different
systems, and the message needs to be sent over the Internet. In this case, we need to make use of
User Agents (UA) and Message Transfer Agents (MTA).
3. Third Scenario
In this scenario, the sender is connected to the mail server via a point-to-point WAN (either a
dial-up modem or a cable modem), while the receiver is directly connected to the system as it
was in the second scenario.
Also in this case sender needs a User agent (UA) in order to prepare the message. After
preparing the message the sender sends the message via a pair of MTA through LAN or WAN.
4. Fourth Scenario
In this scenario, the receiver is also connected to his mail server with the help of WAN or LAN.
When the message arrives the receiver needs to retrieve the message; thus there is a need for
another set of client/server agents. The recipient makes use of MAA (Message access agent)
client in order to retrieve the message.
In this case, the MAA client sends a request to the Mail Access Agent (MAA) server for the
transfer of the stored messages.
This scenario is most commonly used today.
Structure of Email :
1. Header
2. Body
Header: The header part of the email generally contains the sender's address as well as the
receiver's address and the subject of the message.
Body: The Body of the message contains the actual information that is meant for the receiver.
Email Address: In order to deliver the email, the mail handling system must make use of an
addressing system with unique addresses. In the Internet, an email address consists of two parts:
• Local part
• Domain Name
Local Part:
It is used to define the name of a special file, commonly called the user mailbox; it is
the place where all the mail received for the user is stored for retrieval by the Message Access
Agent.
Domain Name:
The second part of the address is the domain name.
The local part and the domain name are separated by the @ symbol.
Email uses following protocols for storing & delivering messages, They are:
1. SMTP (Simple Mail Transfer Protocol)
2. POP (Post Office Protocol)
3. IMAP (Internet Message Access Protocol)
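As a hedged sketch of the first stage of delivery (the server name and addresses are placeholders), Python's standard smtplib can hand a message to an SMTP server:
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Hello"
msg.set_content("This is the body of the message.")

with smtplib.SMTP("smtp.example.com", 25) as server:   # assumed outgoing mail server
    server.send_message(msg)                            # SMTP carries the message to the MTA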
MIME Protocol:
MIME is short for Multipurpose Internet Mail Extensions.
It is a supplementary protocol that allows non-ASCII data to be sent through email, such as-
• audio
• images
• text
• video
• Other application-specific data (it can be pdf, Microsoft Word documents, etc.)
• With MIME, email is not restricted only to textual data.
Let us take an example where a user wants to send an Email through the user agent, and this
email is in a non-ASCII format. So here we use the MIME protocol that mainly converts this
non-ASCII format into the 7-bit NVT ASCII format.
The message is transferred via the email system to the other side in the 7-bit NVT ASCII format, and
then the MIME protocol at the receiver side converts it back into the original non-ASCII code
so that the receiver can read it.
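The idea can be sketched with base64, one of the content-transfer-encodings used by MIME (the sample data is illustrative): arbitrary binary or non-ASCII bytes are turned into 7-bit ASCII text for transfer and decoded back at the receiver.
import base64

original = "नमस्ते".encode("utf-8") + b"\x00\x01\x02"   # non-ASCII text plus binary bytes
encoded = base64.b64encode(original)                     # 7-bit ASCII-safe representation
print(encoded.decode("ascii"))
decoded = base64.b64decode(encoded)                      # receiver side reverses the encoding
assert decoded == original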
A MIME header is inserted at the beginning of any email transfer that uses MIME.
Features of MIME:
MIME Header:
The MIME header is mainly added to the original e-mail header section in order to define the
transformation. Given below are five headers that are added to the original header:
1. MIME-Version
2. Content-Type
3. Content-Transfer-Encoding.
4. Content-Id
5. Content-Description
1.MIME-Version:
This header of MIME defines the version of MIME used. The currently used
version of MIME is 1.0.
2.Content-Type:
This header of MIME is used to define the type of data that is used in the body of the message. In
this, the content-type and content-subtype are just separated by a slash.
Depending upon the subtype, the header may also contain other parameters.
3.Content-Transfer-Encoding:
This header of the MIME mainly defines the method that is used to encode the messages into 0s
and 1s for transport.
4.Content-Id :
This header of the MIME is used to uniquely identify the whole message in the multiple-message
environment.
5. Content-Description:
This header of MIME provides a plain-text description of the body, for example stating whether the body is an image, audio, or video.
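As an illustration (all values are hypothetical), the five MIME headers added to an outgoing message might look like this:
MIME-Version: 1.0
Content-Type: image/jpeg
Content-Transfer-Encoding: base64
Content-Id: <photo-001@example.com>
Content-Description: holiday photograph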
Advantages of MIME :
Some benefits of using MIME are as follows:
• It makes it possible to send attachments such as images, audio, video and documents along with email messages.
• It supports non-ASCII character sets, so messages are not restricted to plain English text.
• It allows a single message to carry multiple parts of different content types.
TELNET:
TELNET is short for Terminal Network. It is the standard TCP/IP protocol for virtual
terminal service, as proposed by the International Organization for Standardization (ISO). It allows a
user on a local machine to log on to a remote machine and work as if sitting at that machine.
FTP Protocol:
FTP means File Transfer Protocol and it is the standard mechanism provided by the TCP/IP in
order to copy a file from one host to another.
• File Transfer Protocol is a protocol present at the Application layer of the OSI Model.
• FTP is one of the easier and simpler ways to exchange files over the Internet, although by itself it transfers data and credentials in plain text.
• FTP is different from the other client/server applications as this protocol establishes two
connections between the hosts.
➢ where one connection is used for the data transfer and is known as a data
connection.
➢ while the other connection is used to control information like commands and
responses and this connection is termed as control connection.
Working of FTP :
The figure given below shows the basic model of the File Transfer Protocol, where the client comprises
three components: the user interface, the client control process, and the client data transfer process. On
the other hand, the server comprises two components: the server control process and
the server data transfer process.
1. Also, the control connection is made between the control processes while the data
connection is made between the data transfer processes.
2. The control Connection remains connected during the entire interactive session of FTP
while the data connection is opened and then closed for each file transferred.
3. In simple terms when a user starts the FTP connection then the control connection opens,
while it is open the data connection can be opened and closed multiple times if several
files need to be transferred.
Data Structure :
Given below are three data structures supported by FTP:
1. File Structure: In the file data structure, the file is basically a continuous stream of bytes.
2. Record Structure: In the record data structure, the file is divided into records.
3. Page Structure: In the page data structure, the file is divided into pages, where each page has a
page number and a page header. These pages can be stored and accessed either randomly or
sequentially.
FTP Clients
It is basically software that is designed to transfer the files back-and-forth between a computer
and a server over the Internet. The FTP client needs to be installed on your computer and can
only be used with the live connection to the Internet.
Some of the commonly used FTP clients are Dreamweaver, FireFTP, and Filezilla.
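A small sketch of an FTP client session using Python's standard ftplib (host, credentials and file name are placeholders): the control connection stays open for the whole session while data connections are opened per transfer.
from ftplib import FTP

ftp = FTP("ftp.example.com")                     # control connection is opened (port 21)
ftp.login("user", "password")
ftp.retrlines("LIST")                            # a data connection carries the listing
with open("report.pdf", "wb") as f:
    ftp.retrbinary("RETR report.pdf", f.write)   # another data connection carries the file
ftp.quit()                                       # control connection is closed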
Features of FTP :
Following are the features offered by the File transfer protocol:
Transmission Modes
FTP can transfer a file across the data connection using one of the three given modes:
1. Stream Mode :
Stream Mode is the default mode of transmission used by FTP. In this mode, the File is
transmitted as a continuous stream of bytes to TCP.
If the data is simply a stream of bytes, no End-of-File marker is needed; closing of the data
connection by the sender is treated as the EOF (end-of-file). If the data is
divided into records (that is, the record structure), each record is terminated by a 1-byte EOR (end-of-
record) marker.
2. Block Mode :
Block mode is used to deliver the data from FTP to TCP in the form of blocks of data. Each
block of data is preceded by a 3-byte header, where the first byte represents the block
descriptor while the second and third bytes represent the size of the block.
3. Compressed Mode:
In this mode, if the file to be transmitted is very big, the data can be compressed. The
compression method normally used is run-length encoding. In the case of a text file, repeated spaces/blanks
are usually compressed, while in the case of a binary file, runs of null characters are compressed.
Advantages of FTP :
Disadvantages of FTP :
DNS (Domain Name System) - Domain Name Space:
In DNS, names are organized in an inverted-tree structure called the domain name space, with the root at the top.
The tree can have a maximum of 128 levels, from Level 0 (root) to Level 127.
Label :
Each node of the tree must have a label. A Label is a string having a maximum of 63 characters.
• The root label is basically a null string (means an empty string).
• The domain name space requires that the children of a node (i.e., the branches from the
same node) have different labels, which guarantees the uniqueness of the domain
names.
Domain Name :
Each node of the tree has a domain name.
• A Full domain name is basically a sequence of labels that are usually separated by dots(.).
• The domain name is always read from the node up to the root.
• The last label is the label of the root, which is always null (an empty string).
• This means that a full domain name always ends in the null label, i.e., the
last character of a full domain name is always a dot.
2 Zone
Since the complete domain name hierarchy cannot be stored on a single server, it is divided
among many servers. What a server is responsible for or has authority over is called a zone. We
can define a zone as a contiguous part of the entire tree
3 Root Server
A root server is a server whose zone consists of the whole tree. A root server usually does not
store any information about domains but delegates its authority to other servers, keeping
references to those servers. There are several root servers, each covering the whole domain
name space. The servers are distributed all around the world.
4 Primary and Secondary Servers
A primary server is a server that stores a file about the zone for which it is an authority. It is
responsible for creating, maintaining, and updating the zone file. It stores the zone file on a local
disk
A secondary server is a server that transfers the complete information about a zone from another
server (primary or secondary) and stores the file on its local disk. The secondary server neither
creates nor updates the zone files.
1 Generic Domain :
The generic domains define registered hosts according to their generic behavior. Each node in
the tree defines a domain, which is an index to the domain name space database
2 Country Domains
The country domains section uses two-character country abbreviations (e.g., us for United
States). Second labels can be organizational, or they can be more specific, national designations.
The United States, for example, uses state abbreviations as a subdivision of us (e.g., ca.us.).
3 Inverse Domain :
The inverse domain is used to map an address to a name.
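As a hedged sketch (the host name is a placeholder), a forward lookup maps a name to an address and a reverse lookup uses the inverse domain to map an address back to a name:
import socket

address = socket.gethostbyname("www.example.com")   # name -> IP address
print(address)

# reverse (inverse) lookup; this only succeeds if a PTR record exists for the address
name, aliases, addresses = socket.gethostbyaddr(address)
print(name)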
RSA algorithm (Rivest-Shamir-Adleman) :
• The RSA algorithm is a public-key encryption technique and one of the most widely used methods
of secure data transmission. It was invented by Rivest, Shamir and Adleman in the year 1978 and
hence named the RSA algorithm.
• RSA is an asymmetric cryptography algorithm. Asymmetric means that
it works on two different keys, i.e., a Public Key and a Private Key. As the name describes,
the Public Key is given to everyone and the Private Key is kept private.
• If the public key of a user is used for encryption, the private key of the same user must be used in
the decryption process.
RSA Algorithm:
Here we need to find out both public and private keys
Step 1: The initial procedure begins with selection of two prime numbers namely p and q, and
then calculating their product n, as shown n=p*q
Step 2: Then calculate ϕ(n) = (p-1)*(q-1)
Step 3: Let e and d denote the public and private key exponents.
Step 4: Choose e such that gcd(e, ϕ(n)) = 1
Step 5: Compute d such that (d*e) mod ϕ(n) = 1
Step 6: Public key = {e, n}, Private key = {d, n}
Step 7: After finding the public and private keys, the encryption process starts, i.e., converting plain text
to cipher text. Here the plain text, represented as a number, must be less than n.
Step 8: Encryption Formula
Consider a sender who sends a plain text message to someone whose public key is (e, n). To
encrypt the plain text message in the given scenario, use the following formula-
C = P^e mod n
Step 9: Decryption Formula
P = C^d mod n
Example: Let p = 3 and q = 5. Then n = p*q = 15 and ϕ(n) = (p-1)*(q-1) = 2*4 = 8.
• Choosing e such that gcd(e, ϕ(n)) = 1 gives the candidates 3, 5, 7.
So e = 3
• Generating the private key d such that (d*e) mod ϕ(n) = 1:
3*3 mod 8 = 1
d = 3
• Public key = {e, n} = {3, 15}
• Private key = {d, n} = {3, 15}
• ENCRYPTION – the plain text must be less than n, e.g. P = 4 < 15
C = P^e mod n
= 4^3 mod 15
= 64 mod 15
Cipher text = 4
• DECRYPTION
P = C^d mod n
= 4^3 mod 15
= 64 mod 15
Plain text = 4
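A toy sketch of the same calculation (using the small numbers p = 3, q = 5, P = 4 from the example above; real RSA uses very large primes and padding):
def rsa_demo(p=3, q=5, e=3, plaintext=4):
    n = p * q                       # n = 15
    phi = (p - 1) * (q - 1)         # phi(n) = 8
    d = pow(e, -1, phi)             # modular inverse, so (d*e) mod phi = 1  -> d = 3
    cipher = pow(plaintext, e, n)   # C = P^e mod n
    recovered = pow(cipher, d, n)   # P = C^d mod n
    return n, phi, d, cipher, recovered

print(rsa_demo())   # (15, 8, 3, 4, 4)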
Important questions
1. Define name space. What is the difference between flat name space and hierarchical name
space? Also discuss about DNS.
2. Discuss in detail about TELNET
3. Explain in detail about HTTP and its message formats
4. What is HTTP? Discuss about various HTTP request methods.
5. Write short notes on the following:
(a) MIME (b) FTP (c) DNS
6. Write a Brief Notes on Following
a) World Wide Web b) E-Mail c) Telnet
7. a) What is RSA? Discuss RSA Algorithm Procedure with example
b) What are the Application Layer Services?
8. a) What is DNS? What are the services provided by DNS and explain how it works?
9. Compare and contrast client/server with peer-to-peer data transfer over networks?