DCN-Unit I
Unit-I
Network Structure and Architectures
A Peer-To-Peer network is a network in which all the computers are linked together with
equal privileges and responsibilities for processing data.
Peer-To-Peer network is useful for small environments, usually up to 10 computers.
Peer-To-Peer network has no dedicated server.
Special permissions are assigned to each computer for sharing the resources, but this can
lead to a problem if the computer with the resource is down.
Disadvantages of Peer-To-Peer Network
A Peer-To-Peer network has no centralized system, so files and folders cannot be
centrally backed up.
It has security issues, because each device has to manage itself.
Data is stored on different computer systems, so it is difficult to back up the data.
Ensuring that viruses are not introduced to the network is the responsibility of each
individual user.
Users do not need to log on to their workstations, so access is not controlled through permissions.
Because each computer may be accessed by others, performance can slow down.
Client/Server network
A Client/Server network is a network model designed for end users, called clients, to
access resources such as songs, videos, etc. from a central computer known as the server.
The central controller is known as a server while all other computers in the network are
called clients.
A server performs all the major operations such as security and network management.
A server is responsible for managing all the resources such as files, directories, printer, etc.
All the clients communicate with each other through a server.
For example, if client1 wants to send some data to client 2, then it first sends the request to
the server for the permission. The server sends the response to the client 1 to initiate its
communication with the client 2.
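The request/response flow described above can be sketched with a minimal TCP server and client in Python. This is an illustration only: the port is chosen by the OS, and the message contents are arbitrary.

```python
import socket
import threading

# Minimal sketch of the client/server model: one central server accepts a
# client's request and replies. Port 0 lets the OS pick a free port.

def run_server(srv):
    conn, _ = srv.accept()           # wait for one client to connect
    data = conn.recv(1024)           # receive the client's request
    conn.sendall(b"ACK: " + data)    # the server mediates and responds
    conn.close()
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))           # OS-assigned port
port = srv.getsockname()[1]
srv.listen(1)                        # listening before the client connects

t = threading.Thread(target=run_server, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")                # the client sends its request
reply = cli.recv(1024)               # ...and receives the server's response
cli.close()
t.join()
print(reply.decode())                # ACK: hello
```

In a real deployment the server would loop, serving many clients concurrently; here a single exchange is enough to show the central-controller role.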
Advantages of Client/Server network
A Client/Server network has a centralized system, so the data can be backed up easily.
A Client/Server network has a dedicated server that improves the overall performance of
the whole system.
Security is better in Client/Server network as a single server administers the shared
resources.
It also increases the speed of resource sharing.
Disadvantages of Client/Server network
A Client/Server network is expensive, as it requires a server with a large memory.
A server has a Network Operating System(NOS) to provide the resources to the clients, but
the cost of NOS is very high.
It requires a dedicated network administrator to manage all the resources.
OSI Reference Model
OSI (Open Systems Interconnection) is a reference model that describes how
information from a software application on one computer moves through a physical medium to
a software application on another computer.
OSI consists of seven layers, and each layer performs a particular network function.
The OSI model was developed by the International Organization for Standardization (ISO) in 1984,
and it is now considered an architectural model for inter-computer communication.
OSI model divides the whole task into seven smaller and manageable tasks. Each layer is
assigned a particular task.
Each layer is self-contained, so that task assigned to each layer can be performed independently.
The OSI model is divided into two layers: upper layers and lower layers.
The upper layers of the OSI model mainly deal with application-related issues, and they are
implemented only in software.
The application layer is closest to the end user. Both the end user and the application layer
interact with the software applications. An upper layer refers to the layer just above another layer.
The lower layer of the OSI model deals with the data transport issues. The data link layer and the
physical layer are implemented in hardware and software.
The physical layer is the lowest layer of the OSI model and is closest to the physical medium.
The physical layer is mainly responsible for placing the information on the physical medium.
The OSI model has seven layers, each with its own functions. The seven layers are listed
below:
Physical Layer
Data-Link Layer
Network Layer
Transport Layer
Session Layer
Presentation Layer
Application Layer
Physical Layer
The main functionality of the physical layer is to transmit the individual bits from one node to
another node.
It is the lowest layer of the OSI model.
It establishes, maintains and deactivates the physical connection.
It specifies the mechanical, electrical and procedural network interface specifications.
Functions of a Physical layer:
Line Configuration: It defines the way how two or more devices can be connected physically.
Data Transmission: It defines the transmission mode whether it is simplex, half-duplex or full-
duplex mode between the two devices on the network.
Topology: It defines the way how network devices are arranged.
Signals: It determines the type of the signal used for transmitting the information.
Transport Layer
The Transport layer (Layer 4) ensures that messages are delivered in the order in which they
are sent and that there is no duplication of data.
The main responsibility of the transport layer is to transfer the data completely.
It receives the data from the upper layer and converts them into smaller units known as segments.
This layer can be termed as an end-to-end layer as it provides a point-to-point connection
between source and destination to deliver the data reliably.
Presentation Layer
A Presentation layer is mainly concerned with the syntax and semantics of the information exchanged
between the two systems.
It acts as a data translator for a network.
This layer is a part of the operating system that converts the data from one presentation format to
another format.
The Presentation layer is also known as the syntax layer.
Functions of Presentation layer:
Translation: The processes in two systems exchange the information in the form of character strings,
numbers and so on.
Encryption: Encryption is needed to maintain privacy. Encryption is the process of converting the
sender-transmitted information into another form and sending the resulting message over the network.
Compression: Data compression is a process of compressing the data, i.e., it reduces the number of
bits to be transmitted. Data compression is very important in multimedia such as text, audio, video.
Application Layer
The application layer serves as a window for users and application processes to access network
services.
It handles issues such as network transparency, resource allocation, etc.
An application layer is not an application, but it performs the application layer functions.
This layer provides the network services to the end-users.
Functions of Application layer:
File transfer, access, and management (FTAM): An application layer allows a user to access
the files in a remote computer, to retrieve the files from a computer and to manage the files in a
remote computer.
Mail services: An application layer provides the facility for email forwarding and storage.
Directory services: The application layer provides access to distributed database sources and to
global information about various objects and services.
The Physical Layer
Theoretical basis for data communication
Information is transmitted on wires by varying a physical property such as voltage or
current.
A single-valued function of time, f(t), can represent the value of the voltage or current. Let us
analyze the behavior of such a signal mathematically.
This analysis covers the following topics:
1] Fourier Analysis
2] Bandwidth-Limited Signals
3] The Maximum Data Rate of a Channel
Fourier Analysis
In the early 19th century, the French mathematician Jean-Baptiste Fourier proved that any
reasonably behaved periodic function, g(t) with period T, can be constructed as the sum of
a (possibly infinite) number of sines and cosines:

g(t) = (1/2)c + Σn an sin(2πnft) + Σn bn cos(2πnft)        …(1)

where the sums run over n = 1, 2, 3, …, f = 1/T is the fundamental frequency,
an and bn are the sine and cosine amplitudes of the nth harmonics (terms),
and c is a constant.
Such a series representation is called a Fourier series.
From the Fourier series, the function can be reconstructed.
That is, if the period, T, is known and the amplitudes are given, the original function of
time can be found by performing the sums of Eq. (1).
The an amplitudes can be computed for any given g(t) by multiplying both sides of Eq. (1)
by sin(2πkft) and then integrating from 0 to T; the bn amplitudes and c are found similarly.
The root-mean-square amplitudes, √(an² + bn²), for the first few terms are shown on the
right-hand side of Fig. 2-1(a).
These values are of interest because their squares are proportional to the energy transmitted
at the corresponding frequency.
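The coefficients of Eq. (1) can be estimated numerically. The sketch below approximates the integrals by crude rectangular sums and prints the root-mean-square amplitude of the first few harmonics; the 8-bit pattern and the number of integration steps are arbitrary choices for illustration.

```python
import math

# Numerical sketch of Eq. (1): estimate the Fourier coefficients an, bn of
# one period of a square-wave bit pattern by rectangular integration.

bits = [0, 1, 1, 0, 0, 0, 1, 0]   # one period of the signal
T = 1.0                           # period length
f = 1.0 / T                       # fundamental frequency
N = 8000                          # integration steps
dt = T / N

def g(t):
    """Square-wave value of the bit pattern at time t."""
    return bits[int(t / T * len(bits)) % len(bits)]

def coeffs(n):
    """Approximate an = (2/T)*integral(g*sin), bn = (2/T)*integral(g*cos)."""
    a = sum(g(k * dt) * math.sin(2 * math.pi * n * f * k * dt) for k in range(N)) * 2 * dt / T
    b = sum(g(k * dt) * math.cos(2 * math.pi * n * f * k * dt) for k in range(N)) * 2 * dt / T
    return a, b

for n in range(1, 5):
    a, b = coeffs(n)
    print(f"harmonic {n}: rms amplitude = {math.sqrt(a * a + b * b):.3f}")
```

The squared rms values printed here are proportional to the energy carried at each harmonic frequency, as described above.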
Bandwidth-Limited Signals
Signals that run from 0 up to a maximum frequency are called baseband signals.
Signals that are shifted to occupy a higher range of frequencies, as is the case for all wireless
transmissions, are called passband signals. No transmission facility can transmit signals
without losing some power in the process.
If all the Fourier components were equally diminished, the resulting signal would be reduced in
amplitude but not distorted [i.e., it would have the same nice squared-off shape as Fig. 2-1(a)].
Unfortunately, all transmission facilities diminish different Fourier components by different
amounts, thus introducing distortion.
Usually, for a wire, the amplitudes are transmitted mostly undiminished from 0 up to some
frequency fc [measured in cycles/sec or Hertz (Hz)], with all frequencies above this cutoff
frequency attenuated.
The width of the frequency range transmitted without being strongly attenuated is called the
bandwidth.
In practice, the cutoff is not really sharp, so often the quoted bandwidth is from 0 to the
frequency at which the received power has fallen by half.
Now let us consider how the signal of Fig. 2-1(a) would look if the bandwidth were so low
that only the lowest frequencies were transmitted [i.e., if the function were being
approximated by the first few terms of Eq. (1)].
Figure 2-1(b) shows the signal that results from a channel that allows only the first
harmonic (the fundamental, f) to pass through.
Similarly, Fig. 2-1(c)–(e) show the spectra and reconstructed functions for higher-
bandwidth channels.
For digital transmission, the goal is to receive a signal with just enough fidelity to
reconstruct the sequence of bits that was sent.
We can already do this easily in Fig. 2-1(e), so it is wasteful to use more harmonics to
receive a more accurate replica.
Given a bit rate of b bits/sec, the time required to send the 8 bits in our example 1 bit at a
time is 8/b sec, so the frequency of the first harmonic of this signal is b /8 Hz.
Limiting the bandwidth limits the data rate, even for perfect channels. However, coding
schemes that make use of several voltage levels do exist and can achieve higher data rates.
The Maximum Data Rate of a Channel
As early as 1924, an AT&T engineer, Henry Nyquist, realized that even a perfect channel
has a finite transmission capacity.
He derived an equation expressing the maximum data rate for a finite-bandwidth noiseless
channel.
In 1948, Claude Shannon carried Nyquist’s work further and extended it to the case of a
channel subject to random (that is, thermodynamic) noise (Shannon, 1948).
Nyquist proved that if an arbitrary signal has been run through a low-pass filter of
bandwidth B, the filtered signal can be completely reconstructed by making only 2B
(exact) samples per second.
Sampling the line faster than 2B times per second is pointless because the higher-frequency
components that such sampling could recover have already been filtered out.
If the signal consists of V discrete levels, Nyquist's theorem states:
maximum data rate = 2B log2 V bits/sec
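Both limits are easy to evaluate. The sketch below applies Nyquist's formula from the text and, for comparison, Shannon's limit for a noisy channel, C = B log2(1 + S/N); the 3000 Hz bandwidth and the SNR of 1000 (30 dB) are illustrative telephone-line values, not taken from the text.

```python
import math

# Nyquist's limit for a noiseless channel with V discrete levels, and
# Shannon's limit for a noisy channel (SNR given as a linear ratio, not dB).

def nyquist_rate(bandwidth_hz, levels):
    """Maximum data rate of a noiseless channel: 2B log2(V) bits/sec."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of a noisy channel: B log2(1 + S/N) bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

print(nyquist_rate(3000, 2))                 # 6000.0 (binary signalling, V = 2)
print(round(shannon_capacity(3000, 1000)))   # 29902
```

Note how more voltage levels raise the Nyquist limit, while for a noisy channel no amount of clever coding can beat the Shannon figure.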
Transmission media
The transmission media is nothing but the physical media over which communication takes
place in computer networks.
Magnetic Media
One of the most convenient ways to transfer data from one computer to another, even before
the birth of networking, was to save it on a storage medium and transfer it physically from
one station to another.
Though it may seem old-fashioned in today's world of high-speed internet, when the amount
of data is huge, magnetic media still come into play.
For example, a bank has to handle and transfer a huge amount of customer data, and it stores
a backup at some geographically distant place for security and to protect it from
calamities. Transferring such a huge backup over the internet is not feasible: the WAN links
may not support such high speeds, and even if they do, the cost is too high to afford.
In these cases, data backup is stored onto magnetic tapes or magnetic discs, and then
shifted physically at remote places.
Twisted Pair Cable
A twisted pair cable is made of two plastic insulated copper wires twisted together to form
a single media. Out of these two wires, only one carries actual signal and another is used
for ground reference. The twists between wires are helpful in reducing noise (electro-
magnetic interference) and crosstalk.
There are two types of twisted pair cables:
Shielded Twisted Pair (STP) Cable
Unshielded Twisted Pair (UTP) Cable
STP cable comes with the twisted wire pair covered in a metal foil, which makes it more
resistant to noise and crosstalk.
UTP has seven categories, each suitable for specific use. In computer networks, Cat-5, Cat-
5e, and Cat-6 cables are mostly used. UTP cables are connected by RJ45 connectors.
Coaxial Cable
Coaxial cable has two wires of copper. The core wire lies in the center and it is made of
solid conductor.
The core is enclosed in an insulating sheath. The second wire is wrapped around over the
sheath and that too in turn encased by insulator sheath.
This all is covered by plastic cover.
Because of its structure, coaxial cable can carry higher-frequency signals than twisted
pair cable.
The wrapped structure provides a good shield against noise and crosstalk. Coaxial
cables provide high bandwidth rates of up to 450 Mbps.
There are three categories of coaxial cables namely, RG-59 (Cable TV), RG-58 (Thin
Ethernet), and RG-11 (Thick Ethernet). RG stands for Radio Government.
Cables are connected using BNC connector and BNC-T. BNC terminator is used to
terminate the wire at the far ends.
Power Lines
Power Line communication (PLC) is Layer-1 (Physical Layer) technology which uses
power cables to transmit data signals. In PLC, modulated data is sent over the cables. The
receiver on the other end de-modulates and interprets the data.
Because power lines are widely deployed, PLC allows all powered devices to be controlled
and monitored. PLC works in half-duplex.
There are two types of PLC:
Narrow band PLC
Broad band PLC
Narrowband PLC provides lower data rates, up to hundreds of kbps, as it works at lower
frequencies (3–5000 kHz). It can be spread over several kilometers.
Broadband PLC provides higher data rates, up to hundreds of Mbps, and works at higher
frequencies (1.8–250 MHz). It cannot extend as far as narrowband PLC.
Fiber Optics
Fiber optics works on the properties of light.
When a light ray hits the core boundary at the critical angle, it refracts at 90 degrees;
at greater angles it is totally internally reflected.
This property is used in fiber optic cable.
The core of fiber optic cable is made of high quality glass or plastic.
Light is emitted at one end, travels through the core, and at the other end a light
detector detects the light stream and converts it to electrical data.
Fiber optics provides the highest transmission speeds.
It comes in two modes, one is single mode fiber and second is multimode fiber.
Single mode fiber can carry a single ray of light whereas multimode is capable of carrying
multiple beams of light.
Fiber Optic also comes in unidirectional and bidirectional capabilities.
To connect and access fiber optic cable, special types of connectors are used:
Subscriber Channel (SC), Straight Tip (ST), or MT-RJ.
Analog Transmission
To send digital data over an analog medium, it needs to be converted into an analog
signal. There are two cases, according to the data formatting:
Bandpass: The filters are used to filter and pass frequencies of
interest. A bandpass is a band of frequencies which can pass the
filter.
Low-pass: A low-pass filter passes low-frequency signals.
When digital data is converted into a bandpass analog signal, it is
called digital-to-analog conversion. When low-pass analog signal is
converted into bandpass analog signal, it is called analog-to-analog
conversion.
Digital-to-Analog Conversion
When data from one computer is sent to another via an analog carrier, it is first converted into
analog signals, which are modified to reflect the digital data.
An analog signal is characterized by its amplitude, frequency, and phase. Accordingly, there are
three kinds of digital-to-analog conversion: Amplitude Shift Keying (ASK), Frequency Shift
Keying (FSK), and Phase Shift Keying (PSK).
Amplitude Shift Keying
In this conversion technique, the amplitude of the analog carrier signal is modified to reflect
the binary data. When the data bit is 1, the carrier amplitude is held; when it is 0, the
amplitude is set to zero. Both frequency and phase remain the same as in the original carrier signal.
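A hedged sketch of ASK in Python: the carrier is passed through unchanged for a 1-bit and suppressed to zero for a 0-bit. The carrier frequency and samples-per-bit values are arbitrary choices.

```python
import math

# Amplitude Shift Keying sketch: carrier amplitude on/off per data bit,
# frequency and phase left untouched.

def ask_modulate(bits, carrier_freq=4, samples_per_bit=32):
    signal = []
    for i, bit in enumerate(bits):
        for k in range(samples_per_bit):
            t = (i * samples_per_bit + k) / samples_per_bit  # time in bit periods
            carrier = math.sin(2 * math.pi * carrier_freq * t)
            signal.append(carrier if bit == 1 else 0.0)      # amplitude held or zeroed
    return signal

wave = ask_modulate([1, 0, 1])
# Samples during the 0-bit are exactly zero; during a 1-bit they follow the carrier.
print(all(s == 0.0 for s in wave[32:64]))  # True
```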
Phase Shift Keying
When a new binary symbol is encountered, the phase of the carrier signal is altered. The
amplitude and frequency of the original carrier signal are kept intact.
Analog-to-Analog Conversion
Analog signals are modified to represent analog data. This conversion is also known as
Analog Modulation.
Analog modulation is required when bandpass is used. Analog to analog conversion can be
done in three ways:
Amplitude Modulation
In this modulation, the amplitude of the carrier
signal is modified to reflect the analog data.
Amplitude modulation is implemented by means of a multiplier.
The amplitude of modulating signal (analog data)
is multiplied by the amplitude of carrier
frequency, which then reflects analog data.
The frequency and phase of carrier signal remain
unchanged.
Frequency Modulation
In this modulation technique, the frequency of the
carrier signal is modified to reflect the change in
the voltage levels of the modulating signal (analog
data).
The amplitude and phase of the carrier signal are
not altered.
Phase Modulation
In this modulation technique, the phase of the carrier
signal is modulated to reflect the change
in voltage (amplitude) of the analog data signal.
Phase modulation is practically similar to
frequency modulation, but in phase modulation the
frequency of the carrier signal is not increased directly.
Instead, the carrier's cycles are made denser or
sparser to reflect the voltage change in the
amplitude of the modulating signal.
Digital Transmission
Data or information can be stored in two ways, analog and digital.
For a computer to use the data, it must be in discrete digital form.
Similar to data, signals can also be in analog and digital form.
To transmit data digitally, it needs to be first converted to digital
form.
Digital-to-Digital Conversion
This section explains how to convert digital data into digital signals.
It can be done in two ways, line coding and block coding.
For all communications, line coding is necessary whereas block
coding is optional.
Line Coding
The process of converting digital data into a digital signal is called
line coding. Digital data is found in binary format; it is
represented (stored) internally as a series of 1s and 0s.
A digital signal is a discrete signal that represents the digital data.
There are three types of line coding schemes available:
1] Uni-polar Encoding
Unipolar encoding schemes use a single voltage level to represent
data: to represent binary 1, a high voltage is transmitted,
and to represent 0, no voltage is transmitted. It is also called
Unipolar Non-Return-to-Zero, because there is no rest condition;
the signal always represents either 1 or 0.
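As a quick Python illustration (the 5-volt level is an arbitrary assumption), unipolar NRZ maps each bit directly to a voltage:

```python
# Unipolar NRZ sketch: one voltage level V for binary 1, zero volts for 0.

def unipolar_nrz(bits, v=5):
    return [v if b == 1 else 0 for b in bits]

print(unipolar_nrz([1, 0, 1, 1, 0]))  # [5, 0, 5, 5, 0]
```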
2] Polar Encoding
Polar encoding schemes use multiple voltage levels to represent
binary values. Polar encoding is available in four types:
1) Polar Non-Return to Zero (Polar NRZ)
It uses two different voltage levels to represent binary values.
Generally, a positive voltage represents 1 and a negative voltage
represents 0. It is also NRZ because there is no rest condition.
NRZ scheme has two variants: NRZ-L and NRZ-I.
NRZ-L changes the voltage level when a different bit is encountered,
whereas NRZ-I changes the voltage when a 1 is encountered.
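The difference between the two variants can be seen in a short Python sketch; the ±1 levels and the initial NRZ-I level are arbitrary assumptions.

```python
# Sketch of the two polar NRZ variants described above.

def nrz_l(bits):
    """NRZ-L: the level itself encodes the bit (+1 for 1, -1 for 0)."""
    return [1 if b == 1 else -1 for b in bits]

def nrz_i(bits, start=-1):
    """NRZ-I: invert the current level on each 1-bit; hold it on a 0-bit."""
    level, out = start, []
    for b in bits:
        if b == 1:
            level = -level      # a 1 causes a transition
        out.append(level)
    return out

print(nrz_l([1, 0, 1, 1]))  # [1, -1, 1, 1]
print(nrz_i([1, 0, 1, 1]))  # [1, 1, -1, 1]
```

Note that in NRZ-I the same bit sequence can produce different waveforms depending on the starting level; only the transitions matter.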
2) Return to Zero (RZ)
The problem with NRZ is that the receiver cannot tell where one bit
ends and the next begins when the sender's and receiver's clocks are
not synchronized.
RZ uses three voltage levels: a positive voltage to represent 1, a
negative voltage to represent 0, and zero voltage for the rest
condition. The signal changes during bits, not between bits.
3) Manchester
This encoding scheme is a combination of RZ and NRZ-L. The bit time
is divided into two halves: the signal transitions in the middle of
each bit and changes phase when a different bit is encountered.
4) Differential Manchester
This encoding scheme is a combination of RZ and NRZ-I. It also
transitions at the middle of the bit, but changes phase only when a 1
is encountered.
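Both schemes can be sketched in Python as sequences of half-bit levels. The ±1 levels and the transition conventions chosen here are assumptions; real standards (e.g. IEEE 802.3) fix particular conventions.

```python
# Manchester: mid-bit transition in every bit; the transition direction
# encodes the bit (assumed here: 1 -> low-to-high, 0 -> high-to-low).

def manchester(bits):
    out = []
    for b in bits:
        out.extend((-1, 1) if b == 1 else (1, -1))
    return out

# Differential Manchester: mandatory mid-bit transition; the phase is
# inverted when a 1 is encountered (per the NRZ-I analogy above).

def differential_manchester(bits, level=1):
    out = []
    for b in bits:
        if b == 1:
            level = -level            # phase change on a 1
        out.extend((level, -level))   # mandatory mid-bit transition
        level = -level                # carry the post-transition level forward
    return out

print(manchester([1, 0, 1]))               # [-1, 1, 1, -1, -1, 1]
print(differential_manchester([1, 0, 1]))  # [-1, 1, 1, -1, 1, -1]
```

The guaranteed mid-bit transition is what gives both schemes their self-clocking property, solving the NRZ synchronization problem described earlier.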
3] Bipolar Encoding
Bipolar encoding uses three voltage levels: positive, negative, and
zero. Zero voltage represents binary 0, and bit 1 is represented by
alternating positive and negative voltages.
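A minimal Python sketch of bipolar (AMI) encoding, with ±1 as the alternating levels:

```python
# Bipolar (AMI) sketch: 0 -> zero volts; each successive 1 alternates polarity.

def bipolar_ami(bits):
    out, last = [], -1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            last = -last       # alternate polarity for each 1
            out.append(last)
    return out

print(bipolar_ami([1, 0, 1, 1, 0, 1]))  # [1, 0, -1, 1, 0, -1]
```

The alternation keeps the average (DC) level of the line at zero, which is one practical reason this scheme is used.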
Block Coding
To ensure the accuracy of the received data frame, redundant bits are
added. For example, in even parity, one parity bit is added to make
the count of 1s in the frame even. The original number of bits is
thereby increased. This technique is called block coding.
Block coding is represented by the slash notation mB/nB, meaning an
m-bit block is substituted with an n-bit block, where n > m. Block
coding involves three steps:
Division,
Substitution
Combination.
After block coding is done, it is line coded for transmission.
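The even-parity example above can be sketched in a few lines of Python (one parity bit per block; the 4-bit block size is arbitrary):

```python
# Even-parity sketch: append one redundant bit so the count of 1s is even.

def add_even_parity(block):
    parity = sum(block) % 2          # 1 if the count of 1s is odd
    return block + [parity]

def check_even_parity(block):
    return sum(block) % 2 == 0

coded = add_even_parity([1, 0, 1, 1])
print(coded)                       # [1, 0, 1, 1, 1]
print(check_even_parity(coded))    # True
coded[0] ^= 1                      # flip one bit: a single-bit error...
print(check_even_parity(coded))    # False  ...is detected
```

Parity detects any odd number of bit errors but misses even-numbered ones, which is why richer mB/nB substitutions are used in practice.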
Analog-to-Digital Conversion
Sampling
The analog signal is sampled at every interval T.
Most important factor in sampling is the rate at which analog signal
is sampled.
According to the Nyquist theorem, the sampling rate must be at least
twice the highest frequency of the signal.
Quantization
Sampling yields the discrete form of a continuous analog signal.
Every discrete sample shows the amplitude of the analog signal at
that instant.
Quantization is done between the maximum amplitude value and the
minimum amplitude value; it approximates each instantaneous analog
value.
Encoding
In encoding, each approximated value is then converted into binary
format.
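The three steps (sampling, quantization, encoding) can be chained in one Python sketch. The 1 Hz sine, 8 Hz sampling rate, and 3-bit depth are illustrative assumptions.

```python
import math

# PCM sketch: sample a signal, quantize each sample to one of 2^bits
# levels between the minimum and maximum amplitude, then encode each
# level as a binary string.

def pcm(signal_fn, duration, sample_rate, bits):
    levels = 2 ** bits
    samples = [signal_fn(k / sample_rate) for k in range(int(duration * sample_rate))]
    lo, hi = min(samples), max(samples)
    step = (hi - lo) / (levels - 1)
    quantized = [round((s - lo) / step) for s in samples]   # nearest level
    return [format(q, f"0{bits}b") for q in quantized]      # binary codes

# A 1 Hz sine sampled at 8 Hz (at least 2x its highest frequency, per Nyquist):
codes = pcm(lambda t: math.sin(2 * math.pi * t), 1.0, 8, 3)
print(codes)
```

Each printed code is one quantized sample; more bits per sample shrink the quantization error at the cost of a higher bit rate.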
Transmission Modes
Parallel Transmission
The binary bits are organized into groups of fixed length.
Sender and receiver are connected in parallel by an equal number of
data lines, and both distinguish between high-order and low-order
data lines.
The sender sends all the bits at once on all lines.
Because the number of data lines equals the number of bits in a group
(data frame), a complete group of bits is sent in one go.
The advantage of parallel transmission is high speed; the
disadvantage is the cost of wires, which equals the number of bits
sent in parallel.
Serial Transmission
In serial transmission, bits are sent one after another, in sequence.
Serial transmission requires only one communication channel.
Serial transmission can be either asynchronous or synchronous.
Transmission and Switching
Circuit Switching
When two nodes communicate with each other over a dedicated communication path,
it is called circuit switching.
A pre-specified route is needed, along which the data travels; no other data is
permitted on it.
In circuit switching, the circuit must be established before the data transfer can take
place.
Circuits can be permanent or temporary. Applications which use circuit switching may
have to go through three phases:
Establish a circuit
Transfer the data
Disconnect the circuit
Circuit switching was designed for voice applications.
Telephone is the best suitable example of circuit switching.
Before a user can make a call, a dedicated path between caller and callee is established
over the network.
Transmission and Switching
Message Switching
This technique falls somewhere between circuit switching and packet switching.
In message switching, the whole message is treated as a single data unit and is
switched/transferred in its entirety.
A switch working on message switching, first receives the whole message and buffers it until there are
resources available to transfer it to the next hop.
If the next hop does not have enough resources to accommodate a large message, the message
is stored and the switch waits.
This technique was considered a substitute for circuit switching, in which the whole path is
blocked for two entities only.
Message switching has been replaced by packet switching. Message switching has the following drawbacks:
Every switch in the transit path needs enough storage to accommodate the entire message.
Because of the store-and-forward technique and the waits until resources become available,
message switching is very slow.
Message switching was never a solution for streaming media and real-time applications.
Packet Switching
The shortcomings of message switching gave birth to the idea of packet switching.
The entire message is broken down into smaller chunks called packets.
The switching information is added in the header of each packet, and each packet is transmitted
independently.
It is easier for intermediate networking devices to store small packets, and they do not consume
many resources either on the carrier path or in the internal memory of switches.
Packet switching enhances line efficiency as packets from multiple applications can be multiplexed
over the carrier.
The internet uses packet switching technique.
Packet switching enables the user to differentiate data streams based on priorities.
Packets are stored and forwarded according to their priority to provide quality of service.
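The splitting and reassembly described above can be sketched in Python. The header fields and the 5-byte payload size are arbitrary illustrations, not a real protocol format.

```python
# Packet switching sketch: break a message into fixed-size chunks, prepend a
# small header (sequence number and total count) so each packet can travel
# independently and be reassembled at the destination.

def packetize(message, payload_size):
    chunks = [message[i:i + payload_size] for i in range(0, len(message), payload_size)]
    return [{"seq": i, "total": len(chunks), "payload": c} for i, c in enumerate(chunks)]

def reassemble(packets):
    # Packets may arrive out of order over different paths; sort by sequence.
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("the quick brown fox", 5)
pkts.reverse()                      # simulate out-of-order arrival
print(reassemble(pkts))             # the quick brown fox
```

Because each packet is self-describing, the switches never need to buffer the whole message, which is exactly the message-switching drawback being fixed.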
ISDN
Integrated Services Digital Network (ISDN) is a general-purpose digital network capable
of fully supporting a wide range of services such as voice, data, text, and image through
a very small set of standard multipurpose user-network interfaces.
ISDN supports two types of switching operations: circuit-switched and packet-switched.
Circuit switching is provided at the nominal bit rate of 64 kbps, whereas packet
switching is provided for a wide range of bit rates up to 64 kbps.
Types of Channels:
ISDN generally contains three types of channels: the B-channel (Bearer channel), the
D-channel (Data channel), and the H-channel (Hybrid channel).
Transmission and Switching
ISDN
B-Channel :
The B-channel usually has a 64 kbps data rate. This channel is used for voice, data, or
other low-data-rate information. For higher data rates, two B-channels can be combined
to give a total of 128 kbps.
D-Channel :
The D-channel usually has a 16 to 64 kbps data rate. This channel is used for signaling or
packet-switched data. The D-channel does not normally carry user data; it carries the
control signals, such as call establishment, ringing, and call interrupt, using
common-channel (out-of-band) signaling. Using this channel, subscribers can also secure
the B-channel connection. When no signaling is in progress, it can carry information such
as videotext, teletext, and emergency-service alarms.
H-Channel :
The H-channel generally has a 384 kbps, 1536 kbps, or 1920 kbps data rate. This channel is
used for video, video conferencing, high-speed data/audio, etc.
Data Link Layer
Design issues
The data link layer uses the services of the physical layer to send and receive bits over
communication channels. It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
To accomplish these goals, the data link layer takes the packets it gets from the network layer
and encapsulates them into frames for transmission.
Each frame contains a frame header, a payload field for holding the packet, and a frame trailer.
1] Services Provided to the Network Layer
The function of the data link layer is to provide services to the network layer.
The principal service is transferring data from the network layer on the source machine to the
network layer on the destination machine.
On the source machine is an entity, call it a process, in the network layer that hands some bits to the
data link layer for transmission to the destination.
The job of the data link layer is to transmit the bits to the destination machine so they can be handed
over to the network layer there.
The data link layer can be designed to offer various services.
The actual services that are offered vary from protocol to protocol. Three reasonable possibilities that
we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
1. Unacknowledged connectionless service.
Unacknowledged connectionless service consists of having the source machine send independent
frames to the destination machine without having the destination machine acknowledge them.
Ethernet is a good example of a data link layer that provides this class of service.
No logical connection is established beforehand or released afterward. If a frame is lost due to
noise on the line, no attempt is made to detect the loss or recover from it in the data link layer.
This class of service is appropriate when the error rate is very low, so recovery is left to higher
layers.
It is also appropriate for real-time traffic, such as voice, in which late data are worse than bad
data.
2. Acknowledged connectionless service.
When this service is offered, there are still no logical connections used, but each frame sent is
individually acknowledged.
In this way, the sender knows whether a frame has arrived correctly or been lost. If it has not
arrived within a specified time interval, it can be sent again.
This service is useful over unreliable channels, such as wireless systems. 802.11 (WiFi) is a good
example of this class of service.
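A hypothetical Python sketch of this service: the sender retransmits a frame until an acknowledgement arrives. The channel is simulated by a scripted drop list, an illustration only and not a real protocol.

```python
# Acknowledged connectionless service sketch: each frame is individually
# acknowledged; a missing ACK triggers retransmission.

def make_lossy_channel(drop_plan):
    """Return a send() that drops the frame while drop_plan says so."""
    plan = iter(drop_plan)
    def send(frame):
        dropped = next(plan, False)
        return None if dropped else ("ACK", frame)
    return send

def send_with_retry(frame, channel_send, max_tries=10):
    for attempt in range(1, max_tries + 1):
        if channel_send(frame) is not None:   # ACK received
            return attempt                    # transmissions that were needed
    raise TimeoutError("no ACK after %d tries" % max_tries)

# The first two transmissions are lost; the third gets through.
send = make_lossy_channel([True, True, False])
print(send_with_retry("frame-1", send))  # 3
```

In a real link (e.g. 802.11) the "timeout" is a clock, not a returned None, but the retry structure is the same.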
3. Acknowledged connection-oriented service.
With this service, the source and destination machines establish a connection before any data are transferred.
Each frame sent over the connection is numbered, and the data link layer guarantees that each frame sent is indeed
received.
Furthermore, it guarantees that each frame is received exactly once and that all frames are received in the right
order.
Connection-oriented service thus provides the network layer processes with the equivalent of a reliable bit stream.
It is appropriate over long, unreliable links such as a satellite channel or a long-distance telephone circuit.
If acknowledged connectionless service were used, it is conceivable that lost acknowledgements could cause a
frame to be sent and received several times, wasting bandwidth.
When connection-oriented service is used, transfers go through three distinct phases.
In the first phase, the connection is established by having both sides initialize variables and counters needed to
keep track of which frames have been received and which ones have not.
In the second phase, one or more frames are actually transmitted.
In the third and final phase, the connection is released, freeing up the variables, buffers, and other resources used
to maintain the connection.
2] Framing
The usual approach is for the data link layer to break up the bit stream into discrete frames,
compute a short token called a checksum for each frame, and include the checksum in the frame
when it is transmitted.
When a frame arrives at the destination, the checksum is recomputed.
Breaking up the bit stream into frames is more difficult than it at first appears. A good design
must make it easy for a receiver to find the start of new frames while using little of the channel
bandwidth. We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
1. Byte count.
The first framing method uses a field in the header to specify the number of bytes in the frame.
When the data link layer at the destination sees the byte count, it knows how many bytes follow
and hence where the end of the frame is.
This technique is shown in Fig.(a) for four small example frames of sizes 5, 5, 8, and 8 bytes,
respectively.
The trouble with this algorithm is that the count can be garbled by a transmission error.
For example, if the byte count of 5 in the second frame of Fig. (b) becomes a 7 due to a single bit
flip, the destination will get out of synchronization.
It will then be unable to locate the correct start of the next frame.
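The desynchronization failure can be seen in a few lines of code. The sketch below is illustrative only: the one-byte count field (which covers itself plus the payload) is an assumption matching the 5-byte frames in the figure, not any real protocol format.

```python
def frame_with_count(payloads):
    """Build a stream where each frame begins with a one-byte length count."""
    stream = bytearray()
    for p in payloads:
        stream.append(len(p) + 1)  # count covers itself plus the payload
        stream.extend(p)
    return bytes(stream)

def deframe(stream):
    """Recover payloads by trusting each count byte."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count  # a corrupted count shifts every later frame boundary
    return frames

payloads = [b"abcd", b"efgh"]
stream = bytearray(frame_with_count(payloads))
print(deframe(bytes(stream)))  # [b'abcd', b'efgh']

stream[0] = 7                  # a single corrupted count byte...
print(deframe(bytes(stream)))  # ...and the receiver loses every later boundary
```

Once the first count is wrong, the receiver has no way to resynchronize; every subsequent "frame" it extracts is garbage.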
2. Flag bytes with byte stuffing.
The second framing method gets around the problem of resynchronization after an error by having
each frame start and end with special bytes. Often the same byte, called a flag byte, is used as both
the starting and ending delimiter. This byte is shown in Fig.(a) as FLAG.
Two consecutive flag bytes indicate the end of one frame and the start of the next. Thus, if the
receiver ever loses synchronization it can just search for two flag bytes to find the end of the
current frame and the start of the next frame.
However, there is still a problem we have to solve. It may happen that the flag byte occurs in the
data, especially when binary data such as photographs or songs are being transmitted.
This situation would interfere with the framing. One way to solve this problem is to have the
sender’s data link layer insert a special escape byte (ESC) just before each ‘‘accidental’’ flag byte
in the data.
Thus, a framing flag byte can be distinguished from one in the data by the absence or presence of
an escape byte before it.
The data link layer on the receiving end removes the escape bytes before giving the data to the
network layer. This technique is called byte stuffing.
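A minimal sketch of byte stuffing follows. The FLAG/ESC values 0x7E/0x7D are borrowed from HDLC-style framing purely for illustration, and the sketch simply prefixes ESC as the text describes (real protocols such as PPP additionally transform the escaped byte, which is omitted here).

```python
FLAG, ESC = 0x7E, 0x7D  # assumed delimiter values, for illustration only

def byte_stuff(data: bytes) -> bytes:
    """Delimit a frame with FLAG bytes, escaping accidental FLAG/ESC in the data."""
    out = bytearray([FLAG])
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)   # mark the next byte as literal data
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiting flags and remove the escape bytes."""
    body = frame[1:-1]
    out, literal = bytearray(), False
    for b in body:
        if not literal and b == ESC:
            literal = True    # next byte is data, not a delimiter
            continue
        out.append(b)
        literal = False
    return bytes(out)

data = bytes([0x41, FLAG, ESC, 0x42])  # payload containing FLAG and ESC
print(byte_unstuff(byte_stuff(data)) == data)  # True: transparent to the payload
```

The round trip shows the transparency property: whatever bytes the network layer hands down come back out unchanged at the receiver.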
3. Flag bits with bit stuffing.
The third method of delimiting the bit stream gets around a disadvantage of byte stuffing, which is that it is tied to
the use of 8-bit bytes.
Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal. This pattern is a flag byte.
Whenever the sender’s data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into
the outgoing bit stream.
This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream
before a flag byte in the data.
It also ensures a minimum density of transitions that help the physical layer maintain synchronization. USB
(Universal Serial Bus) uses bit stuffing for this reason.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically destuffs (i.e., deletes)
the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing.
If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the receiver’s
memory as 01111110.
Figure gives an example of bit stuffing. With bit stuffing, the boundary between two frames can be unambiguously
recognized by the flag pattern.
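As a sketch over bit strings, stuffing and destuffing reproduce the example above: the in-data flag 01111110 is transmitted as 011111010 and restored on receipt. (Representing bits as a Python string is an illustration device, not how hardware does it.)

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Frame the bits, inserting a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    """Strip the flags and delete the 0 that follows five consecutive 1s."""
    body = frame[len(FLAG):-len(FLAG)]
    out, run, i = [], 0, 0
    while i < len(body):
        b = body[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

frame = bit_stuff("01111110")
print(frame)  # FLAG + "011111010" + FLAG, exactly as in the text
```

Because no run of six 1s can appear between the flags, the flag pattern is unambiguous on the wire.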
4. Physical layer coding violations.
The last method of framing is to use a shortcut from the physical layer.
The encoding of bits as signals often includes redundancy to help the receiver.
This redundancy means that some signals will not occur in regular data.
For example, in the 4B/5B line code 4 data bits are mapped to 5 signal bits to ensure sufficient
bit transitions.
We can use some reserved signals to indicate the start and end of frames.
In effect, we are using ‘‘coding violations’’ to delimit frames. The beauty of this scheme is that,
because they are reserved signals, it is easy to find the start and end of frames and there is no
need to stuff the data.
Many data link protocols use a combination of these methods for safety.
A common pattern used for Ethernet and 802.11 is to have a frame begin with a well-defined
pattern called a preamble.
Error Control
When a data-frame is transmitted, there is a probability that the data-frame may be lost in
transit or received corrupted.
In both cases, the receiver does not receive the correct data-frame, and the sender does not
know anything about the loss.
In such cases, both sender and receiver are equipped with protocols which help
them detect transit errors such as the loss of a data-frame.
Then, either the sender retransmits the data-frame or the receiver requests that
the previous data-frame be resent.
Requirements for an error control mechanism:
Error detection - The sender and receiver, both or either, must be able to ascertain that there is some
error in transit.
Positive ACK - When the receiver receives a correct frame, it should acknowledge it.
Negative ACK - When the receiver receives a damaged frame or a duplicate frame, it sends a
NACK back to the sender, and the sender must retransmit the correct frame.
Retransmission: The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data-frame does not arrive before the timeout, the
sender retransmits the frame, thinking that the frame or its acknowledgement was lost in transit.
There are three techniques which the data link layer may deploy to control errors by
Automatic Repeat Request (ARQ): Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.
Stop-and-wait ARQ
The following transitions may occur in Stop-and-Wait ARQ:
The sender maintains a timeout counter.
When a frame is sent, the sender starts the timeout counter.
If acknowledgement of frame comes in time, the sender transmits the next frame in queue.
If acknowledgement does not come in time, the sender assumes that either the frame or its
acknowledgement is lost in transit. Sender retransmits the frame and starts the timeout counter.
If a negative acknowledgement is received, the sender retransmits the frame.
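These transitions can be sketched as a small simulation. This is illustrative only: loss is modeled by a single drop probability, and one drop stands for either a lost frame or a lost acknowledgement, since the sender reacts the same way to both (timeout, then retransmit).

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=1):
    """Simulate Stop-and-Wait ARQ over a channel that drops frames."""
    rng = random.Random(seed)   # deterministic for the example
    delivered, attempts = [], 0
    for frame in frames:
        while True:                        # resend until the frame is ACKed
            attempts += 1
            if rng.random() >= loss_rate:  # frame and its ACK got through
                delivered.append(frame)
                break
            # otherwise: the timeout fires and the loop retransmits

    return delivered, attempts

frames = ["f0", "f1", "f2", "f3"]
delivered, attempts = stop_and_wait(frames)
print(delivered == frames, attempts >= len(frames))  # True True
```

Every frame eventually gets through, but `attempts` exceeds the frame count whenever the lossy channel forces retransmissions.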
Go-Back-N ARQ
The Stop-and-Wait ARQ mechanism does not utilize the resources at their best:
after sending a frame, the sender sits idle and does nothing until the acknowledgement is received.
In the Go-Back-N ARQ method, both sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving the
acknowledgement of the previous ones.
The receiving-window enables the receiver to receive multiple frames and acknowledge them. The
receiver keeps track of incoming frame’s sequence number.
When the sender sends all the frames in window, it checks up to what sequence number it has received
positive acknowledgement.
If all frames are positively acknowledged, the sender sends next set of frames.
If the sender receives a NACK, or does not receive any ACK for a particular frame, it
retransmits that frame and all the frames sent after it.
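A deterministic sketch of the go-back behavior follows. It is a simplification, not a full protocol: `drops` lists which (frame, attempt) transmissions the channel discards, and for brevity the sender stops its window at the first loss rather than also transmitting the frames the receiver would discard.

```python
def go_back_n(frames, window, drops):
    """Simulate Go-Back-N: on a lost frame, resend it and everything after it.

    `drops` is a set of (frame_index, attempt_number) pairs the channel loses.
    """
    base, attempt = 0, {}
    received, sent_log = [], []
    while base < len(frames):
        upper = min(base + window, len(frames))
        lost_at = None
        for i in range(base, upper):       # send the current window
            attempt[i] = attempt.get(i, 0) + 1
            sent_log.append(frames[i])
            if (i, attempt[i]) in drops:
                lost_at = i                # receiver rejects this and later frames
                break
        if lost_at is None:
            received.extend(frames[base:upper])
            base = upper                   # whole window acknowledged
        else:
            received.extend(frames[base:lost_at])
            base = lost_at                 # go back: resend from the lost frame

    return received, sent_log

frames = list("ABCDEF")
received, log = go_back_n(frames, window=3, drops={(2, 1)})
print(received == frames, len(log))  # True 7
```

Losing frame C on its first attempt costs one extra transmission here; in the real protocol the already-sent frames after C would also be repeated, which is exactly the waste Selective Repeat avoids.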
Selective Repeat ARQ
In Go-back-N ARQ, it is assumed that the receiver does not have any buffer
space for its window size and has to process each frame as it comes.
This enforces the sender to retransmit all the frames which are not
acknowledged.
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers,
buffers the frames in memory and sends a NACK only for the frame that is missing
or damaged.
The sender, in this case, resends only the frame for which the NACK was received.
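The contrast with Go-Back-N can be sketched as follows. This is an idealized illustration: `drops` lists frames lost on their first transmission, retransmissions are assumed to succeed, and the window bookkeeping is reduced to a simple receive buffer.

```python
def selective_repeat(frames, drops):
    """Simulate Selective Repeat: only the lost frames are retransmitted."""
    buffer, sent = {}, []
    # first pass: send every frame once; the receiver buffers what arrives
    for i, f in enumerate(frames):
        sent.append(f)
        if i not in drops:
            buffer[i] = f
    # the receiver NACKs only the missing frames; the sender resends just those
    for i in sorted(set(range(len(frames))) - set(buffer)):
        sent.append(frames[i])
        buffer[i] = frames[i]
    # in-order delivery out of the receive buffer
    received = [buffer[i] for i in range(len(frames))]
    return received, sent

received, sent = selective_repeat(list("ABCDE"), drops={2})
print(received == list("ABCDE"), len(sent))  # True 6
```

Losing one frame out of five costs exactly one extra transmission, whereas Go-Back-N would resend the lost frame and every frame after it.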
Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single
medium, it is required that the sender and receiver should work at the same speed.
That is, the sender sends at a speed at which the receiver can process and accept the data.
What if the speed (hardware/software) of the sender or receiver differs? If the sender is
sending too fast, the receiver may be overloaded (swamped) and data may be lost.
Two types of mechanisms can be deployed to control the flow:
Stop and Wait
This flow control mechanism forces the sender after transmitting a data frame to stop and wait
until the acknowledgement of the data-frame sent is received.
Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of data-frames
after which the acknowledgement should be sent.
As we learnt, the stop-and-wait flow control mechanism wastes resources; this protocol tries to make
use of the underlying resources as much as possible.
Types of Errors
There may be three types of errors:
Single bit error
Multiple bits error
Burst error
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic
Redundancy Check (CRC).
In both cases, a few extra bits are sent along with the actual data to confirm that the bits received
at the other end are the same as they were sent.
If the counter-check at the receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in case of even
parity, or odd in case of odd parity.
The sender while creating a frame counts the number of 1s in it.
For example, if even parity is used and number of 1s is even then one bit with value 0 is added.
This way number of 1s remains even.
If the number of 1s is odd, to make it even a bit with value 1 is added.
The receiver simply counts the number of 1s in a frame. If the count of 1s is even and even parity
is used, the frame is considered to be not-corrupted and is accepted.
If the count of 1s is odd and odd parity is used, the frame is still not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when
more than one bit is erroneous, it is very hard for the receiver to detect the error.
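A sketch of even parity makes both the detection and its limit concrete (the example bit strings are arbitrary illustrations):

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append one parity bit so the total number of 1s is even (or odd)."""
    ones_even = bits.count("1") % 2 == 0
    parity = "0" if ones_even == even else "1"
    return bits + parity

def check_parity(frame: str, even: bool = True) -> bool:
    """Accept the frame only if its 1s count matches the agreed parity."""
    return (frame.count("1") % 2 == 0) == even

frame = add_parity("1011")     # 3 ones -> parity bit 1 -> "10111"
print(check_parity(frame))     # True: intact frame accepted
print(check_parity("10011"))   # False: a single flipped bit is caught
print(check_parity("10001"))   # True: a double flip goes undetected
```

The last line is the weakness described above: any even number of flipped bits leaves the parity unchanged, so the corrupted frame is wrongly accepted.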
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This technique
involves binary division of the data bits being sent.
The divisor is generated using polynomials. The sender performs a division operation on the bits
being sent and calculates the remainder.
Before sending the actual bits, the sender adds the remainder at the end of the actual bits.
Actual data bits plus the remainder is called a codeword. The sender transmits data bits as code
words.
At the other end, the receiver performs division operation on codewords using the same CRC
divisor.
If the remainder contains all zeros, the data bits are accepted; otherwise, it is assumed that
some data corruption occurred in transit.
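The division is modulo-2 (XOR, no borrows). The sketch below uses a small 4-bit divisor and 6-bit data word purely as an illustration; real CRCs use standardized generator polynomials such as CRC-32.

```python
def crc_remainder(data_bits: str, divisor: str) -> str:
    """Modulo-2 long division of data_bits (shifted left by n) by the divisor.

    Returns the n-bit remainder, where n = len(divisor) - 1.
    """
    n = len(divisor) - 1
    row = list(data_bits + "0" * n)  # append n zero bits for the remainder
    for i in range(len(data_bits)):
        if row[i] == "1":            # divisor "fits" here; subtract by XOR
            for j, d in enumerate(divisor):
                row[i + j] = str(int(row[i + j]) ^ int(d))
    return "".join(row[-n:])

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver side: accept only if the recomputed remainder matches."""
    n = len(divisor) - 1
    return crc_remainder(codeword[:-n], divisor) == codeword[-n:]

rem = crc_remainder("100100", "1101")
print(rem)                                  # 001
print(crc_check("100100" + rem, "1101"))    # True: codeword accepted
print(crc_check("000100001", "1101"))       # False: flipped bit detected
```

The sender transmits the codeword `data + remainder`; any error pattern that is not itself a multiple of the divisor polynomial changes the remainder and is detected.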
Error Correction
In the digital world, error correction can be done in two ways:
Backward Error Correction - When the receiver detects an error in the data received, it requests the
sender to retransmit the data unit.
Forward Error Correction - When the receiver detects some error in the data received, it executes an
error-correcting code, which helps it to auto-recover and correct some kinds of errors.
The first one, Backward Error Correction, is simple and can only be used efficiently where
retransmission is not expensive, for example, over fiber optics. But in the case of wireless transmission,
retransmission may cost too much, so in that case Forward Error Correction is used.
To correct the error in data frame, the receiver must know exactly which bit in the frame is corrupted. To
locate the bit in error, redundant bits are used as parity bits for error detection.
For example, if we take ASCII words (7 data bits), we need 8 kinds of information: seven to tell us
which bit is in error, and one more to tell us that there is no error.
For m data bits, r redundant bits are used. r bits can provide 2^r combinations of information. In an m+r bit
codeword, there is a possibility that the r bits themselves may get corrupted. So the r bits used
must be able to indicate all m+r bit locations plus the no-error case, i.e. 2^r ≥ m + r + 1.
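The inequality can be solved numerically; for example, the 7-bit ASCII case above needs 4 check bits:

```python
def redundant_bits_needed(m: int) -> int:
    """Smallest r such that 2**r >= m + r + 1 (bit locations plus 'no error')."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (4, 7, 32):
    print(m, redundant_bits_needed(m))  # 4 -> 3, 7 -> 4, 32 -> 6
```

For 7 data bits, r = 4 gives 2^4 = 16 ≥ 7 + 4 + 1 = 12, while r = 3 gives only 8 < 11, which is why an 11-bit Hamming codeword carries 4 check bits.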
ELEMENTARY DATA LINK PROTOCOLS
When a frame arrives at the receiver, the checksum is recomputed. If the checksum in the frame is
incorrect (i.e., there was a transmission error), the data link layer is so informed (event = cksum_err).
If the inbound frame arrived undamaged, the data link layer is also informed (event = frame_arrival),
so that it can acquire the frame for inspection using from_physical_layer.
As soon as the receiving data link layer has acquired an undamaged frame, it checks the control
information in the header, and, if everything is all right, passes the packet portion to the network layer.
Under no circumstances is a frame header ever given to a network layer.
Five data structures are defined there: boolean, seq_nr, packet, frame_kind, and frame.
A boolean is an enumerated type and can take on the values true and false.
A seq_nr is a small integer used to number the frames so that we can tell them apart.
These sequence numbers run from 0 up to and including MAX_SEQ, which is defined in each protocol
needing it.
After adjustment, A then combines this table with its own table to create a combined table.
Network Layer
ROUTING ALGORITHMS
4. Distance Vector Routing
The combined table may contain some duplicate data.
In the above figure, the combined table of router A
contains duplicate entries, so it keeps only the entries
with the lowest cost.
For example, A can send data to Network 1 in two
ways. The first uses no next router, so it costs
one hop. The second requires two hops (A to B, then B
to Network 1). The first option has the lowest cost;
therefore, it is kept and the second one is dropped.
The process of creating the routing table continues for
all routers. Every router receives the information from
its neighbors and updates its routing table.
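The merge step can be sketched directly. The table layout (destination mapped to a cost and next hop, with `None` meaning directly attached, and a hop count of 1 per link) is an assumption made for illustration:

```python
def merge_tables(own, from_neighbor, neighbor, link_cost=1):
    """Distance-vector merge: for each destination keep the cheaper of the
    existing route and the route via `neighbor` (its advertised cost plus
    the cost of the link to it).
    """
    merged = dict(own)
    for dest, (adv_cost, _) in from_neighbor.items():
        candidate = (adv_cost + link_cost, neighbor)
        if dest not in merged or candidate[0] < merged[dest][0]:
            merged[dest] = candidate
    return merged

# Mirrors the example: A reaches Network 1 directly in one hop; B also
# advertises Network 1, but going via B would cost two hops, so A keeps
# its own entry and learns only genuinely new destinations from B.
table_a = {"Net1": (1, None)}
table_b = {"Net1": (1, None), "Net2": (1, None)}
merged = merge_tables(table_a, table_b, "B")
print(merged)  # {'Net1': (1, None), 'Net2': (2, 'B')}
```

Running this merge for every advertisement a router receives is exactly the repeated process the text describes.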
5. Link State Routing
Link state routing is a technique in which each router shares the knowledge of its neighborhood with
every other router in the internetwork.
The three keys to understand the Link State Routing algorithm:
Knowledge about the neighborhood: Instead of sending its entire routing table, a router sends
information about its neighborhood only. A router broadcasts the identities and costs of its directly
attached links to the other routers.
Flooding: Each router sends the information to its neighbors, and every router that receives the
packet forwards copies to all of its neighbors. Finally, each and every router receives a copy of the
same information. This process is known as flooding.
Information sharing: A router sends the information to every other router only when a change
occurs in its information.
Link State Routing has two phases:
Reliable Flooding
Initial state: Each node knows the cost of its neighbors.
Final state: Each node knows the entire graph.
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.
The link state routing algorithm is also known as Dijkstra's algorithm, which is used to find the shortest path from one node
to every other node in the network.
Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the algorithm, the least-cost paths
are known for k destination nodes.
Let's describe some notations:
c(i, j): Link cost from node i to node j. If nodes i and j are not directly linked, then c(i, j) = ∞.
D(v): The cost of the path from the source node to destination v that currently has the least cost.
P(v): The previous node (a neighbor of v) along the current least-cost path from the source to v.
N: It is the total number of nodes available in the network.
Let's understand through an example:
Use this formula : D(B) = min( D(B) , D(D) + c(D,B) )
In the above figure, source vertex is A.
Step 1:
The first step is an initialization step. The currently known least cost path from A to its directly attached neighbors, B, C, D
are 2,5,1 respectively. The cost from A to B is set to 2, from A to D is set to 1 and from A to C is set to 5. The cost from A to
E and F are set to infinity as they are not directly linked to A.
Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)
1 A 2,A 5,A 1,A ∞ ∞
Step 2:
In the above table, we observe that vertex D contains the least cost path in step 1. Therefore, it is added in N. Now, we need
to determine a least-cost path through D vertex.
a) Calculating shortest path from A to B
v = B, w = D
D(B) = min( D(B) , D(D) + c(D,B) ) = min( 2, 1+2 ) = min( 2, 3 )
The minimum value is 2. Therefore, the currently shortest path from A to B is 2.
b) Calculating shortest path from A to C
v = C, w = D
D(C) = min( D(C) , D(D) + c(D,C) ) = min( 5, 1+3 ) = min( 5, 4 )
The minimum value is 4. Therefore, the currently shortest path from A to C is 4.
c) Calculating shortest path from A to E
v = E, w = D
D(E) = min( D(E) , D(D) + c(D,E) ) = min( ∞, 1+1 ) = min( ∞, 2 )
The minimum value is 2. Therefore, the currently shortest path from A to E is 2.
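The hand calculation above can be checked with a small Dijkstra implementation. Only the link costs actually stated in the example are used (A-B=2, A-C=5, A-D=1, D-B=2, D-C=3, D-E=1); node F is omitted because its links are not given in this excerpt.

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source; graph maps node -> {neighbor: cost}."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u  # found a cheaper path via u
                heapq.heappush(pq, (nd, v))
    return dist, prev

graph = {
    "A": {"B": 2, "C": 5, "D": 1},
    "B": {"A": 2, "D": 2},
    "C": {"A": 5, "D": 3},
    "D": {"A": 1, "B": 2, "C": 3, "E": 1},
    "E": {"D": 1},
}
dist, prev = dijkstra(graph, "A")
print(dist["B"], dist["C"], dist["E"])  # 2 4 2, matching the worked steps
```

The results agree with the table: D(B)=2 via A, D(C)=4 via D, and D(E)=2 via D.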
This method is easy on the router's CPU but may cause the problem of duplicate packets being received
from peer routers.
Reverse path forwarding is a technique in which a router knows in advance the predecessor
from which it should receive a broadcast. This technique is used to detect and discard duplicates.
8. Multicast Routing
Some applications, such as a multiplayer game or live video of a sports event streamed to many viewing
locations, send packets to multiple receivers.
Unless the group is very small, sending a distinct packet to each receiver is expensive.
On the other hand, broadcasting a packet is wasteful if the group consists of, say, 1000 machines on a million-
node network, so that most receivers are not interested in the message (or worse yet, they are definitely
interested but are not supposed to see it).
Thus, we need a way to send messages to well-defined groups that are numerically large in size but small
compared to the network as a whole.
Sending a message to such a group is called multicasting, and the routing algorithm used is called multicast
routing.
Multicast routing schemes build on the broadcast routing schemes we have already studied, sending packets
along spanning trees to deliver the packets to the members of the group while making efficient use of
bandwidth.
However, the best spanning tree to use depends on whether the group is dense, with receivers scattered over
most of the network, or sparse, with much of the network not belonging to the group.
9. Anycast Routing
So far, we have covered delivery models in which a source sends to a single destination
(called unicast), to all destinations (called broadcast), and to a group of destinations
(called multicast).
Another delivery model, called anycast is sometimes also useful.
In anycast, a packet is delivered to the nearest member of a group. Schemes that find
these paths are called anycast routing.
10. Routing for Mobile Hosts
Millions of people use computers while on the go, from truly mobile situations with wireless devices in moving
cars, to nomadic situations in which laptop computers are used in a series of different locations.
We use the term mobile hosts to mean either category, as distinct from stationary hosts that never move.
Mobile hosts introduce a new complication: to route a packet to a mobile host, the network first has to find it.
Assumed model :
The model of the world that we will consider is one in which all hosts are assumed to have a permanent home
location that never changes.
Each host has a permanent home address that can be used to determine home location.
Like the telephone number 1-212-5551212 indicates the United States (country code 1) and Manhattan (212).
Feature:
The basic idea used for mobile routing in the Internet and cellular networks is for the mobile host to tell a host at
the home location where it is now.
This host, which acts on behalf of the mobile host, is called the home agent.
Once it knows where the mobile host is currently located, it can forward packets so that they are delivered. The figure
shows mobile routing in action.
The local address is called the care-of address.
Once the mobile host has this address, it can tell its home agent where it is. It does this by sending a registration
message to the home agent giving the care-of address.
The message is shown with a dashed line in the figure to indicate that it is a control message, not a data message.
The sender sends a data packet to the mobile host using its permanent address.
This packet is routed by the network to the host's home location, because that is where the home address belongs.
The home agent encapsulates the packet with a new header and sends this bundle to the care-of address.
This mechanism is called tunneling. It is very important on the Internet, so we will look at it in more detail later.
When the encapsulated packet arrives at the care-of address, the mobile host unwraps it and retrieves the
packet from the sender.
The overall route is called triangle routing because it is circuitous if the remote location is far from the
home location.
As part of step 4, the sender learns the current care-of address.
Subsequent packets can be routed directly to the mobile host by tunneling them to the care-of address (step 5),
bypassing the home location.
If connectivity is lost for any reason as the mobile host moves, the home address can always be used to reach the
mobile host.
11. Routing in Ad Hoc Networks
We have now seen how to do routing when the hosts are mobile but the routers are fixed.
An even more extreme case is one in which the routers themselves are mobile.
Among the possibilities are emergency workers at an earthquake site, military vehicles on a battlefield, a fleet
of ships at sea, or a gathering of people with laptop computers in an area lacking 802.11.
In all these cases, and others, each node communicates wirelessly and acts as both a host and a router.
Networks of nodes that just happen to be near each other are called ad hoc networks or MANETs (Mobile
Ad hoc NETworks).
With a wired network, if a router has a valid path to some destination, that path continues to be valid barring
failures, which are hopefully rare.
With an ad hoc network, the topology may be changing all the time, so the desirability and even the validity
of paths can change spontaneously without warning.
A common approach is a reactive/on-demand routing protocol. In this type of routing, the route is discovered only when it is
required/needed. The process of route discovery occurs by flooding route request packets throughout the
mobile network.
It consists of two phases:
Route Discovery: This phase determines the most optimal path for the transmission of data packets
between the source and the destination mobile nodes.
Route Maintenance: This phase performs the maintenance work of the route, since the topology of a
mobile ad-hoc network is dynamic in nature and hence there are many cases of link breakage resulting
in network failures between the mobile nodes.
CONGESTION CONTROL ALGORITHMS
Too many packets present in (a part of) the network causes packet delay and loss that degrades performance.
This situation is called congestion. Congestion control refers to the techniques used to control or prevent
congestion. Congestion control techniques can be broadly classified into two categories:
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it happens. The congestion
control is handled either by the source or the destination.
Policies adopted by open loop congestion control –
1. Retransmission Policy:
This is the policy that governs how retransmission of packets is handled. If the sender feels that a sent packet is
lost or corrupted, the packet needs to be retransmitted. Retransmission may increase the congestion in the
network.
To prevent this, retransmission timers must be designed to prevent congestion while still being able to
optimize efficiency.
2. Window Policy: The type of window at the sender's side may also affect congestion. With a Go-Back-N window,
several packets are resent, although some of them may have been received successfully at the receiver's side.
This duplication may increase the congestion in the network and make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.
3. Discarding Policy: A good discarding policy is one in which the routers prevent congestion by
discarding corrupted or less-sensitive packets while still maintaining the quality of the message.
In the case of audio file transmission, routers can discard less sensitive packets to prevent congestion and also
maintain the quality of the audio file.
4. Acknowledgment Policy: Since acknowledgements are also part of the load on the network, the
acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used
to prevent congestion related to acknowledgments: the receiver can send one acknowledgement for N packets
rather than an acknowledgement for each packet, or it can send an acknowledgment only if
it has a packet to send or a timer expires.
5. Admission Policy: Under an admission policy, a mechanism is used to prevent congestion. Switches in a
flow should first check the resource requirement of a network flow before transmitting it further. If there is a
chance of congestion, or there is already congestion in the network, the router should deny establishing a
virtual-circuit connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed Loop Congestion Control
Closed loop congestion control technique is used to treat or alleviate congestion after it happens. Several
techniques are used by different protocols; some of them are:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from the upstream node. This may
cause the upstream node or nodes to become congested and reject data from the nodes above them.
Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of the data
flow. The backpressure technique can be applied only to virtual circuits, where each node has information about
its upstream node. In the diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd
node may become congested due to the slowing of the output data flow. Similarly, the 1st node may get
congested and inform the source to slow down.
2. Choke Packet Technique: The choke packet technique is applicable to both virtual-circuit networks and datagram subnets. A
choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the
utilization at each of its output lines. Whenever the resource utilization exceeds the threshold value set by the
administrator, the router directly sends a choke packet to the source, giving it feedback to reduce the traffic. The
intermediate nodes through which the packet has traveled are not warned about the congestion.
3. Implicit Signaling: In implicit signaling, there is no communication between the congested nodes and the source. The
source guesses that there is congestion in the network. For example, when a sender sends several packets and there is no
acknowledgment for a while, one assumption is that there is congestion.
4. Explicit Signaling: In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or
destination to inform it about the congestion. The difference from the choke packet technique is that the signal is
included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling: In forward signaling, a signal is sent in the direction of the congestion. The
destination is warned about the congestion, and the receiver in this case adopts policies to prevent further
congestion.
Backward Signaling: In backward signaling, a signal is sent in the opposite direction of the congestion.
The source is warned about the congestion and needs to slow down.
INTERNETWORKING
Until now, we have implicitly assumed that there is a single homogeneous network, with each machine using
the same protocol in each layer. Unfortunately, this assumption is wildly optimistic. Many different networks
exist, including PANs, LANs, MANs, and WANs. We have described Ethernet, Internet over cable, the fixed
and mobile telephone networks, 802.11, 802.16, and more. Numerous protocols are in widespread use across
these networks in every layer. In the following sections, we will take a careful look at the issues that arise
when two or more networks are connected to form an internetwork, or more simply an internet.
1. How Networks Differ
There are two basic choices for connecting different networks: we can build devices that translate or convert
packets from each kind of network into packets for each other network, or, like good computer scientists, we
can try to solve the problem by adding a layer of indirection and building a common layer on top of the
different networks.
In either case, the devices are placed at the boundaries between networks.
Let us first explore at a high level how interconnection with a common network layer can be used to
interconnect dissimilar networks.
An internet comprised of 802.11, MPLS(Multiprotocol Label Switching), and Ethernet networks is shown
in Fig. 5-39(a).
Suppose that the source machine on the 802.11 network wants to send a packet to the destination machine on
the Ethernet network.
Since these technologies are different, and they are further separated by another kind of network (MPLS),
some added processing is needed at the boundaries between the networks.
3 Tunneling
If two geographically separate networks want to communicate with each other, they may deploy a dedicated
line between them, or they have to pass their data through intermediate networks.
Tunneling is a mechanism by which two or more networks of the same kind communicate with each other,
bypassing the complexities of the intermediate networks. Tunneling is configured at both ends.
When data enters one end of the tunnel, it is tagged. The tagged data is then routed inside the
intermediate, or transit, network to reach the other end of the tunnel. When data exits the tunnel, its tag is
removed and the data is delivered to the other part of the network.
Both ends appear as if they were directly connected, and the tagging lets the data travel through the transit
network without any modification.
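The tag-and-route idea can be sketched as encapsulation and decapsulation at the two tunnel endpoints. This is an illustrative model only: real tunnels (e.g., IP-in-IP or GRE) prepend an actual outer IP header, while here a simple dictionary stands in for it, and all field names are made up:

```python
# Sketch of tunneling: one endpoint wraps (tags) the inner packet in an outer
# header addressed to the far tunnel endpoint; the transit network routes only
# on the outer addresses and never inspects the inner packet. Names are
# illustrative, not a real encapsulation format.

def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    # The "tag" is the outer header: the transit network sees only these
    # addresses, so the inner packet crosses it without modification.
    return {"outer_src": tunnel_src,
            "outer_dst": tunnel_dst,
            "payload": inner_packet}

def decapsulate(outer_packet: dict) -> dict:
    # At the far end of the tunnel the tag (outer header) is stripped off
    # and the original inner packet is delivered unchanged.
    return outer_packet["payload"]

inner = {"src": "10.0.1.5", "dst": "10.0.2.9", "data": b"hello"}
tunneled = encapsulate(inner, "203.0.113.1", "198.51.100.7")
assert decapsulate(tunneled) == inner  # the inner packet survives untouched
```

Because only the outer addresses are visible in transit, the two endpoint networks behave as if they were directly connected.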
4 Packet Fragmentation
Most Ethernet segments have their maximum transmission unit (MTU) fixed at 1500 bytes.
A data packet can be longer or shorter than this, depending on the application.
Devices in the transit path also have hardware and software limits that determine how much data they can handle
and what size of packet they can process.
If the data packet size is less than or equal to the size the transit network can handle, it is processed
normally.
If the packet is larger, it is broken into smaller pieces and then forwarded. This is called packet fragmentation.
Each fragment carries the same source and destination addresses, so it can be routed through the transit path
independently. At the receiving end, the fragments are reassembled.
If a packet with the DF (don't fragment) bit set to 1 arrives at a router that cannot handle the packet because of its
length, the packet is dropped.
When a router receives a packet whose MF (more fragments) bit is set to 1, the router knows that it is a
fragment and that more parts of the original packet are on the way.
If a packet is fragmented into pieces that are too small, the header overhead increases; if the fragments are too large,
an intermediate router may not be able to process them and they may be dropped.
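The fragment-and-reassemble process above can be sketched as follows. This is a simplified, IPv4-inspired model: each fragment records an offset and an MF-style more-fragments flag, but the field names and structure are illustrative, not the actual IPv4 header layout (real IPv4 offsets, for instance, are counted in 8-byte units):

```python
# Sketch of packet fragmentation: split a payload into MTU-sized pieces, each
# tagged with its byte offset and a more-fragments (MF) flag; the receiver
# sorts by offset and concatenates. Simplified model, not real IPv4 headers.

def fragment(payload: bytes, mtu: int) -> list[dict]:
    frags = []
    for off in range(0, len(payload), mtu):
        piece = payload[off:off + mtu]
        frags.append({
            "offset": off,
            "mf": off + mtu < len(payload),  # True unless this is the last piece
            "data": piece,
        })
    return frags

def reassemble(frags: list[dict]) -> bytes:
    # The receiver orders fragments by offset and joins them back together;
    # fragments may arrive out of order, so sorting is essential.
    return b"".join(f["data"] for f in sorted(frags, key=lambda f: f["offset"]))

data = b"x" * 3500                 # larger than a 1500-byte MTU
frags = fragment(data, 1500)
assert len(frags) == 3             # 1500 + 1500 + 500 bytes
assert frags[0]["mf"] and not frags[-1]["mf"]  # only the last clears MF
assert reassemble(frags) == data
```

The MF flag here plays the role described above: any fragment with it set tells the receiver that more pieces of the original packet are still on the way.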
5 Internetwork Routing
In real-world scenarios, networks under the same administration are often scattered geographically.
There may be a requirement to connect two networks of the same kind, or of different kinds.
Routing between two such networks is called internetwork routing.
Networks can be considered different based on various parameters such as protocol, topology, Layer-2
technology, and addressing scheme.
In internetworking, routers have knowledge of each other's addresses and of the addresses beyond them.
They can be statically configured with routes to the other networks, or they can learn them by using an
internetwork routing protocol.
Routing protocols used within an organization or administration are called Interior Gateway
Protocols (IGPs); RIP and OSPF are examples of IGPs.
Routing between different organizations or administrations uses an Exterior Gateway Protocol (EGP); in
practice there is only one EGP in use, the Border Gateway Protocol (BGP).
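The "statically configured" case above can be illustrated with a small routing table and a longest-prefix-match lookup, which is how a router chooses among overlapping routes to other networks. The prefixes and next-hop names below are made-up examples; the sketch uses Python's standard `ipaddress` module:

```python
# Sketch of a statically configured routing table with longest-prefix
# matching. Prefixes and next-hop names are illustrative examples.

import ipaddress

ROUTES = [  # (destination prefix, next hop)
    (ipaddress.ip_network("10.0.0.0/8"),  "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-B"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gw"),  # default route
]

def next_hop(dst: str) -> str:
    """Return the next hop for a destination, preferring the most specific
    (longest) matching prefix, as real routers do."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert next_hop("10.1.2.3") == "router-B"    # /16 is more specific than /8
assert next_hop("10.9.9.9") == "router-A"    # only the /8 matches
assert next_hop("8.8.8.8") == "default-gw"   # falls through to the default
```

A dynamic routing protocol (an IGP such as RIP or OSPF, or BGP between administrations) would populate and update a table like `ROUTES` automatically instead of it being configured by hand.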
Examples of the network layer
1 The IP Version 4 Protocol
2 IP Addresses
3 IP Version 6
4 Internet Control Protocols
5 Label Switching and MPLS
6 OSPF—An Interior Gateway Routing Protocol
7 BGP—The Exterior Gateway Routing Protocol
8 Internet Multicasting
9 Mobile IP