
Maharaja Institute of Technology Mysore Department of Computer Science and Engineering

Module 3

In real life, we have links with limited bandwidths. The wise use of these bandwidths has
been one of the main challenges of electronic communications. However, the meaning of
wise may depend on the application. Sometimes it is necessary to combine several low-
bandwidth channels to make use of one channel with a larger bandwidth. Sometimes it is
necessary to expand the bandwidth of a channel to achieve goals such as privacy and
antijamming. There are two broad categories of bandwidth utilization: multiplexing and
spectrum spreading. In multiplexing, the main goal is efficiency: several channels are
combined into one. In spectrum spreading, the goals are privacy and antijamming.

MULTIPLEXING:
The set of techniques that allow the simultaneous transmission of multiple signals across a
single data link is called Multiplexing.
In a multiplexed system, n lines share the bandwidth of one link. Figure 3.1 shows the basic
format of a multiplexed system. At the sending side, many lines direct their transmission
streams to a multiplexer (MUX), which combines them into a single stream (many-to-one).
At the receiving end, that stream is fed into a demultiplexer (DEMUX), which
separates the stream back into its component transmissions (one-to-many) and directs them to
their corresponding lines. In the figure, the word link refers to the physical path. The word
channel refers to the portion of a link that carries a transmission between a given pair of
lines. One link can have many (n) channels.

Fig.3.1 dividing a link into channels

Data Communication (18CS46) Module 3


There are three basic multiplexing techniques:


 Frequency-division multiplexing,
 Wavelength-division multiplexing, and
 Time-division multiplexing
The first two are techniques designed for analog signals; the third is designed for digital signals.

Fig.3.2 categories of multiplexing

Frequency-Division Multiplexing (FDM): FDM is an analog multiplexing


technique that combines analog signals. It can be applied when the bandwidth of a link is
greater than the combined bandwidths of the signals to be transmitted. In FDM, signals
generated by each sending device modulate different carrier frequencies. These modulated
signals are then combined into a single composite signal that can be transported by the link.
Carrier frequencies are separated by sufficient bandwidth to accommodate the modulated
signal. These bandwidth ranges are the channels which can be separated by strips of unused
bandwidth—guard bands—to prevent signals from overlapping. Carrier frequencies must
not interfere with the original data frequencies.
Figure 3.3 gives a conceptual view of FDM. In this illustration, the transmission path is
divided into three parts, each representing a channel that carries one transmission.
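This channel layout can be sketched with a small helper (the function name and the kHz values below are invented for illustration, not taken from the figure):

```python
def fdm_carriers(link_low_hz, channel_bw_hz, guard_bw_hz, n_channels):
    """Place n_channels carrier frequencies inside a link, separating
    adjacent channels with guard bands (illustrative sketch only)."""
    carriers = []
    edge = link_low_hz
    for _ in range(n_channels):
        carriers.append(edge + channel_bw_hz / 2)  # carrier at the channel center
        edge += channel_bw_hz + guard_bw_hz        # next channel begins after a guard band
    return carriers

# Three 4-kHz channels with 1-kHz guard bands, starting at 20 kHz:
print(fdm_carriers(20_000, 4_000, 1_000, 3))  # carriers at 22, 27, and 32 kHz
```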

Fig.3.3 FDM


Multiplexing Process
Figure 3.4 is a conceptual illustration of the multiplexing process. Each source generates a
signal of a similar frequency range. Inside the multiplexer, these similar signals modulate
different carrier frequencies (f1, f2, and f3). The resulting modulated signals are then
combined into a single composite signal that is sent out over a media link that has enough
bandwidth to accommodate it.

Fig.3.4 FDM Multiplexing example

Demultiplexing Process
The demultiplexer uses a series of filters to decompose the multiplexed signal into its
constituent component signals. The individual signals are then passed to a demodulator that
separates them from their carriers and passes them to the output lines. Figure 3.5 is
a conceptual illustration of demultiplexing process.

Fig.3.5 FDM Demultiplexing example


Other Applications of FDM


A very common application of FDM is AM and FM radio broadcasting. Radio uses
the air as the transmission medium. A special band from 530 to 1700 kHz is assigned to AM
radio. All radio stations need to share this band. However, FM has a wider band of 88 to 108
MHz because each station needs a bandwidth of 200 kHz.
Another common use of FDM is in television broadcasting. Each TV channel has its
own bandwidth of 6 MHz.
The first generation of cellular telephones also uses FDM. Each user is assigned two
30-kHz channels, one for sending voice and the other for receiving. The voice signal, which
has a bandwidth of 3 kHz is modulated by using FM.
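These allocations are easy to sanity-check; for instance, the number of 200-kHz FM stations that fit in the 88 to 108 MHz band:

```python
# How many 200-kHz FM stations fit in the 88-108 MHz FM band?
fm_band_hz = (108 - 88) * 1_000_000   # 20 MHz of FM spectrum
station_bw_hz = 200_000               # bandwidth needed per FM station
n_stations = fm_band_hz // station_bw_hz
print(n_stations)  # 100
```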

Wavelength-Division Multiplexing (WDM): WDM is an analog multiplexing


technique to combine optical signals. Here, the multiplexing and demultiplexing involve
optical signals transmitted through fiber-optic channels. It combines different signals of
different frequencies. The difference is that the frequencies are very high. It is designed to
use the high-data-rate capability of fiber-optic cable.
Figure 3.6 gives a conceptual view of a WDM multiplexer and demultiplexer. Very
narrow bands of light from different sources are combined to make a wider band of light. At
the receiver, the signals are separated by the demultiplexer.

Fig.3.6 WDM
The basic idea of WDM is very simple. It combines multiple light sources into one single
light at the multiplexer and does the reverse at the demultiplexer. The combining and splitting
of light sources are easily handled by a prism. Using this technique, a multiplexer can be
made to combine several input beams of light, each containing a narrow band of frequencies,
into one output beam of a wider band of frequencies. A demultiplexer can also be made to
reverse the process. Figure 3.7 shows the concept.

Fig 3.7 prisms in wavelength division multiplexing and demultiplexing


One application of WDM is the SONET network, in which multiple optical fiber lines are
multiplexed and demultiplexed.

Time-Division Multiplexing: Time-division multiplexing (TDM) is a digital


multiplexing technique for combining several low-rate channels into one high-rate one. It
allows several connections to share the high bandwidth of a link on a time basis. Each
connection occupies a portion of time in the link. Figure 3.8 gives a conceptual view of
TDM. In the figure, portions of signals 1, 2, 3, and 4 occupy the link sequentially.

Fig 3.8 TDM


TDM can be divided into two different schemes: Synchronous and Statistical.
Synchronous TDM
In synchronous TDM, each input connection has an allotment in the output even if it is not
sending data.


Time Slots and Frames


In synchronous TDM, the data flow of each input connection is divided into units, where
each input occupies one input time slot. A unit can be 1 bit, one character, or one block of
data. Each input unit becomes one output unit and occupies one output time slot. The
duration of an output time slot is n times shorter than the duration of an input time slot. If an
input time slot is T s, the output time slot is T/n s, where n is the number of connections.
Figure 3.9 is an example of synchronous TDM where n is 3.

Fig 3.9 Synchronous TDM


In synchronous TDM, if there are n connections, a frame is divided into n time slots and one
slot is allocated for each unit, one for each input line. If the duration of the input unit is T, the
duration of each slot is T/n and the duration of each frame is T.
In synchronous TDM, the data rate of the link is n times faster, and the unit duration is
n times shorter.
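These timing relationships can be sketched as follows (the helper and the 1-kbps example values are assumptions for illustration):

```python
def tdm_output(input_rate_bps, unit_bits, n):
    """Synchronous TDM with n connections: the link data rate is n times
    the input rate, and the output slot lasts T/n if an input slot lasts T.
    (Illustrative helper, not from the text.)"""
    T = unit_bits / input_rate_bps        # duration of one input time slot
    return input_rate_bps * n, T, T / n   # link rate, input slot, output slot

rate, T, slot = tdm_output(1_000, 1, 3)   # three 1-kbps lines, 1-bit units
print(rate)      # link must run at 3000 bps
print(T, slot)   # input slot 0.001 s; output slot a third as long
```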
Interleaving
TDM can be visualized as two fast-rotating switches, one on the multiplexing side and the
other on the demultiplexing side. The switches are synchronized and rotate at the same speed,
but in opposite directions. On the multiplexing side, as the switch opens in front of a
connection, that connection has the opportunity to send a unit onto the path. This process is
called interleaving. On the demultiplexing side, as the switch opens in front of a connection,
that connection has the opportunity to receive a unit from the path. Figure 3.10 shows the
interleaving process for the connection shown in Figure 3.9.
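The rotating-switch behavior can be modeled as simple round-robin interleaving (a minimal sketch; the unit labels are made up, and equal-length input lines are assumed):

```python
def interleave(lines):
    """Round-robin interleaving: take one unit from each line per frame,
    as the rotating switch in Fig. 3.10 does."""
    frames = []
    for units in zip(*lines):   # one frame = one unit from every line
        frames.extend(units)
    return frames

def deinterleave(stream, n):
    """Reverse operation at the demultiplexer: deal units back to n lines."""
    return [stream[i::n] for i in range(n)]

stream = interleave([["A1", "A2"], ["B1", "B2"], ["C1", "C2"]])
print(stream)                   # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2']
print(deinterleave(stream, 3))  # recovers the three original lines
```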


Fig 3.10 Interleaving


Data Rate Management
One problem with TDM is how to handle a disparity in the input data rates, that is, when the
data rates of the input lines are not the same. Three strategies can be used: multilevel
multiplexing, multiple-slot allocation, and pulse stuffing.

Multilevel Multiplexing
Multilevel multiplexing is a technique used when the data rate of an input line is a multiple of
others.
For example, in Figure 3.11, there are two inputs of 20 kbps and three inputs of 40 kbps. The first
two input lines can be multiplexed together to provide a data rate equal to the last three. A
second level of multiplexing can create an output of 160 kbps.

Fig 3.11 multilevel multiplexing

Multiple-Slot Allocation
It allots more than one slot in a frame to a single input line.
It is used, for example, when an input line has a data rate that is a multiple of another's. In Figure 3.12,
the input line with a 50-kbps data rate can be given two slots in the output. Then a
demultiplexer is inserted in the line to make two inputs out of one.


Fig 3.12 Multiple slot allocation

Pulse Stuffing
Sometimes the bit rates of sources are not multiple integers of each other. Therefore, neither
of the above two techniques can be applied. One solution is to make the highest input data
rate the dominant data rate and then add dummy bits to the input lines with lower rates. This
will increase their rates. This technique is called pulse stuffing, bit padding, or bit stuffing.
The idea is shown in Figure 3.13. The input with a data rate of 46 kbps is pulse-stuffed to increase
the rate to 50 kbps.
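A minimal sketch of pulse stuffing, using the 46-to-50 example from the figure scaled down to plain bits per second (the helper name is an assumption):

```python
def pulse_stuff(bits, rate_bps, target_rate_bps):
    """Pad one second's worth of bits from a slower line up to the dominant
    rate by appending dummy bits (bit padding); the receiver discards them.
    (Illustrative sketch, not a standard API.)"""
    dummy_count = target_rate_bps - rate_bps
    return bits + [0] * dummy_count   # dummy bits appended at the end

one_second = [1] * 46                      # 46 bits in one second (46 bps)
stuffed = pulse_stuff(one_second, 46, 50)  # stuff up to the 50-bps dominant rate
print(len(stuffed))  # 50
```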

Fig 3.13 Pulse stuffing

Frame Synchronizing
Synchronization between the multiplexer and demultiplexer is a major issue. If the
multiplexer and the demultiplexer are not synchronized, a bit belonging to one channel may
be received by the wrong channel. For this reason, one or more synchronization bits are
usually added to the beginning of each frame. These bits, called framing bits, allow the
demultiplexer to synchronize with the incoming stream so that it can separate the time slots
accurately. This synchronization information consists of 1 bit per frame, alternating between
0 and 1, as shown in Figure 3.14.


Fig 3.14 Framing bits

Statistical TDM

In synchronous TDM, each input has a reserved slot in the output frame. Hence it can be
inefficient if some input lines have no data to send. This drawback of synchronous TDM can
be overcome by statistical TDM, in which slots are dynamically allocated to improve
bandwidth efficiency. In statistical multiplexing, the number of slots in each frame is less
than the number of input lines. The multiplexer checks each input line in round robin fashion;
it allocates a slot for an input line if the line has data to send; otherwise, it skips the line and
checks the next line.
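The round-robin scan can be sketched like this (the line contents, slot count, and slot format are invented for illustration):

```python
def statistical_tdm(lines, slots_per_frame):
    """One round of statistical TDM: scan the input lines round-robin and
    fill at most slots_per_frame slots, each carrying (line address, data).
    Lines with nothing to send are skipped (illustrative sketch)."""
    frame = []
    for addr, queue in enumerate(lines):
        if len(frame) == slots_per_frame:
            break
        if queue:                          # only lines with data get a slot
            frame.append((addr, queue.pop(0)))
    return frame

lines = [["A1"], [], ["C1"], ["D1"]]   # line 1 is idle this round
print(statistical_tdm(lines, 3))       # [(0, 'A1'), (2, 'C1'), (3, 'D1')]
```

Note that each slot pairs the data with the source line's address, since there is no preassigned relationship between inputs and output slots.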
Figure 3.15 shows a synchronous and a statistical TDM example.

Fig 3.15 TDM slot comparison

Addressing
An output slot in synchronous TDM is totally occupied by data; hence there is no need for
addressing. Synchronization and preassigned relationships between the inputs and outputs
serve as an address.


In statistical TDM, a slot needs to carry data as well as the address of the destination. Here,
there is no fixed relationship between the inputs and outputs because there are no preassigned
or reserved slots. It is necessary to include the address of the receiver inside each slot to show
where it is to be delivered. The addressing in its simplest form can be n bits to define N
different output lines with n = log2 N.
Slot Size
Since a slot carries both data and an address in statistical TDM, the ratio of the data size to
address size must be reasonable to make transmission efficient.
No Synchronization Bit
There is another difference between synchronous and statistical TDM, but this time it is at the
frame level. The frames in statistical TDM need not be synchronized, so we do not need
synchronization bits.
Bandwidth
In statistical TDM, the capacity of the link is normally less than the sum of the capacities of
each channel. The designers of statistical TDM define the capacity of the link based on the
statistics of the load for each channel.

SPREAD SPECTRUM:
Spread spectrum is designed to be used in wireless applications (LANs and WANs). It also
combines signals from different sources to fit into a larger bandwidth, but the goals are
privacy and antijamming. To achieve these goals, spread spectrum techniques add
redundancy; they spread the original spectrum needed for each station. If the required
bandwidth for each station is B, spread spectrum expands it to Bss, such that Bss >> B. The
expanded bandwidth allows the source to wrap its message in a protective envelope for a
more secure transmission.
Figure 3.16 shows the idea of spread spectrum. Spread spectrum achieves its goals through
two principles:
1. The bandwidth allocated to each station needs to be, by far, larger than what is
needed. This allows redundancy.
2. The expanding of the original bandwidth B to the bandwidth Bss must be done by a
process that is independent of the original signal. In other words, the spreading
process occurs after the signal is created by the source.


Fig.3.16 Spread spectrum

After the signal is created by the source, the spreading process uses a spreading code and
spreads the bandwidth. The above figure shows the original bandwidth B and the spread
bandwidth BSS. The spreading code is a series of numbers that look random but are actually a pattern.
There are two techniques to spread the bandwidth: Frequency hopping spread spectrum
(FHSS) and Direct sequence spread spectrum (DSSS).

Frequency hopping spread spectrum (FHSS)


The frequency hopping spread spectrum (FHSS) technique uses M different carrier
frequencies that are modulated by the source signal. The modulation is done using one carrier
frequency at a time, M frequencies are used in the long run. The bandwidth occupied by a
source after spreading is BFHSS >> B.
Figure 3.17 shows the general layout for FHSS. A pseudorandom code generator, called
pseudorandom noise (PN), creates a k-bit pattern for every hopping period Th. The
frequency table uses the pattern to find the frequency to be used for this hopping period and
passes it to the frequency synthesizer. The frequency synthesizer creates a carrier signal of
that frequency, and the source signal modulates the carrier signal.
In this case, M is 8 and k is 3. The pseudorandom code generator will create eight different 3-
bit patterns. These are mapped to eight different frequencies in the frequency table (see
Figure 3.18). The pattern for this station is 101, 111, 001, 000, 010, 011, 100. Note that this
pattern is pseudorandom; it repeats after eight hoppings.
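The table lookup per hopping period can be sketched as follows (the carrier values in kHz are placeholders, not the ones in Fig. 3.18; the hop pattern is the one listed above):

```python
# FHSS frequency selection: a k-bit pseudorandom pattern indexes a table of
# M = 2**k carrier frequencies, one per hopping period.
freq_table = {0b000: 200, 0b001: 300, 0b010: 400, 0b011: 500,
              0b100: 600, 0b101: 700, 0b110: 800, 0b111: 900}  # kHz (made up)

hop_pattern = [0b101, 0b111, 0b001, 0b000, 0b010, 0b011, 0b100]
carriers = [freq_table[p] for p in hop_pattern]   # carrier per hopping period
print(carriers)  # [700, 900, 300, 200, 400, 500, 600]
```

A receiver that knows the same pseudorandom pattern hops through the same carriers in lockstep; an eavesdropper or jammer without the pattern cannot follow.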


Fig.3.17 FHSS

Fig.3.18 Frequency selection in FHSS

Figure 3.19 shows how the signal hops around from carrier to carrier. Assume the required
bandwidth of the original signal is 100 kHz.
It can be shown that this scheme can accomplish the previously mentioned goals. If there are
many k-bit patterns and the hopping period is short, a sender and receiver can have privacy.
The scheme also has an antijamming effect. A malicious sender may be able to send noise to
jam the signal for one hopping period (randomly), but not for the whole period.


Fig.3.19 FHSS cycles

Direct Sequence Spread Spectrum (DSSS)


The direct sequence spread spectrum (DSSS) technique also expands the bandwidth of the
original signal, replacing each data bit with n bits using a spreading code. Each bit is assigned
a code of n bits, called chips, where the chip rate is n times that of the data bit. Figure 3.20
shows the concept of DSSS.

Fig.3.20 DSSS

For example, let us consider the sequence used in a wireless LAN, the famous Barker
sequence, where n is 11. Assume that the original signal and the chips in the chip generator
use polar NRZ encoding. Figure 3.21 shows the chips and the result of multiplying the
original data by the chips to get the spread signal.


Fig.3.21 DSSS example

In Figure 3.21, the spreading code is 11 chips having the pattern 10110111000 (in this case).
If the original signal rate is N, the rate of the spread signal is 11N. The spread signal can
provide privacy if the intruder does not know the code. It can also provide immunity against
interference if each station uses a different code.
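A minimal sketch of DSSS with the 11-chip Barker code 10110111000 in polar NRZ levels (+1/−1), as in Figure 3.21; the despread-by-correlation receiver step is an assumption added for completeness, not something the text spells out:

```python
barker = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]  # 10110111000 in polar NRZ

def dsss_spread(data_bits):
    """Replace each data bit with 11 chips: multiply the bit's polar level
    by the spreading code, so the chip rate is 11 times the data rate."""
    levels = [+1 if b else -1 for b in data_bits]
    return [lvl * chip for lvl in levels for chip in barker]

def dsss_despread(signal):
    """Correlate each 11-chip group against the code to recover the bits."""
    bits = []
    for i in range(0, len(signal), 11):
        corr = sum(s * c for s, c in zip(signal[i:i + 11], barker))
        bits.append(1 if corr > 0 else 0)
    return bits

spread = dsss_spread([1, 0])
print(len(spread), dsss_despread(spread))  # 22 [1, 0]
```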

SWITCHING
Switching is a technique for connecting multiple devices in a network so that any pair can
communicate one-to-one. A switched network consists of a series of interlinked nodes, called switches.
Switches are devices capable of creating temporary connections between two or more devices
linked to the switch. Figure 3.22 shows a switched network.

Fig.3.22 Switched network


The end systems (communicating devices) are labeled A, B, C, D, and so on, and the
switches are labeled I, II, III, IV, and V. Each switch is connected to multiple links.

There are three methods of switching: circuit switching, packet switching and message
switching. Packet switching can further be divided into two subcategories—virtual circuit
approach and datagram approach.

Circuit-switched network:
A circuit-switched network is made of a set of switches connected by physical links, in which
each link is divided into n channels. Circuit switching occurs at the physical layer. A connection between two
stations is a dedicated path made of one or more links. In circuit switching, the resources
need to be reserved during the setup phase; the resources remain dedicated for the entire
duration of data transfer until the teardown phase.

The actual communication in a circuit-switched network requires three phases: connection setup, data transfer, and connection teardown.

Setup Phase
Before the two parties can communicate, a dedicated circuit needs to be established. The end
systems are normally connected through dedicated lines to the switches, so connection setup
means creating dedicated channels between the switches. For example, in Figure 3.23 when
system A needs to connect to system M, it sends a setup request that includes the address of
system M, to switch I. Switch I finds a channel between itself and switch IV that can be
dedicated for this purpose. Switch I then sends the request to switch IV, which finds a
dedicated channel between itself and switch III. Switch III then informs system M of
system A's intention.
In the next step to making a connection, system M will send an acknowledgment in the
opposite direction to system A. Only after system A receives this acknowledgment is the
connection established.
End-to-end addressing is required for creating a connection between the two end systems.
These can be the addresses of the computers assigned by the administrator in a TDM
network, or telephone numbers in an FDM network.

Data-Transfer Phase
After the establishment of the dedicated circuit (channels), the two parties can transfer data.

Teardown Phase
When one of the parties needs to disconnect, a signal is sent to each switch to release
the resources.

Fig.3.23 Circuit Switched network

Efficiency
The circuit-switched networks are not as efficient as the other two types of networks because
resources are allocated during the entire duration of the connection. These resources are
unavailable to other connections.


Delay
In circuit-switched network the delay is minimal. During data transfer the data are not
delayed at each switch; the resources are allocated for the duration of the connection. Figure
3.24 shows the idea of delay in a circuit-switched network when only two switches are
involved.

Fig.3.24 Delay in a Circuit Switched network

As figure 3.24 shows, there is no waiting time at each switch. The total delay is due to the
time needed to create the connection, transfer data, and disconnect the circuit.
 The delay caused by setup = the propagation time of the source computer request + the request signal transfer time + the propagation time of the acknowledgment from the destination computer + the signal transfer time of the acknowledgment
 The delay due to data transfer = the propagation time + the data transfer time
 The delay caused by teardown = the time needed to tear down the circuit
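Summing these components can be sketched as follows (the function name and all the millisecond values are hypothetical):

```python
def circuit_switched_delay(setup_ms, prop_ms, data_xfer_ms, teardown_ms):
    """Total delay in a circuit-switched network: connection setup, then
    propagation plus data-transfer time, then teardown. There is no
    per-switch waiting time because resources are reserved."""
    return setup_ms + (prop_ms + data_xfer_ms) + teardown_ms

# Hypothetical numbers: 2 ms setup, 1 ms propagation, 10 ms transfer, 1 ms teardown
print(circuit_switched_delay(2, 1, 10, 1))  # 14 ms total
```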

Packet-switched network
In packet-switched network, messages are divided into packets of fixed or variable size.
The size of the packet is determined by the network and the governing protocol.
In a packet-switched network, there is no resource reservation, resources are allocated on
demand.


There are two types of packet-switched networks: datagram networks and virtual circuit
networks.

Datagram Networks
In a datagram network, each packet is treated independently of all others. Packets in this
approach are referred to as datagrams.
Datagram switching is normally done at the network layer. Figure 3.25 shows how the
datagram approach is used to deliver four packets from station A to station X. The switches
in a datagram network are traditionally referred to as routers.

Fig.3.25 Datagram network with 4 switches


In this example, all four packets (or datagrams) belong to the same message but may travel
different paths to reach their destination. This approach can cause the datagrams of a
transmission to arrive at their destination out of order with different delays between the
packets. Packets may also be lost or dropped because of a lack of resources.
The datagram networks are sometimes referred to as connectionless networks. The term
connectionless here means that the switch (packet switch) does not keep information about
the connection state. There are no setup or teardown phases. Each packet is treated the same
by a switch regardless of its source or destination.

Routing Table
In this type of network, each switch (or packet switch) has a routing table which is based on
the destination address. The routing tables are dynamic and are updated periodically. The
destination addresses and the corresponding forwarding output ports are recorded in the
tables. Figure 3.26 shows the routing table for a switch.


Fig.3.26 routing table in a datagram network


A switch in a datagram network uses a routing table that is based on the destination
address.

Destination Address
Every packet in a datagram network carries a header that contains, among other information,
the destination address of the packet. When the switch receives the packet, it examines the
destination address and consults the routing table to find the corresponding port through which
the packet should be forwarded. This address remains the same during the entire journey of
the packet.

Efficiency
The efficiency of a datagram network is better than that of a circuit-switched network,
because resources are allocated only when there are packets to be transferred.

Delay
Delay in a datagram network is greater than in a virtual-circuit network. Each packet may
experience a wait at a switch before it is forwarded. Since not all packets in a message
necessarily travel through the same switches, the delay is not uniform for the packets of a
message. Figure 3.27 gives an example of delay in a datagram network for one packet.


Fig.3.27 Delay in a datagram network


The packet travels through two switches. There are three transmission times (3T), three
propagation delays (shown as the slopes of the lines, 3τ), and two waiting times (w1 + w2).
The processing time in each switch is ignored. The total delay is

Total delay = 3T + 3τ + w1 +w2
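The same formula as a small sketch (T, τ, and the waits are hypothetical values in milliseconds; the helper generalizes to any number of switches):

```python
def datagram_delay(T_ms, tau_ms, waits_ms):
    """Total delay for one packet crossing len(waits_ms) switches:
    one transmission time and one propagation delay per hop, plus a
    waiting time at each switch. Two switches give three hops, so the
    total is 3T + 3*tau + w1 + w2, matching the formula above."""
    hops = len(waits_ms) + 1
    return hops * T_ms + hops * tau_ms + sum(waits_ms)

# Hypothetical values: T = 5 ms, tau = 2 ms, waits of 1 ms and 3 ms
print(datagram_delay(5, 2, [1, 3]))  # 3*5 + 3*2 + 1 + 3 = 25 ms
```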

Virtual-Circuit Networks

A virtual-circuit network is a cross between a circuit-switched network and a datagram


network. It has some characteristics of both.
1. As in a circuit-switched network, there are setup and teardown phases in addition to the
data transfer phase.
2. Resources can be allocated during the setup phase, as in a circuit-switched network, or on
demand, as in a datagram network.
3. As in a datagram network, data are packetized, and each packet carries an address in the
header. However, the address in the header has local jurisdiction, not end-to-end jurisdiction
4. As in a circuit-switched network, all packets follow the same path established during the
connection.
5. A virtual-circuit network is normally implemented in the data-link layer, while a circuit-
switched network is implemented in the physical layer and a datagram network in the
network layer.


Figure 3.28 is an example of a virtual-circuit network. The network has switches that allow
traffic from sources to destinations. A source or destination can be a computer, packet switch,
bridge, or any other device that connects other networks.

Fig.3.28 Virtual circuit network

Addressing
In a virtual-circuit network, two types of addressing are involved: global and local (virtual-
circuit identifier).

Global Addressing
A source or a destination needs to have a global address—an address that can be unique in
the scope of the network or internationally if the network is part of an international network.

Virtual-Circuit Identifier
The identifier that is used for data transfer is called the virtual-circuit identifier (VCI) or the
label. A VCI is a small number that has only switch scope; it is used by a frame between two
switches. When a frame arrives at a switch, it has a VCI; when it leaves, it has a different
VCI. Figure 3.29 shows how the VCI in a data frame changes from one switch to another.
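A switch's VCI handling can be sketched as a small lookup table (the ports and VCI numbers here are made up, not those in Fig. 3.29):

```python
# A virtual-circuit switch forwards frames using a table keyed on
# (incoming port, incoming VCI) and rewrites the VCI on the way out,
# so a VCI has only switch scope.
switching_table = {
    (1, 14): (3, 66),   # in on port 1 with VCI 14 -> out on port 3 with VCI 66
    (2, 71): (4, 22),
}

def forward(in_port, in_vci):
    out_port, out_vci = switching_table[(in_port, in_vci)]
    return out_port, out_vci

print(forward(1, 14))  # (3, 66): the frame leaves with a different VCI
```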

Fig.3.29 Virtual circuit identifier


Three Phases
A source and destination need to go through three phases in a virtual-circuit network: setup,
data transfer, and teardown.
Setup Phase
In the setup phase, a switch creates an entry for a virtual circuit. The source and destination
use their global addresses to help switches make table entries for the connection. For
example, suppose source A needs to create a virtual circuit to B. Two steps are required: the
setup request and the acknowledgment.

Setup Request
A setup request frame is sent from the source to the destination. Figure 3.30 shows the
process.
a. Source A sends a setup frame to switch 1.
b. Switch 1 receives the setup request frame. It knows that a frame going from A to B
goes out through port 3. The switch can fill only three of the four columns at this time: it
assigns the incoming port (1), chooses an available incoming VCI (14), and selects the outgoing
port (3). The fourth column, the outgoing VCI, will be filled in during the acknowledgment
step. The switch then forwards the frame through port 3 to switch 2.
c. Switch 2 receives the setup request frame. The same events happen here as at switch 1;
three columns of the table are completed: in this case, incoming port (1), incoming VCI (66),
and outgoing port (2).

Fig.3.30 set-up request in a Virtual circuit network


d. Switch 3 receives the setup request frame. Again, three columns are completed: incoming
port (2), incoming VCI (22), and outgoing port (3).
e. Destination B receives the setup frame, and if it is ready to receive frames from A, it
assigns a VCI to the incoming frames that come from A, in this case 77. This VCI lets the
destination know that the frames come from A and not from another source.

Acknowledgment
A special frame, called the acknowledgment frame, completes the entries in the switching
tables. Figure 3.31 shows the process.

Fig.3.31 set-up acknowledgement in a Virtual circuit network


a. The destination sends an acknowledgment to switch 3. The acknowledgment carries the
global source and destination addresses. The frame also carries VCI 77, chosen by the
destination as the incoming VCI for frames from A. Switch 3 uses this VCI to complete the
outgoing VCI column for this entry.
b. Switch 3 sends an acknowledgment to switch 2 that contains its incoming VCI in the table,
chosen in the previous step. Switch 2 uses this as the outgoing VCI in the table.
c. Switch 2 sends an acknowledgment to switch 1 that contains its incoming VCI in the table,
chosen in the previous step. Switch 1 uses this as the outgoing VCI in the table.
d. Finally, switch 1 sends an acknowledgment to source A that contains its incoming VCI in
the table, chosen in the previous step.
e. The source uses this as the outgoing VCI for the data frames to be sent to destination B.


Teardown Phase
In this phase, source A, after sending all frames to B, sends a special frame called a teardown
request. Destination B responds with a teardown confirmation frame. All switches delete the
corresponding entry from their tables.

Efficiency
In virtual-circuit switching, all packets belonging to the same source and destination travel
the same path, but the packets may arrive at the destination with different delays if resource
allocation is on demand. There is one big advantage in a virtual-circuit network even if
resource allocation is on demand: the source can check the availability of the resources
without reserving them.

Delay in Virtual-Circuit Networks


In a virtual-circuit network, there is a one-time delay for setup and a one-time delay for
teardown. If resources are allocated during the setup phase, there is no wait time for
individual packets. Figure 3.32 shows the delay for a packet traveling through two switches
in a virtual-circuit network.
The packet is traveling through two switches (routers). The total delay time is
Total delay = 3T + 3τ + setup delay + teardown delay

Fig.3.32 delay in a Virtual circuit network
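As a worked example, the delay formula can be evaluated with assumed values (T, τ, and the setup/teardown delays below are illustrative, not taken from the figure):

```python
# Total delay for a packet crossing two switches in a virtual-circuit
# network: 3 transmission times (source + 2 switches), 3 propagation
# delays (3 links), plus the one-time setup and teardown delays.
# All values below are illustrative assumptions.
T = 0.002            # transmission time per hop, seconds
tau = 0.0005         # propagation delay per link, seconds
setup_delay = 0.004  # one-time connection setup
teardown_delay = 0.003

total_delay = 3 * T + 3 * tau + setup_delay + teardown_delay
print(total_delay)   # approximately 0.0145 seconds
```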


ERROR DETECTION AND CORRECTION

Networks must be able to transfer data from one device to another with acceptable accuracy.
For most applications, a system must guarantee that the data received are identical to the data
transmitted. Any time data are transmitted from one node to the next, they can become
corrupted in passage. Many factors can alter one or more bits of a message. Some
applications require a mechanism for detecting and correcting errors.

Types of Errors
Whenever bits flow from one point to another, they are subject to unpredictable changes
because of interference. This interference can change the shape of the signal.
There are 2 types of errors: single-bit error and burst error.
Single-bit error -> only 1 bit of a given data unit (such as a byte, character, or packet) is
changed from 1 to 0 or from 0 to 1.
Burst error -> 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1. A burst
error is more likely to occur than a single-bit error because the duration of the noise signal is
normally longer than the duration of 1 bit.
Below figure 3.33 shows the effect of a single-bit and a burst error on a data unit.

Fig.3.33 single-bit and burst error


The central concept in detecting or correcting errors is redundancy which means adding
some extra bits to data. These redundant bits are added by the sender and removed by the
receiver. The correction of errors is more difficult than the detection. Error detection only
checks whether any error has occurred. Error correction needs to know the exact number of
bits that are corrupted and their location in the message.
Redundancy is achieved through various coding schemes. The sender adds redundant
bits through a process that creates a relationship between the redundant bits and the actual
data bits. The receiver checks the relationships between the two sets of bits to detect errors.


BLOCK CODING
In block coding, a message is divided into blocks, each of k bits, called data words. r
redundant bits are added to each block to make the length n = k + r. The resulting n-bit
blocks are called codewords. With k bits, 2^k data words can be created, and with n bits, 2^n
codewords can be created. Since n > k, the number of possible codewords is larger than the
number of possible data words. The block coding process is one-to-one; the same data word
is always encoded as the same codeword. Hence, out of the 2^n codewords, 2^n - 2^k
codewords are not used. These codewords are invalid or illegal. If the receiver receives an
invalid codeword, this indicates that the data was corrupted during transmission.
Error Detection
The receiver can detect a change in the original codeword, if the following two conditions are
met:
1. The receiver has a list of valid codewords.
2. The original codeword has changed to an invalid one.

Figure 3.34 shows the role of block coding in error detection. The sender creates codewords
out of data words by using a generator that applies the rules and procedures of encoding. Each
codeword sent to the receiver may change during transmission. If the received codeword is
the same as one of the valid codewords, the word is accepted; the corresponding data word is
extracted for use. If the received codeword is not valid, it is discarded. However, if the
codeword is corrupted during transmission but the received word still matches a valid
codeword, the error remains undetected.

Fig.3.34 Error detection in Block coding


Example: Let us assume that k = 2 and n = 3. Below Table 3.1 shows the list of data words
and codewords.

Table 3.1 A code for error detection


Assume the sender encodes the data word 01 as 011 and sends it to the receiver. Consider the
following cases:
1. The receiver receives 011. It is a valid codeword. The receiver extracts the data word 01
from it.
2. The codeword is corrupted during transmission, and 111 is received (the leftmost bit is
corrupted). This is not a valid codeword and is discarded.
3. The codeword is corrupted during transmission, and 000 is received (the right two bits are
corrupted). This is a valid codeword. The receiver incorrectly extracts the data word 00. Two
corrupted bits have made the error undetectable.

Hence, an error-detecting code can detect only the types of errors for which it is
designed, other types of errors may remain undetected.
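The three cases above can be sketched in a few lines of Python. The codeword table is an assumption consistent with the examples in the text (01 -> 011, with 000 and 110 also valid), following the usual even-parity assignment for k = 2, n = 3:

```python
# Sketch of error detection with a k = 2, n = 3 block code.
# The table below is assumed (even-parity assignment consistent
# with the examples: 01 -> 011, 000 valid, XOR closure holds).
encode = {"00": "000", "01": "011", "10": "101", "11": "110"}
decode = {cw: dw for dw, cw in encode.items()}

def receive(codeword: str):
    # Accept only valid codewords; return None to mean "discard".
    # A corrupted word that still matches a valid codeword
    # (case 3 in the text) goes undetected.
    return decode.get(codeword)

print(receive("011"))  # '01'  (case 1: valid, data word extracted)
print(receive("111"))  # None  (case 2: invalid, discarded)
print(receive("000"))  # '00'  (case 3: two-bit error, undetected)
```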

Hamming Distance
The Hamming distance between two words (of the same size) is the number of differences
between the corresponding bits. The Hamming distance between two words x and y is written
d(x, y). The Hamming distance between the received codeword and the sent codeword is the number
of bits that are corrupted during transmission.
For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in error and the
Hamming distance between the two is d (00000, 01101) = 3.
The Hamming distance can easily be found by applying the XOR operation on the
two words and counting the number of 1s in the result.
The minimum Hamming distance is the smallest Hamming distance between all
possible pairs of codewords. To guarantee the detection of up to s errors in all cases, the
minimum Hamming distance in a block code must be dmin = s + 1.
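The XOR-and-count procedure, and the minimum-distance search over all codeword pairs, can be sketched as follows (the codeword list reproduces the k = 2 code used in the error-detection example):

```python
from itertools import combinations

def hamming_distance(x: str, y: str) -> int:
    """Number of positions at which two equal-length bit strings
    differ (equivalent to XORing the words and counting the 1s)."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("00000", "01101"))  # 3

# Minimum Hamming distance: smallest distance over all pairs
# of codewords (here, the k = 2, n = 3 code from the example).
codewords = ["000", "011", "101", "110"]
d_min = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
print(d_min)  # 2 -> guarantees detection of single-bit errors
```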


Linear Block Codes


A linear block code is a code in which the exclusive OR (XOR) of two valid codewords creates
another valid codeword.
The code in Table 3.1 is a linear block code because the result of XORing any codeword with
any other codeword is a valid codeword. For example, the XORing of the second and third
codewords creates the fourth one.

The minimum distance for a linear block code is the number of 1s in the nonzero valid
codeword with the smallest number of 1s.
Example: In Table 3.1, the numbers of 1s in the nonzero codewords are 2, 2, and 2. So the
minimum Hamming distance is dmin = 2.

Parity-Check Code
This code is a linear block code. In this code, a k-bit data word is changed to an n-bit code
word, where n = k + 1. The extra bit, called the parity bit, is selected to make the total number
of 1s in the codeword even.
The code in Table 3.2 is a parity-check code with k = 4 and n = 5.

Table 3.2 Parity check code


Figure 3.35 shows a possible structure of an encoder (at the sender) and a decoder (at the
receiver) for a simple parity-check code. The encoder uses a generator that takes a copy of a 4-bit
data word (a0, a1, a2, and a3) and generates a parity bit r0. The data word bits and the parity
bit create the 5-bit codeword. The parity bit that is added makes the number of 1s in the
codeword even. This is normally done by adding the 4 bits of the data word (modulo-2); the
result is the parity bit.
r0 = a3 + a2 + a1 + a0 (modulo-2)


Fig. 3.35 encoder and decoder of simple Parity check code


If the number of 1s is even, the result is 0; if the number of 1s is odd, the result is 1. In both
cases, the total number of 1s in the codeword is even. The sender sends the codeword, which
may be corrupted during transmission. The receiver receives a 5-bit word. The checker at the
receiver does the addition over all 5 bits. The result, which is called the syndrome, is just 1 bit.
The syndrome is 0 when the number of 1s in the received codeword is even; otherwise, it is 1.
s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)
The syndrome is passed to the decision logic analyzer. If the syndrome is 0, there is no
detectable error in the received codeword, the data portion of the received codeword is
accepted as the data word. If the syndrome is 1, the data portion of the received codeword is
discarded. The data word is not created.
Note: A parity-check code can detect an odd number of errors.
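A minimal sketch of the even-parity generator and checker described above (function names are illustrative):

```python
def add_parity(dataword: str) -> str:
    """Even-parity encoder: r0 = a3 + a2 + a1 + a0 (modulo-2),
    appended so the codeword has an even number of 1s."""
    r0 = sum(int(b) for b in dataword) % 2
    return dataword + str(r0)

def syndrome(received: str) -> int:
    """Checker: s0 = b3 + b2 + b1 + b0 + q0 (modulo-2).
    0 means no detectable error; 1 means discard."""
    return sum(int(b) for b in received) % 2

cw = add_parity("1011")
print(cw)                  # '10111' (parity bit 1 makes four 1s)
print(syndrome(cw))        # 0 -> accepted
print(syndrome("00111"))   # 1 -> one bit flipped, discarded
```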

CYCLIC CODES
Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a
codeword is cyclically shifted (rotated), the result is another codeword.
For example, if 1011000 is a codeword and by doing cyclically left shift, then 0110001 is
also a codeword.

Cyclic Redundancy Check


A subset of cyclic codes is the cyclic redundancy check (CRC), which is used in
networks such as LANs and WANs.
Figure 3.36 shows design for the encoder and decoder of CRC.


Fig. 3.36 encoder and decoder of CRC


In the encoder, the data word has k bits (k = 4 here) and the codeword has n bits (n = 7 here).
The size of the data word is augmented by adding n - k (3 here) 0s to the right-hand side of the
word. The n-bit result is fed into the generator. The generator uses a divisor of size n - k + 1
(4 here), which is predefined. The generator divides the augmented data word by the divisor
(modulo-2 division). The quotient of the division is discarded and the remainder (r2r1r0) is
appended to the data word to create the codeword.
The decoder receives the codeword. A copy of all n bits is fed to the checker, which is
a replica of the generator. The remainder produced by the checker is a syndrome of n - k (3
here) bits, which is fed to the decision logic analyzer. The analyzer has a simple function. If
the syndrome bits are all 0s, the 4 leftmost bits of the codeword are accepted as the data
word. Otherwise, the 4 bits are discarded (error).

Note: The divisor in a cyclic code is normally called the generator polynomial or simply
the generator.

For example: The encoder takes a data word and augments it with n - k 0s. It then
divides the augmented data word by the divisor, as shown in figure 3.37. In each step, a copy
of the divisor is XORed with the 4 bits of the dividend. The result of the XOR operation
(remainder) is 3 bits (in this case), which is used for the next step after 1 extra bit is pulled
down to make it 4 bits long. When there are no bits left to pull down, we have a result. The 3-


bit remainder forms the check bits (r2, r1, and r0). They are appended to the data word to
create the codeword.

Fig. 3.37 Division in CRC encoder

The decoder does the same division process as the encoder as shown in figure 3.38. The
remainder of the division is the syndrome. If the syndrome is all 0s, then there is no error and
the data word is separated from the received codeword and accepted. Otherwise, everything
is discarded.


Fig. 3.38 Division in the CRC decoder for two cases
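The division process can be sketched in Python. The dataword 1001 and divisor 1011 below are assumptions standing in for the figures, which are not reproduced here; they follow the common k = 4, n = 7 example:

```python
def xor_div(dividend: str, divisor: str) -> str:
    """Modulo-2 long division over bit strings; returns the remainder.
    In each step a copy of the divisor is XORed with the leading bits
    of the dividend, then one extra bit is pulled down."""
    rem = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if rem[0] == '1':   # leading 1: XOR a copy of the divisor
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        if i < len(dividend):
            rem = rem[1:] + [dividend[i]]   # pull the next bit down
        else:
            rem = rem[1:]                   # no bits left: done
    return "".join(rem)

def crc_encode(dataword: str, divisor: str) -> str:
    # Augment with n - k zeros, divide, append the remainder.
    augmented = dataword + "0" * (len(divisor) - 1)
    return dataword + xor_div(augmented, divisor)

cw = crc_encode("1001", "1011")
print(cw)                      # '1001110' (check bits r2r1r0 = 110)
print(xor_div(cw, "1011"))     # syndrome '000' -> no error detected
```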

CHECKSUM
Checksum is an error-detecting technique that can be applied to a message of any length. In
the Internet, the checksum technique is mostly used at the network and transport layers rather
than the data-link layer.
At the source, the message is first divided into m-bit units. The generator then creates
an extra m-bit unit called the checksum, which is sent with the message. At the destination,
the checker creates a new checksum from the combination of the message and the sent
checksum. If the new checksum is all 0s, the message is accepted; otherwise, the message is
discarded as shown in figure 3.39.

Fig. 3.39 checksum


Example:

Suppose the message is a list of five 4-bit numbers. The set of numbers is (7, 11, 12, 0, 6).
The sender adds all five numbers in one’s complement to get the sum = 6. The sender then
complements the result to get the checksum = 9, which is 15 - 6. Note that 6 = (0110)2 and 9
= (1001)2; they are complements of each other. The sender sends the five data numbers and
the checksum (7, 11, 12, 0, 6, 9). If there is no corruption in transmission, the receiver
receives (7, 11, 12, 0, 6, 9) and adds them in one’s complement to get 15. The receiver
complements 15 to get 0. This shows that data have not been corrupted. Figure 3.40 shows
the process.

Fig. 3.40 checksum example


One’s Complement Arithmetic
In this arithmetic, unsigned numbers between 0 and 2^m - 1 can be represented using only m
bits. If a number has more than m bits, the extra leftmost bits are added to the m
rightmost bits (wrapping).
Example: the decimal number 36 in binary is (100100)2. To change it to a 4-bit number, add
the two extra leftmost bits to the right four bits as shown below.
(10)2 + (0100)2 = (0110)2 -> (6)10
Instead of sending 36 as the sum, we can send 6 as the sum (7, 11, 12, 0, 6, 6). The receiver
can add the first five numbers in one’s complement arithmetic. If the result is 6, the numbers
are accepted. Otherwise, they are rejected.
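The wrapping and complementing steps can be sketched for the 4-bit example above (function names are illustrative):

```python
def wrap4(n: int) -> int:
    """One's-complement wrapping into 4 bits: fold the extra
    leftmost bits onto the rightmost four until the value fits."""
    while n > 0xF:
        n = (n >> 4) + (n & 0xF)
    return n

def checksum4(numbers) -> int:
    # Checksum = one's complement of the wrapped sum.
    return 0xF - wrap4(sum(numbers))

data = [7, 11, 12, 0, 6]
cs = checksum4(data)
print(cs)                       # 9, as in the example (15 - 6)
print(wrap4(sum(data + [cs])))  # 15 -> its complement is 0, accepted
```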

