Module 3
In real life, we have links with limited bandwidths. The wise use of these bandwidths has
been one of the main challenges of electronic communications. However, the meaning of
wise may depend on the application. Sometimes it is necessary to combine several low-
bandwidth channels to make use of one channel with a larger bandwidth. Sometimes it is
necessary to expand the bandwidth of a channel to achieve goals such as privacy and
antijamming. There are two broad categories of bandwidth utilization: multiplexing and
spectrum spreading. In multiplexing, the main goal is efficiency; it combines several channels
into one. In spectrum spreading, the goals are privacy and antijamming.
MULTIPLEXING:
The set of techniques that allow the simultaneous transmission of multiple signals across a
single data link is called Multiplexing.
In a multiplexed system, n lines share the bandwidth of one link. Figure 3.1 shows the basic
format of a multiplexed system. At the sending side, many lines direct their transmission
streams to a multiplexer (MUX), which combines them into a single stream (many-to-one).
At the receiving end, that stream is fed into a demultiplexer (DEMUX), which
separates the stream back into its component transmissions (one-to-many) and directs them to
their corresponding lines. In the figure, the word link refers to the physical path. The word
channel refers to the portion of a link that carries a transmission between a given pair of
lines. One link can have many (n) channels.
Fig.3.3 FDM
Multiplexing Process
Figure 3.4 is a conceptual illustration of the multiplexing process. Each source generates a
signal of a similar frequency range. Inside the multiplexer, these similar signals modulate
different carrier frequencies (f1, f2, and f3). The resulting modulated signals are then
combined into a single composite signal that is sent out over a media link that has enough
bandwidth to accommodate it.
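As a rough numerical sketch of this process, each source signal can modulate its own carrier before the modulated signals are summed into one composite. The carrier frequencies, sampling rate, and modulation index below are hypothetical, chosen only for illustration:

```python
import math

def fdm_compose(baseband_hz, carrier_hz, t):
    """Sum of several channels, each a baseband tone amplitude-modulated
    onto its own carrier (f1, f2, f3 in the figure)."""
    composite = 0.0
    for fb, fc in zip(baseband_hz, carrier_hz):
        # simple AM: (1 + m(t)) * cos(2*pi*fc*t), modulation index 0.5 (assumed)
        m = 0.5 * math.cos(2 * math.pi * fb * t)
        composite += (1 + m) * math.cos(2 * math.pi * fc * t)
    return composite

# three similar low-frequency sources, carried at 20, 24, and 28 kHz
# (hypothetical values), sampled at 200 kHz
basebands = [300.0, 300.0, 300.0]
carriers = [20e3, 24e3, 28e3]
samples = [fdm_compose(basebands, carriers, n / 200e3) for n in range(200)]
print(len(samples), max(samples))  # peak is bounded by 4.5 (three channels at 1.5 each)
```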
Demultiplexing Process
The demultiplexer uses a series of filters to decompose the multiplexed signal into its
constituent component signals. The individual signals are then passed to a demodulator that
separates them from their carriers and passes them to the output lines. Figure 3.5 is
a conceptual illustration of the demultiplexing process.
Fig.3.6 WDM
The basic idea of WDM is very simple. It combines multiple light sources into one single
light at the multiplexer and does the reverse at the demultiplexer. The combining and splitting
of light sources are easily handled by a prism. Using this technique, a multiplexer can be
made to combine several input beams of light, each containing a narrow band of frequencies,
into one output beam of a wider band of frequencies. A demultiplexer can also be made to
reverse the process. Figure 3.7 shows the concept.
Multilevel Multiplexing
Multilevel multiplexing is a technique used when the data rate of an input line is a multiple of
others.
For example, Figure 3.11 shows two inputs of 20 kbps and three inputs of 40 kbps. The first
two input lines can be multiplexed together to provide a data rate equal to the last three. A
second level of multiplexing can create an output of 160 kbps.
Multiple-Slot Allocation
It allots more than one slot in a frame to a single input line.
For example, an input line may have a data rate that is a multiple of another's. In Figure 3.12,
the input line with a 50-kbps data rate can be given two slots in the output. A demultiplexer
is then inserted in the line to make two inputs out of one.
Pulse Stuffing
Sometimes the bit rates of sources are not integer multiples of each other. Therefore, neither
of the above two techniques can be applied. One solution is to make the highest input data
rate the dominant data rate and then add dummy bits to the input lines with lower rates. This
will increase their rates. This technique is called pulse stuffing, bit padding, or bit stuffing.
The idea is shown in Figure 3.13. The input with a data rate of 46 kbps is pulse-stuffed to
increase the rate to 50 kbps.
Frame Synchronizing
Synchronization between the multiplexer and demultiplexer is a major issue. If the
multiplexer and the demultiplexer are not synchronized, a bit belonging to one channel may
be received by the wrong channel. For this reason, one or more synchronization bits are
usually added to the beginning of each frame. These bits, called framing bits, allow the
demultiplexer to synchronize with the incoming stream so that it can separate the time slots
accurately. This synchronization information consists of 1 bit per frame, alternating between
0 and 1, as shown in Figure 3.14.
Statistical TDM
In synchronous TDM, each input has a reserved slot in the output frame. Hence it can be
inefficient if some input lines have no data to send. This drawback of synchronous TDM can
be overcome by statistical TDM, in which slots are dynamically allocated to improve
bandwidth efficiency. In statistical multiplexing, the number of slots in each frame is less
than the number of input lines. The multiplexer checks each input line in round robin fashion;
it allocates a slot for an input line if the line has data to send; otherwise, it skips the line and
checks the next line.
Figure 3.15 shows a synchronous and a statistical TDM example.
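The round-robin scan described above can be sketched as follows; the line contents and frame size are hypothetical:

```python
def statistical_tdm_frame(input_queues, slots_per_frame):
    """Scan input lines round-robin; fill a slot only when a line has data.
    Each slot carries (line_address, data) because slots are not preassigned."""
    frame = []
    for line, queue in enumerate(input_queues):
        if len(frame) == slots_per_frame:
            break
        if queue:                      # skip lines with nothing to send
            frame.append((line, queue.pop(0)))
    return frame

# five input lines, only three with data; the frame has three slots
queues = [["A1"], [], ["C1"], [], ["E1"]]
frame = statistical_tdm_frame(queues, slots_per_frame=3)
print(frame)  # [(0, 'A1'), (2, 'C1'), (4, 'E1')]
```

Note how no slot is wasted on the empty lines 1 and 3, which is exactly the efficiency gain over synchronous TDM.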
Addressing
An output slot in synchronous TDM is totally occupied by data; hence there is no need for
addressing. Synchronization and preassigned relationships between the inputs and outputs
serve as an address.
In statistical TDM, a slot needs to carry data as well as the address of the destination. Here,
there is no fixed relationship between the inputs and outputs because there are no preassigned
or reserved slots. It is necessary to include the address of the receiver inside each slot to show
where it is to be delivered. The addressing in its simplest form can be n bits to define N
different output lines with n = log2 N.
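A quick sketch of this address-size rule (the line counts below are arbitrary):

```python
import math

def address_bits(n_lines):
    # n bits can distinguish 2**n outputs, so n = ceil(log2(N))
    return max(1, math.ceil(math.log2(n_lines)))

for n_lines in (2, 4, 8, 10):
    print(n_lines, "lines need", address_bits(n_lines), "address bits")
```

For line counts that are not powers of 2 (such as 10), the bit count is rounded up.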
Slot Size
Since a slot carries both data and an address in statistical TDM, the ratio of the data size to
address size must be reasonable to make transmission efficient.
No Synchronization Bit
There is another difference between synchronous and statistical TDM, but this time it is at the
frame level. The frames in statistical TDM need not be synchronized, so we do not need
synchronization bits.
Bandwidth
In statistical TDM, the capacity of the link is normally less than the sum of the capacities of
each channel. The designers of statistical TDM define the capacity of the link based on the
statistics of the load for each channel.
SPREAD SPECTRUM:
Spread spectrum is designed to be used in wireless applications (LANs and WANs). It also
combines signals from different sources to fit into a larger bandwidth, but the goals are
privacy and antijamming. To achieve these goals, spread spectrum techniques add
redundancy; they spread the original spectrum needed for each station. If the required
bandwidth for each station is B, spread spectrum expands it to Bss, such that Bss >> B. The
expanded bandwidth allows the source to wrap its message in a protective envelope for a
more secure transmission.
Figure 3.16 shows the idea of spread spectrum. Spread spectrum achieves its goals through
two principles:
1. The bandwidth allocated to each station needs to be, by far, larger than what is
needed. This allows redundancy.
2. The expanding of the original bandwidth B to the bandwidth Bss must be done by a
process that is independent of the original signal. In other words, the spreading
process occurs after the signal is created by the source.
After the signal is created by the source, the spreading process uses a spreading code and
spreads the bandwidth. The above figure shows the original bandwidth B and the spread
bandwidth Bss. The spreading code is a series of numbers that look random but follow a pattern.
There are two techniques to spread the bandwidth: Frequency hopping spread spectrum
(FHSS) and Direct sequence spread spectrum (DSSS).
Fig.3.17 FHSS
Figure 3.19 shows how the signal hops around from carrier to carrier. Assume the required
bandwidth of the original signal is 100 kHz.
It can be shown that this scheme can accomplish the previously mentioned goals. If there are
many k-bit patterns and the hopping period is short, a sender and receiver can have privacy.
The scheme also has an antijamming effect. A malicious sender may be able to send noise to
jam the signal for one hopping period (randomly), but not for the whole period.
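The hopping itself can be sketched as a mapping from each k-bit code to a carrier frequency; the base frequency and channel spacing below are hypothetical:

```python
def fhss_carriers(hop_codes, base_hz=200e3, channel_bw_hz=100e3):
    """Map each k-bit code from the pseudorandom code generator to a
    carrier frequency; the signal occupies a different 100 kHz band
    each hopping period."""
    return [base_hz + code * channel_bw_hz for code in hop_codes]

# a repeating pattern of 8 hops for k = 3 (a frequency table of 2**3 entries)
pattern = [1, 7, 2, 0, 6, 3, 5, 4]
print(fhss_carriers(pattern))
```

Because the pattern covers all 8 channels before repeating, a jammer that attacks one channel corrupts at most one hopping period per cycle.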
Fig.3.20 DSSS
For example, let us consider the sequence used in a wireless LAN, the famous Barker
sequence, where n is 11. Assume that the original signal and the chips in the chip generator
use polar NRZ encoding. Figure 3.21 shows the chips and the result of multiplying the
original data by the chips to get the spread signal.
In Figure 3.21, the spreading code is 11 chips having the pattern 10110111000 (in this case).
If the original signal rate is N, the rate of the spread signal is 11N. The spread signal can
provide privacy if the intruder does not know the code. It can also provide immunity against
interference if each station uses a different code.
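A minimal sketch of this spreading and despreading, using the 11-chip Barker pattern quoted above with polar NRZ levels (+1/-1):

```python
# Barker-11 chip sequence used in wireless LAN DSSS, in polar NRZ (+1/-1);
# the pattern 10110111000 maps 1 -> +1 and 0 -> -1
BARKER_11 = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]

def dsss_spread(data_bits):
    """Multiply each polar NRZ data bit by the 11 chips; the spread
    rate is 11 times the data rate."""
    spread = []
    for bit in data_bits:
        level = 1 if bit == 1 else -1
        spread.extend(level * chip for chip in BARKER_11)
    return spread

def dsss_despread(spread):
    """Correlate each 11-chip block against the code to recover the bit."""
    bits = []
    for i in range(0, len(spread), 11):
        corr = sum(c * s for c, s in zip(BARKER_11, spread[i:i + 11]))
        bits.append(1 if corr > 0 else 0)
    return bits

data = [1, 0, 1]
spread = dsss_spread(data)
print(len(spread))            # 33: the spread rate is 11N
print(dsss_despread(spread))  # [1, 0, 1]
```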
SWITCHING
Switching is a solution for connecting multiple devices in a network for one-to-one
communication. A switched network consists of a series of interlinked nodes, called switches.
Switches are devices capable of creating temporary connections between two or more devices
linked to the switch. Figure 3.22 shows a switched network.
The end systems (communicating devices) are labeled A, B, C, D, and so on, and the
switches are labeled I, II, III, IV, and V. Each switch is connected to multiple links.
There are three methods of switching: circuit switching, packet switching and message
switching. Packet switching can further be divided into two subcategories—virtual circuit
approach and datagram approach.
Circuit-switched network:
A circuit-switched network is made of a set of switches connected by physical links, in which
each link is divided into n channels. Circuit switching takes place at the physical layer. A connection between two
stations is a dedicated path made of one or more links. In circuit switching, the resources
need to be reserved during the setup phase; the resources remain dedicated for the entire
duration of data transfer until the teardown phase.
Setup Phase
Before the two parties can communicate, a dedicated circuit needs to be established. The end
systems are normally connected through dedicated lines to the switches, so connection setup
means creating dedicated channels between the switches. For example, in Figure 3.23 when
system A needs to connect to system M, it sends a setup request that includes the address of
system M, to switch I. Switch I finds a channel between itself and switch IV that can be
dedicated for this purpose. Switch I then sends the request to switch IV, which finds a
dedicated channel between itself and switch III. Switch III informs system M of system A's
intention at this time.
In the next step in making a connection, system M sends an acknowledgment in the
opposite direction to system A. Only after system A receives this acknowledgment is the
connection established.
The end-to-end addressing is required for creating a connection between the two end systems.
These can be the addresses of the computers assigned by the administrator in a TDM
network, or telephone numbers in an FDM network.
Data-Transfer Phase
After the establishment of the dedicated circuit (channels), the two parties can transfer data.
Teardown Phase
When one of the parties needs to disconnect, a signal is sent to each switch to release
the resources.
Efficiency
The circuit-switched networks are not as efficient as the other two types of networks because
resources are allocated during the entire duration of the connection. These resources are
unavailable to other connections.
Delay
In circuit-switched network the delay is minimal. During data transfer the data are not
delayed at each switch; the resources are allocated for the duration of the connection. Figure
3.24 shows the idea of delay in a circuit-switched network when only two switches are
involved.
As figure 3.24 shows, there is no waiting time at each switch. The total delay is due to the
time needed to create the connection, transfer data, and disconnect the circuit.
The delay caused by the setup = the propagation time of the source computer's request +
the request signal transfer time + the propagation time of the acknowledgment from the
destination computer + the signal transfer time of the acknowledgment.
The delay due to data transfer = the propagation time + the data transfer time.
The delay due to teardown = the time needed to tear down the circuit.
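The delay components above can be combined in a short calculation; all times below are hypothetical, in milliseconds:

```python
def circuit_switched_delay(prop_time, request_xfer, ack_xfer,
                           data_xfer, teardown_time):
    """Total delay = setup delay + data-transfer delay + teardown delay.
    Setup needs one round trip: request out, acknowledgment back."""
    setup = prop_time + request_xfer + prop_time + ack_xfer
    transfer = prop_time + data_xfer
    return setup + transfer + teardown_time

# hypothetical times: 5 ms propagation, 1 ms request/ack transfer,
# 40 ms of data transfer, 2 ms to tear down
print(circuit_switched_delay(5, 1, 1, 40, 2))  # 59
```

Note that the data-transfer term contains no per-switch waiting time, which is the point made in Figure 3.24.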
Packet-switched network
In packet-switched network, messages are divided into packets of fixed or variable size.
The size of the packet is determined by the network and the governing protocol.
In a packet-switched network, there is no resource reservation, resources are allocated on
demand.
There are two types of packet-switched networks: datagram networks and virtual circuit
networks.
Datagram Networks
In a datagram network, each packet is treated independently of all others. Packets in this
approach are referred to as datagrams.
Datagram switching is normally done at the network layer. Figure 3.25 shows how the
datagram approach is used to deliver four packets from station A to station X. The switches
in a datagram network are traditionally referred to as routers.
Routing Table
In this type of network, each switch (or packet switch) has a routing table which is based on
the destination address. The routing tables are dynamic and are updated periodically. The
destination addresses and the corresponding forwarding output ports are recorded in the
tables. Figure 3.26 shows the routing table for a switch.
Destination Address
Every packet in a datagram network carries a header that contains, among other information,
the destination address of the packet. When the switch receives the packet, it examines the
destination address and consults the routing table to find the corresponding port through which
the packet should be forwarded. This address remains the same during the entire journey of
the packet.
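A routing table of this kind can be sketched as a simple destination-to-port map; the addresses and port numbers below are hypothetical:

```python
# a switch's routing table: destination address -> forwarding output port
routing_table = {"X": 2, "Y": 3, "Z": 1}

def forward(packet):
    """Look up the destination address carried in the packet header and
    return the output port; the address itself never changes en route."""
    return routing_table[packet["dst"]]

packet = {"dst": "X", "payload": "hello"}
print(forward(packet))  # 2
```

Because the table is consulted per packet, two packets of the same message may leave through different ports if the table has been updated in between.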
Efficiency
The efficiency of a datagram network is better than that of a circuit-switched network,
because resources are allocated only when there are packets to be transferred.
Delay
Delay in a datagram network is greater than in a virtual-circuit network. Each packet may
experience a wait at a switch before it is forwarded. Since not all packets in a message
necessarily travel through the same switches, the delay is not uniform for the packets of a
message. Figure 3.27 gives an example of delay in a datagram network for one packet.
Virtual-Circuit Networks
Figure 3.28 is an example of a virtual-circuit network. The network has switches that allow
traffic from sources to destinations. A source or destination can be a computer, packet switch,
bridge, or any other device that connects other networks.
Addressing
In a virtual-circuit network, two types of addressing are involved: global and local (virtual-
circuit identifier).
Global Addressing
A source or a destination needs to have a global address—an address that can be unique in
the scope of the network or internationally if the network is part of an international network.
Virtual-Circuit Identifier
The identifier that is used for data transfer is called the virtual-circuit identifier (VCI) or the
label. A VCI is a small number that has only switch scope; it is used by a frame between two
switches. When a frame arrives at a switch, it has a VCI; when it leaves, it has a different
VCI. Figure 3.29 shows how the VCI in a data frame changes from one switch to another.
Three Phases
A source and destination need to go through three phases in a virtual-circuit network: setup,
data transfer, and teardown.
Setup Phase
In the setup phase, a switch creates an entry for a virtual circuit. The source and destination
use their global addresses to help switches make table entries for the connection. For
example, suppose source A needs to create a virtual circuit to B. Two steps are required: the
setup request and the acknowledgment.
Setup Request
A setup request frame is sent from the source to the destination. Figure 3.30 shows the
process.
a. Source A sends a setup frame to switch 1.
b. Switch 1 receives the setup request frame. It knows that a frame going from A to B
goes out through port 3. The switch can fill only three of the four columns: it assigns the
incoming port (1), chooses an available incoming VCI (14), and records the outgoing port (3).
The fourth column, the outgoing VCI, will be filled in during the acknowledgment step. The
switch then forwards the frame through port 3 to switch 2.
c. Switch 2 receives the setup request frame. The same events happen here as at switch 1;
three columns of the table are completed: in this case, incoming port (1), incoming VCI (66),
and outgoing port (2).
d. Switch 3 receives the setup request frame. Again, three columns are completed: incoming
port (2), incoming VCI (22), and outgoing port (3).
e. Destination B receives the setup frame, and if it is ready to receive frames from A, it
assigns a VCI to the incoming frames that come from A, in this case 77. This VCI lets the
destination know that the frames come from A, and no other sources.
Acknowledgment
A special frame, called the acknowledgment frame, completes the entries in the switching
tables. Figure 3.31 shows the process.
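Once the tables are complete, the VCI swapping during data transfer can be sketched as follows, using the VCIs from the setup example (14, 66, 22, 77). Keying each table on the incoming VCI alone is a simplification; a real table is keyed on (incoming port, incoming VCI):

```python
# per-switch entries: incoming VCI -> (outgoing port, outgoing VCI)
switch_tables = [
    {14: (3, 66)},   # switch 1: frame from A arrives with VCI 14
    {66: (2, 22)},   # switch 2
    {22: (3, 77)},   # switch 3: VCI 77 was assigned by destination B
]

def send_frame(vci):
    """Each switch replaces the incoming VCI with the outgoing one."""
    hops = [vci]
    for table in switch_tables:
        _out_port, vci = table[vci]
        hops.append(vci)
    return hops

print(send_frame(14))  # [14, 66, 22, 77]
```

The label changes hop by hop, which is why a VCI only needs switch scope rather than network scope.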
Teardown Phase
In this phase, source A, after sending all frames to B, sends a special frame called a teardown
request. Destination B responds with a teardown confirmation frame. All switches delete the
corresponding entry from their tables.
Efficiency
In virtual circuit switching, all packets belonging to the same source and destination travel
the same path, but the packets may arrive at the destination with different delays if resource
allocation is on demand. There is one big advantage in a virtual-circuit network even if
resource allocation is on demand: the source can check the availability of the resources
without reserving them.
ERROR DETECTION AND CORRECTION:
Networks must be able to transfer data from one device to another with acceptable accuracy.
For most applications, a system must guarantee that the data received are identical to the data
transmitted. Any time data are transmitted from one node to the next, they can become
corrupted in passage. Many factors can alter one or more bits of a message. Some
applications require a mechanism for detecting and correcting errors.
Types of Errors
Whenever bits flow from one point to another, they are subject to unpredictable changes
because of interference. This interference can change the shape of the signal.
There are 2 types of errors: single-bit error and burst error.
Single-bit error -> only 1 bit of a given data unit (such as a byte, character, or packet) is
changed from 1 to 0 or from 0 to 1.
Burst error -> 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1. Burst
error is more likely to occur than a single bit error because the duration of the noise signal is
normally longer than the duration of 1 bit.
Below figure 3.33 shows the effect of a single-bit and a burst error on a data unit.
BLOCK CODING
In block coding, A message is divided into blocks, each of ‘k’ bits, called data words. ‘r’
redundant bits are added to each block to make the length n = k + r. The resulting n-bit
blocks are called codewords. With k bits, a combination of 2^k data words can be created,
and with n bits, a combination of 2^n codewords can be created. Since n > k, the number of
possible codewords is larger than the number of possible data words. The block coding
process is one-to-one; the same data word is always encoded as the same codeword. Hence,
out of 2^n codewords, 2^n - 2^k codewords are not used. These codewords are invalid or illegal. If
the receiver receives an invalid codeword, this indicates that the data was corrupted during
transmission.
Error Detection
The receiver can detect a change in the original codeword if the following two conditions are
met:
1. The receiver has a list of valid codewords.
2. The original codeword has changed to an invalid one.
Figure 3.34 shows the role of block coding in error detection. The sender creates codewords
out of data words by using a generator that applies the rules and procedures of encoding. Each
codeword sent to the receiver may change during transmission. If the received codeword is
the same as one of the valid codewords, the word is accepted; the corresponding data word is
extracted for use. If the received codeword is not valid, it is discarded. However, if the
codeword is corrupted during transmission but the received word still matches a valid
codeword, the error remains undetected.
Example: Let us assume that k = 2 and n = 3. Table 3.1 below shows the list of data words
and codewords.
Table 3.1 Data words and codewords (k = 2, n = 3)
Data word    Codeword
00           000
01           011
10           101
11           110
Hence, an error-detecting code can detect only the types of errors for which it is
designed; other types of errors may remain undetected.
Hamming distance
The Hamming distance between two words (of the same size) is the number of differences
between the corresponding bits. The Hamming distance between two words x and y is
denoted d(x, y). The Hamming distance between the received codeword and the sent codeword is the number
of bits that are corrupted during transmission.
For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in error and the
Hamming distance between the two is d (00000, 01101) = 3.
The Hamming distance can easily be found by applying the XOR operation on the
two words and counting the number of 1s in the result.
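This XOR-and-count computation can be sketched in a few lines:

```python
def hamming_distance(x, y):
    """XOR the two words and count the 1s in the result."""
    return bin(x ^ y).count("1")

# codeword 00000 sent, 01101 received: 3 bits corrupted
print(hamming_distance(0b00000, 0b01101))  # 3
```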
The minimum Hamming distance is the smallest Hamming distance between all
possible pairs of codewords. To guarantee the detection of up to s errors in all cases, the
minimum Hamming distance in a block code must be dmin = s + 1.
The minimum distance for linear block codes is the number of 1s in the nonzero valid
codeword with the smallest number of 1s.
Example: In Table 3.1, the numbers of 1s in the nonzero codewords are 2, 2, and 2. So the
minimum Hamming distance is dmin = 2.
Parity-Check Code
This code is a linear block code. In this code, a k-bit data word is changed to an n-bit code
word, where n = k + 1. The extra bit, called the parity bit, is selected to make the total number
of 1s in the codeword even.
The code in Table 3.2 is a parity-check code with k = 4 and n = 5.
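A sketch of even-parity encoding and checking for k = 4, n = 5:

```python
def even_parity_codeword(dataword):
    """Append one parity bit chosen so the total number of 1s is even
    (k = 4, n = 5, as in the code of Table 3.2)."""
    parity = sum(dataword) % 2
    return dataword + [parity]

def parity_check(codeword):
    """The syndrome is 0 when the count of 1s is even: accept; else discard."""
    return sum(codeword) % 2 == 0

cw = even_parity_codeword([1, 0, 1, 1])
print(cw)                 # [1, 0, 1, 1, 1]
print(parity_check(cw))   # True
cw[0] ^= 1                # flip one bit: the single-bit error is detected
print(parity_check(cw))   # False
```

Flipping any two bits would restore even parity, which is why this code guarantees detection of single-bit errors only (dmin = 2 = s + 1 with s = 1).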
CYCLIC CODES
Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a
codeword is cyclically shifted (rotated), the result is another codeword.
For example, if 1011000 is a codeword, then cyclically left-shifting it gives 0110001, which
is also a codeword.
Note: The divisor in a cyclic code is normally called the generator polynomial or simply
the generator.
For example: the encoder takes a data word and augments it with n - k 0s. It then
divides the augmented data word by the divisor, as shown in figure 3.37. In each step, a copy
of the divisor is XORed with the 4 bits of the dividend. The result of the XOR operation
(remainder) is 3 bits (in this case), which is used for the next step after 1 extra bit is pulled
down to make it 4 bits long. When there are no bits left to pull down, we have a result. The 3-
bit remainder forms the check bits (r2, r1, and r0). They are appended to the data word to
create the codeword.
The decoder does the same division process as the encoder as shown in figure 3.38. The
remainder of the division is the syndrome. If the syndrome is all 0s, then there is no error and
the data word is separated from the received codeword and accepted. Otherwise, everything
is discarded.
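The mod-2 division used by both encoder and decoder can be sketched as follows; the dataword 1001 and generator 1011 are illustrative values:

```python
def crc_remainder(bits, divisor):
    """Binary (mod-2) division: XOR a copy of the divisor into the
    dividend wherever the leading bit is 1; the last r bits left over
    are the remainder (the check bits, or the syndrome at the decoder)."""
    bits = bits[:]                       # work on a copy
    r = len(divisor) - 1
    for i in range(len(bits) - r):
        if bits[i] == 1:
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-r:]

dataword = [1, 0, 0, 1]
divisor = [1, 0, 1, 1]                   # generator x^3 + x + 1
augmented = dataword + [0, 0, 0]         # append n - k = 3 zeros
check = crc_remainder(augmented, divisor)
codeword = dataword + check
print(codeword)                          # [1, 0, 0, 1, 1, 1, 0]
# decoder: divide the received codeword; an all-0s syndrome means no error
print(crc_remainder(codeword, divisor))  # [0, 0, 0]
```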
CHECKSUM
Checksum is an error-detecting technique that can be applied to a message of any length. In
the Internet, the checksum technique is mostly used at the network and transport layers rather
than the data-link layer.
At the source, the message is first divided into m-bit units. The generator then creates
an extra m-bit unit called the checksum, which is sent with the message. At the destination,
the checker creates a new checksum from the combination of the message and sent
checksum. If the new checksum is all 0s, the message is accepted; otherwise, the message is
discarded as shown in figure 3.39.
Example:
Suppose the message is a list of five 4-bit numbers. The set of numbers is (7, 11, 12, 0, 6).
The sender adds all five numbers in one’s complement to get the sum = 6. The sender then
complements the result to get the checksum = 9, which is 15 - 6. Note that 6 = (0110)2 and 9
= (1001)2; they are complements of each other. The sender sends the five data numbers and
the checksum (7, 11, 12, 0, 6, 9). If there is no corruption in transmission, the receiver
receives (7, 11, 12, 0, 6, 9) and adds them in one's complement to get 15. The receiver then
complements 15 to get 0, which shows that the data have not been corrupted. Figure 3.40 shows
the process.
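The one's-complement arithmetic of this example can be sketched directly:

```python
def ones_complement_checksum(numbers, bits=4):
    """Add the numbers with wraparound carry (one's-complement sum),
    then complement the result."""
    mask = (1 << bits) - 1
    total = 0
    for n in numbers:
        total += n
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return mask - total                            # one's complement

data = [7, 11, 12, 0, 6]
checksum = ones_complement_checksum(data)
print(checksum)  # 9: the sum is 6, and 15 - 6 = 9
# receiver: the data plus the checksum complement to 0 when uncorrupted
print(ones_complement_checksum(data + [checksum]))  # 0
```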