TCS Full
NETWORKS I
MR. ADEBOWALE Q R
WEEK ONE
INTRODUCTION TO LAN AND WAN
Wireless
NIC
USERS
WEBSITES
SERVER
PARTS OF NETWORK
HOW THE INTERNET WORKS
CLIENTS AND HOSTS
PROTOCOLS
TOPOLOGY / LAYOUT
BUS
CONSIDERING THE COST OF EQUIPMENT
EASE OF MAINTENANCE
TERMINATORS MUST BE USED AT THE END TERMINALS OF THE CABLE SO
THAT THE SIGNAL PROPAGATING ON THE MEDIUM IS NOT REFLECTED
BACK, WHICH WOULD CREATE DUPLICATES OF INFORMATION.
ADVANTAGES: COST EFFECTIVE
DISADVANTAGES
THE CABLE BREAKS VERY EASILY, AND A SINGLE BREAK BRINGS DOWN THE WHOLE NETWORK.
USAGE:
DECENT FOR SMALL HOMES OR OFFICES
RING TOPOLOGY
IT INTERCONNECTS ALL NODES ON THE NETWORK IN A RING FORMAT.
Advantages: It can accommodate more data flow.
Additional components do not affect the performance of the
network.
Each packet of data must pass through all the computers between
source and destination.
Even when the load on the network increases, its performance is
better than that of bus topology.
WEEK TWO
SWITCHING TECHNIQUE
Many devices on a network are not directly
connected to one another, yet they need to
communicate. The technique used to exchange
information between a source and a destination
across such a network is known as a SWITCHING
TECHNIQUE.
This technique is broadly divided into two parts:
Circuit and Packet Switching
CIRCUIT SWITCHING
Why Circuit Switching
Switched Communication Network
Circuit Switching Fundamentals:
Advantages and Disadvantages
Switching Concepts:
Space division switching :
Crossbar switches
Time division switching
Routing in circuit switched networks
Signaling in circuit switched networks
LEARNING OUTCOMES
On completion, the student will be able to:
Understand the need for circuit switching
Specify the components of a switched communication
network
Understand how circuit switching takes place
Understand how switching takes place using space-division
and time-division switching
Understand how routing is performed
Understand how signaling is performed
INTRODUCTION
Limitations of crossbar switch
The number of crosspoints grows with the square of the number of attached stations,
which is costly for a large switch.
The failure of a crosspoint prevents connection between the two devices whose lines
intersect at that crosspoint.
The crosspoints are inefficiently utilized:
only a small fraction are engaged even if all of the attached devices are active.
The solution is to build multistage space-division switches.
Multistage Switches
There is more than one path through the
network to connect two endpoints, thereby
increasing reliability.
Multistage switches may lead to blocking
The problem may be tackled by increasing
the number or size of the intermediate
switches, which also increases the cost.
Blocking and Non blocking networks
An important characteristic of a circuit-switched node is whether it is blocking or non-
blocking.
A blocking network is one in which the node is unable to connect two stations because all
possible paths between them are already in use.
A non-blocking network permits all stations to be connected at once and grants all
possible connection requests as long as the called party is free.
For a network that supports voice only, a blocking configuration may be acceptable
because most calls are of short duration.
But, for data applications where a connection can stay active for hours, non-blocking
configuration is desirable
Time Division Switching
Both voice and data can be transmitted using digital signals.
All modern circuit switches use digital time-division
multiplexing (TDM) technique for establishing and maintaining
circuits.
Synchronous TDM allows multiple low-speed bit streams to
share a high-speed line.
A set of inputs is sampled in a round robin manner. The samples
are organized serially into slots (channels) to form a recurring
frame of slots
During successive time slots, different I/O pairings are enabled,
allowing a number of connections to be carried over the shared
bus.
Time Division Switching Continues…
To keep up with the input lines, the data rate on the bus must be high enough so that
the slots recur sufficiently frequently.
For 100 full-duplex lines at 19.2 kbps, the data rate on the bus must be greater
than 1.92 Mbps.
The source-destination pairs corresponding to all active connections are stored in the control
memory.
Thus, the slots need not specify the source and destination addresses.
Routing in Circuit-Switched Networks
In large circuit-switched networks, connections
often require a path through more than one
switch.
Basic objective: Efficiency and Resilience
Two basic approaches:
Static
Dynamic
Static Routing
Routing function in public switched telecommunication
networks (PSTN) has been traditionally quite simple and
static.
Switches are organized as a tree structure.
To add some resilience to the network, additional high-
usage trunks are added that cut across the tree structure to
connect exchanges with high volumes of traffic between
them. This provides some redundancy.
Limitations
Cannot adapt to changing conditions
Leads to congestion in case of failure
Dynamic Routing
To overcome the limitations of static routing and to cope with growing demands of
users, all providers presently use a dynamic approach.
Routing decisions are influenced by current traffic conditions.
Switching nodes have a peer relationship with each other rather than a hierarchical
one.
Routing is more complex and more flexible
Two techniques:
Alternate Routing
Adaptive Routing
Alternate Routing Approach
The possible routes to be used between two end
offices are predetermined.
It is the responsibility of the originating switch
to select the appropriate route for each call
In practice, usually a different set of pre-
planned routes is used for different time periods
Takes advantage of different traffic patterns in
different time zones and different times of day.
Adaptive Routing Approach
It is designed to enable switches to react to changing
traffic patterns on the network.
Greater management overhead (switches must
exchange information).
Has the potential for more effectively optimizing the
use of network resources
Example: Dynamic traffic management
A central controller collects data at 10-second
intervals to determine preferred alternate routes.
Control Signaling
Apart from routing, the switch nodes must exchange control
signaling in order to manage the network and to establish,
maintain and terminate calls.
Signaling classifications:
Supervisory: indicates the availability of resources
Address: carries the telephone number or address to be reached
Call-information: indicates whether the called party is busy or not
Network management: used for maintenance and
termination
Signaling Techniques
In-channel
In-band: Same band of frequencies used by voice
signals are used to transmit control signals.
Out-of-band: Uses different part of the frequency band
but uses the same facilities as the voice signal
Common-channel
Dedicated signaling channels are used to transmit control
signals and are common to a number of voice
channels
Review Questions
What are the three steps involved in data communication through circuit switching?
Mention the key advantages and disadvantages of circuit switching technique.
Why is data communication through circuit switching not efficient?
Compare the performance of a space-division single-stage switch with that of a multi-stage switch.
Distinguish between in-channel and common-channel signaling techniques used in
circuit switched networks.
TELECOMMUNICATION AND NETWORKS I
CIRCUIT SWITCHING & PACKET SWITCHING NETWORKS
WEEK THREE
PACKET SWITCHING
When a message of large size is sent through the network, it monopolizes
the link and the storage, which reduces the efficiency of the network.
The solution to this is Packet Switching.
PACKET SWITCHING TECHNIQUES
I. Bit: The most basic unit of information in a digital computer is called a bit, which is
a contraction of binary digit.
II. Byte: In 1964, the designers of the IBM System/360 mainframe computer
established a convention of using groups of 8 bits as the basic unit of addressable
computer storage. They called this collection of 8 bits a byte.
III. Word: Computer words consist of two or more adjacent bytes that are
sometimes addressed and almost always are manipulated collectively. The word size
represents the data size that is handled most efficiently by a particular architecture.
Words can be 16 bits, 32 bits, 64 bits.
IV. Nibbles: Eight-bit bytes can be divided into two 4-bit halves called nibbles.
Radix (or Base): The general idea behind positional numbering systems is that
a numeric value is represented through increasing powers of a radix (or base).
Representing natural numbers (positive integers, commonly called unsigned integers
in CS) is very easy. It is exactly the same as with numbers in the decimal system that
humans use: we simply place the digits horizontally one after the other and the
position of each digit determines its significance. The last digit on the right-hand
side is called the least-significant digit, while the leftmost is called the most-significant.
EXAMPLE 1
Three numbers represented as powers of a radix:
243.51₁₀ = 2 × 10^2 + 4 × 10^1 + 3 × 10^0 + 5 × 10^-1 + 1 × 10^-2
212₃ = 2 × 3^2 + 1 × 3^1 + 2 × 3^0 = 23₁₀
10110₂ = 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 0 × 2^0 = 22₁₀
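The positional expansion in Example 1 can be sketched as a short Python routine (the function name is illustrative, not part of the notes):

```python
def from_base(digits, base):
    """Evaluate a digit string positionally: each step multiplies by the radix."""
    value = 0
    for d in digits:
        value = value * base + int(d)  # shift one position left, add the new digit
    return value

# The integer examples above: from_base('212', 3) -> 23, from_base('10110', 2) -> 22
```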
EXAMPLE 3
Convert 104₁₀ to base 3 using the division-remainder method.
104₁₀ = 10212₃
3 | 104   remainder 2
3 |  34   remainder 1
3 |  11   remainder 2
3 |   3   remainder 0
3 |   1   remainder 1
      0
Reading the remainders from bottom to top gives 10212₃.
EXAMPLE 4
Convert 147₁₀ to binary.
147₁₀ = 10010011₂
2 | 147   remainder 1
2 |  73   remainder 1
2 |  36   remainder 0
2 |  18   remainder 0
2 |   9   remainder 1
2 |   4   remainder 0
2 |   2   remainder 0
2 |   1   remainder 1
      0
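The division-remainder method in Examples 3 and 4 can be sketched in Python (the function name is illustrative):

```python
def to_base(n, base):
    """Repeatedly divide by the base; the remainders, read bottom-up, are the digits."""
    digits = []
    while n > 0:
        n, r = divmod(n, base)   # quotient feeds the next step, remainder is a digit
        digits.append(str(r))
    return ''.join(reversed(digits)) or '0'

# to_base(104, 3) -> '10212', to_base(147, 2) -> '10010011'
```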
Converting Fractions
EXAMPLE 6
Convert 0.34375₁₀ to binary with 4 bits to the right of the binary point.
0.34375 × 2 = 0.68750   (integer part 0)
0.68750 × 2 = 1.37500   (integer part 1)
0.37500 × 2 = 0.75000   (integer part 0)
0.75000 × 2 = 1.50000   (integer part 1)
Reading the integer parts from top to bottom, 0.34375₁₀ = 0.0101₂ to four binary
places. We simply discard (or truncate) our answer when the desired accuracy has
been achieved.
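The repeated-multiplication procedure for fractions can be sketched as follows (function name illustrative):

```python
def fraction_to_base2(x, places):
    """Convert a decimal fraction to binary: multiply by 2 repeatedly,
    collecting the integer parts, and truncate at the desired accuracy."""
    bits = []
    for _ in range(places):
        x *= 2
        bits.append(str(int(x)))  # the integer part is the next binary digit
        x -= int(x)               # keep only the fractional part
    return '0.' + ''.join(bits)

# fraction_to_base2(0.34375, 4) -> '0.0101'
```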
EXAMPLE 7
Convert 31214 to base 3
EXAMPLE 8
Convert 1100100111012 to octal and hexadecimal.
110 010 011 101₂ = 6235₈  (separate into groups of 3 bits for octal conversion)
1100 1001 1101₂ = C9D₁₆  (separate into groups of 4 bits for hexadecimal conversion)
Signed Magnitude
A signed-magnitude number has a sign as its left-most bit (also referred to as the
high-order bit or the most significant bit) while the remaining bits represent the
magnitude (or absolute value) of the numeric value.
N bits can represent −(2^(n−1) − 1) to 2^(n−1) − 1.
EXAMPLE 9
Add 010011112 to 001000112 using signed-magnitude arithmetic.
EXAMPLE 10
Add 010011112 to 011000112 using signed-magnitude arithmetic.
EXAMPLE 11
Subtract 010011112 from 011000112 using signed-magnitude arithmetic.
The signed magnitude has two representations for zero, 10000000 and 00000000.
Note that all positive numbers start with 0 and all negative numbers start with 1, exactly
as with the sign-magnitude representation. This is very convenient as we can rapidly
determine whether a number is positive or negative. Although this asymmetry can be
confusing at times, circuits implementing 2’s complement arithmetic are the same as
those for unsigned numbers, which makes 2’s complement a very good representation
for signed numbers.
To negate a number x, regardless if it is negative or positive, we simply invert (toggle
0 to 1 and vice versa) all its bits and add 1 (at the least significant bit - LSB). A faster,
but trickier, method is to start from the LSB and scan the number toward the left end. While
we encounter zeros, we do nothing; the first one seen, is also left untouched, but from then
on all bits are inverted.
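The invert-and-add-one negation described above can be sketched in Python (the function name and 8-bit width are illustrative):

```python
def negate(x, bits=8):
    """Two's-complement negation: invert every bit, then add 1 (kept to `bits` wide)."""
    mask = (1 << bits) - 1
    return ((x ^ mask) + 1) & mask   # XOR with all-ones flips the bits

# negate(0b00000011) -> 0b11111101 (the 8-bit pattern for -3)
# negate(0b11111101) -> 0b00000011 (negation is its own inverse)
```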
With 2’s complement numbers represented differently than unsigned numbers, the
conditions and the behaviour of overflow change too. A positive overflow, i.e. a positive
number becoming too large, will produce a negative number, since it will start with a 1.
Likewise a negative overflow produces a positive number. This explains what happens in
some programming languages when you use an integer to keep a running sum for quite
a long time and eventually (and surprisingly) you get a negative result. (Java's int
arithmetic wraps around silently; an exception is thrown only by explicit methods such as
Math.addExact.)
In many situations a 2's complement number needs to be converted to a
larger data type, for example a signed byte converted into a 16-bit short.
This is done by taking the most-significant bit of the byte (the origin) and
replicating it to fill the unused bits of the 16-bit short (the target data type). This
operation is called sign extension. Sign extension is based on the fact that, just as adding
0's to the left of a positive number does not change its value, adding 1's to the left of a
negative number does not change its value either.
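Sign extension can be sketched as a small Python function (name and widths illustrative):

```python
def sign_extend(x, from_bits, to_bits):
    """Replicate the most-significant (sign) bit to fill the wider data type."""
    if x & (1 << (from_bits - 1)):                        # negative: fill new bits with 1s
        x |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return x                                              # positive values are unchanged

# sign_extend(0b1000, 4, 8) -> 0b11111000 (-8 widened from 4 to 8 bits)
```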
A useful operation with binary numbers is shifting, i.e. moving the bits of a data
type to the left or to the right. When shifting left, 0’s fill up the empty bit places.
Note that this operation is, in effect, a multiplication; when a number is shifted left by
n bits, it is multiplied by 2^n.
When shifting right a 2’s complement number, it makes sense to fill in the empty
spaces with copies of the MSB, for the same reason as with sign extension. This operation
is effectively a division by a power of 2. Because shifts are also useful for processing
data types other than 2's complement numbers, most processors have another version of shift-
right where the empty spaces are filled in with 0's. To differentiate the two, the former is
usually called arithmetic shift right, while the latter is called logical shift right.
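The two shift-right variants can be sketched in Python over an 8-bit value (names and width illustrative):

```python
def arithmetic_shift_right(x, n, bits=8):
    """Shift right, filling vacated places with copies of the sign bit (divides by 2**n)."""
    sign = x & (1 << (bits - 1))
    for _ in range(n):
        x = (x >> 1) | sign          # re-insert the sign bit at the top each step
    return x

def logical_shift_right(x, n):
    """Shift right, filling vacated places with zeros."""
    return x >> n

# 0b11111000 is -8 in 8-bit two's complement:
# arithmetic_shift_right(0b11111000, 1) -> 0b11111100 (-4)
# logical_shift_right(0b11111000, 1)    -> 0b01111100 (124)
```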
Complement Systems
Since we do not have any symbols other than 0, 1 available (i.e. no ‘-’, ’+’, etc.),
we have to agree to a convention for representing negative numbers using bits. One
such convention is the sign-magnitude representation where the first bit (the leftmost)
holds the number’s sign: 1 for negative, 0 for positive. This representation complicates
the design of circuits implementing basic operations, such as addition, and is therefore
no longer used in modern computers to represent integers.
Instead of sign-magnitude, a representation, called 2’s complement, is used. The
idea of how to represent negative numbers in 2’s complement comes from the result one
gets when subtracting an unsigned number from a smaller unsigned number. For
example, with 4-bit data types, subtracting 0001 from 0000 produces a result of 1111₂;
thus 1111 represents -1. In general, for an n-bit data type in 2's complement, the most
significant bit has a negative weighting, while all the others have the usual positive
weightings.
I. One’s Complement
In one’s complement representation, the negative of a number is formed by flipping
(inverting) every bit of its positive counterpart. This sort of bit-flipping is very simple
to implement in computer hardware.
EXAMPLE 12
Express 2310 and -910 in 8-bit binary one’s complement form.
EXAMPLE 13
Express 2310, -2310, and -910 in 8-bit binary two’s complement form.
EXAMPLE 14
Add 910 to -2310 using two’s complement arithmetic.
A Simple Rule for Detecting an Overflow Condition: If the carry into the sign bit equals
the carry out of the sign bit, no overflow has occurred. If the carry into the sign bit is
different from the carry out of the sign bit, overflow (and thus an error) has occurred.
EXAMPLE 16
Find the sum of 12610 and 810 in binary using two’s complement arithmetic.
A one is carried into the leftmost bit, but a zero is carried out. Because these carries
are not equal, an overflow has occurred.
N bits can represent −2^(n−1) to 2^(n−1) − 1. With signed-magnitude numbers, for example, 4
bits allow us to represent the values -7 through +7. However, using two's complement,
we can represent the values -8 through +7.
EXAMPLE 17
Find the product of 000001102 (610) and 000010112 (1110).
00000110 ( 6)
x 00001011 (11)
When the divisor is much smaller than the dividend, we get a condition known as divide
underflow, which the computer sees as the equivalent of division by zero.
Computer makes a distinction between integer division and floating-point division.
I. With integer division, the answer comes in two parts: a quotient and a remainder.
II. Floating-point division results in a number that is expressed as a binary fraction.
III. Floating-point calculations are carried out in dedicated circuits called floating-point
units, or FPUs.
If the 4-bit binary value 1101 is unsigned, then it represents the decimal value 13, but
as a signed two’s complement number, it represents -3. C programming language has int
and unsigned int as possible types for integer variables. If we are using 4-bit unsigned
binary numbers and we add 1 to 1111, we get 0000 (“return to zero”). If we add 1 to the
largest positive 4-bit two’s complement number 0111 (+7), we get 1000 (-8).
As is obvious from the example above, regular multiplication clearly yields the incorrect
result. However, Booth's algorithm, one of the many arithmetic algorithms, does. In most
cases, Booth's algorithm carries out multiplication faster and more accurately than naïve
pencil-and-paper methods. The general idea of Booth's algorithm is to increase the speed
of a multiplication when there are consecutive zeros or ones in the multiplier. Let us consider
the following standard multiplication example (3 X 6):
0011 (3)
x 0110 (6)
+ 0000 (0 in multiplier means simple shift)
+ 0011 (1 in multiplier means add multiplicand and shift)
+ 0011 (1 in multiplier means add multiplicand and shift)
+ 0000 (0 in multiplier means simple shift)
0010010 (3 X 6 = 18)
In Booth’s algorithm, if the multiplicand and multiplier are n-bit two’s complement
numbers, the result is a 2n-bit two’s complement value. Therefore, when we perform our
intermediate steps, we must extend our n-bit numbers to 2n-bit numbers. For example,
the 4-bit number 1000 (-8) extended to 8 bits would be 11111000.
Booth’s algorithm is interested in pairs of bits in the multiplier and proceed according to
the following rules:
I. If the current multiplier bit is 1 and the preceding bit was 0, we are at the
beginning of a string of ones, so subtract (10 pair) the multiplicand from the
product.
II. If the current multiplier bit is 0 and the preceding bit was 1, we are at the end of
a string of ones, so we add (01 pair) the multiplicand to the product.
III. If it is a 00 pair, or a 11 pair, do no arithmetic operation (we are in the middle of
a string of zeros or a string of ones). Simply shift. The power of the algorithm
is in this step: we can now treat a string of ones as a string of zeros and do nothing
more than shift.
0011 (3)
x 0110 (6)
+ 00000000 (00 = simple shift; assume a mythical 0 as the previous bit)
+ 11111101 (10 = subtract = add 1111 1101, extend sign)
+ 00000000 (11 simple shift)
+ 00000011 (01 = add )
01000010010 (3 X 6 = 18; ignore the extended sign bits that go beyond 2n)
EXAMPLE 18
(-3 X 5) Negative 3 in 4-bit two’s complement is 1101. Extended to 8 bits, it is
11111101. Its complement is 00000011. When we see the rightmost 1 in the multiplier, it
is the beginning of a string of 1s, so we treat it as if it were the string 10:
1101 (-3; for subtracting, we will add -3’s complement, or 00000011)
x 0101 (5)
+ 00000011 (10 = subtract 1101 = add 0000 0011)
+ 11111101 (01 = add 1111 1101 to product: note sign extension)
+ 00000011 (10 = subtract 1101 = add 0000 0011)
+ 11111101 (01 = add 1111 1101 to product)
100111110001 (-3 X 5 = -15; using the 8 rightmost bits, 11110001 or -15)
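The three pairing rules can be sketched as a small Python routine over n-bit two's-complement operands (a sketch; the function name and default width are illustrative):

```python
def booth_multiply(m, q, n=4):
    """Multiply two n-bit two's-complement integers using Booth's pairing rules.
    The product is accumulated as a 2n-bit two's-complement value."""
    mask = (1 << (2 * n)) - 1          # keep intermediate results to 2n bits
    multiplicand = m & mask            # masking sign-extends m into 2n bits
    product = 0
    prev = 0                           # the "mythical 0" to the right of the LSB
    for i in range(n):
        bit = (q >> i) & 1
        if bit == 1 and prev == 0:     # 10 pair: start of a run of 1s -> subtract
            product = (product - (multiplicand << i)) & mask
        elif bit == 0 and prev == 1:   # 01 pair: end of a run of 1s -> add
            product = (product + (multiplicand << i)) & mask
        prev = bit                     # 00 and 11 pairs: only shift
    if product >> (2 * n - 1):         # interpret the 2n-bit pattern as signed
        product -= 1 << (2 * n)
    return product

# booth_multiply(3, 6) -> 18, booth_multiply(-3, 5) -> -15
```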
For unsigned numbers, a carry (out of the leftmost bit) indicates the total number of
bits was not large enough to hold the resulting value, and overflow has occurred. For signed
numbers, if the carry in to the sign bit and the carry (out of the sign bit) differ, then
overflow has occurred.
EXAMPLE 19:
Multiply the value 11 (expressed using 8-bit signed two’s complement
representation) by 2.
EXAMPLE 20:
Divide the value 12 (expressed using 8-bit signed two’s complement representation) by
2.
We start with the binary value for 12:
00001100 (+12)
We shift right one place, resulting in:
00000110 (+6)
(Remember, we carry the sign bit to the right as we shift.)
To divide 12 by 4, we right shift twice.
Floating-Point Representation
In addition to integers, computers are also used to perform calculations with real
numbers. The representation of these numbers in computers is typically done using
the floating point data type. Real numbers can be represented in binary using the
form (−1)^s × f × 2^e. Therefore a floating point type needs to hold s (the sign), f and e.
Note that floating point numbers use sign-magnitude for the fraction part, f.
Under IEEE 754, the most prevalent floating point format, f is normalized before
storage, which means shifting the binary point so that there is one non-zero digit before
the binary point. For example, the value 3/4 is represented as 1.1 × 2^-1. The value 1.1 is
known as the mantissa and the −1 as the exponent.
Because the non-zero digit before the binary point must always be a 1, it is not
necessary to store it. The 32-bit format uses one bit to store the sign of the number, 8
bits to store the exponent, and 23 bits to store the digits of the mantissa after the binary
point. In contrast, the 64-bit format uses 11 bits for the exponent, and 52 bits for the
mantissa, giving a greater range of numbers and more accuracy.
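The 32-bit field layout described above can be inspected with Python's standard `struct` module (the helper's name is illustrative):

```python
import struct

def float32_fields(x):
    """Split an IEEE-754 single-precision value into (sign, biased exponent, fraction)."""
    (bits,) = struct.unpack('>I', struct.pack('>f', x))   # reinterpret the 4 bytes
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF    # stored with a bias of 127
    fraction = bits & 0x7FFFFF        # the 23 bits after the implied leading 1
    return sign, exponent, fraction

# 3/4 = 1.1 (binary) * 2**-1, so the stored exponent is -1 + 127 = 126
# and the fraction is .100...0 -> float32_fields(0.75) == (0, 126, 1 << 22)
```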
Arithmetic operations on floating point numbers can be performed in multiple steps
using integer arithmetic, shifting, etc. To speed them up, modern processors contain a
specialized floating point arithmetic unit and special instructions. Most processors also
provide a separate bank of registers for floating point numbers and there are instructions
to transfer values between these and the general purpose registers. The MIPS provides
add, subtract, multiply, divide, negate, absolute value and comparison instructions on
32-bit and on 64-bit floating point numbers, as well as instructions to convert numbers
between the 32 and 64-bit formats and between floating point and integer representations.
A Simple Model
The IEEE-754 single precision floating point standard uses a bias of 127 over its 8-bit
exponent. An exponent of 255 indicates a special value. The double precision standard has
a bias of 1023 over its 11-bit exponent. The “special” exponent value for a double
precision number is 2047, instead of the 255 used by the single precision standard.
Because of truncated bits, you cannot always assume that a particular floating point
operation is commutative or distributive.
Binary-Coded Decimal
EBCDIC
In 1964, BCD was extended to an 8-bit code, Extended Binary-Coded Decimal
Interchange Code (EBCDIC). EBCDIC was one of the first widely-used computer codes
that supported upper and lowercase alphabetic characters, in addition to special characters,
such as punctuation and control characters. EBCDIC and BCD are still in use by IBM
mainframes today.
ASCII
Unicode
Unicode is a 16-bit alphabet that is downward compatible with ASCII and Latin-1
character set. Because the base coding of Unicode is 16 bits, it has the capacity to
encode the majority of characters used in every language of the world. Unicode is currently
the default character set of the Java programming language.
The Unicode codespace is divided into six parts. The first part is for Western
alphabet codes, including English, Greek, and Russian. The lowest-numbered Unicode
characters comprise the ASCII code. The highest provide for user-defined codes.
Error Detection and Correction
Error Detection ‐ Error detection is the process of detecting errors during transmission
between the sender and the receiver. The main techniques are:
• Parity checking
• Redundancy checking
• Checksum
Parity checking
Parity adds a single bit that indicates whether the number of 1 bits in the preceding data
is even or odd. If a single bit is changed in transmission, the message will change parity
and the error can be detected at this point. Parity checking is not very robust: if an even
number of bits is changed, the parity will remain valid and the error will not be
detected.
Moreover, parity does not indicate which bit contained the error, even when it can
detect it. The data must be discarded entirely, and retransmitted completely. On a noisy
transmission medium a successful transmission could take a long time, or even never occur.
Parity does have the advantage, however, that it is about the best possible code that uses only
a single bit of space.
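The behaviour described above can be sketched in Python, including the case that parity misses (function names illustrative):

```python
def even_parity_bit(data):
    """Choose the parity bit so the total number of 1s (data + parity) is even."""
    return bin(data).count('1') % 2

def parity_ok(data, parity):
    """Check that the total number of 1s is even."""
    return (bin(data).count('1') + parity) % 2 == 0

word = 0b1011010                     # four 1s -> parity bit 0
p = even_parity_bit(word)
# parity_ok(word ^ 0b0000001, p) -> False (one bit flipped: detected)
# parity_ok(word ^ 0b0000011, p) -> True  (two bits flipped: missed)
```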
Redundancy
Redundancy allows a receiver to check whether received data was corrupted during
transmission, so that it can request a retransmission. Redundancy is the concept of using
extra bits for error detection. The sender adds redundant bits (R)
to the data unit and sends it to the receiver; when the receiver gets the bit stream, it passes
it through a checking function. If no error is detected, the data portion of the data unit is
accepted and the redundant bits are discarded; otherwise the sender is asked to retransmit the data.
Sender’s steps:
• The data unit is appended with a number of 0s, one less than the number of bits in the divisor.
• It is then divided by the predefined divisor using binary division. The
remainder is called the CRC. The CRC is appended to the data unit and sent to the receiver.
Receiver’s steps.
• When the data unit arrives followed by the CRC, it is divided by the same divisor that
was used to find the CRC (remainder).
• If the remainder of this division is zero, the data is error-free; otherwise,
it is corrupted.
Checksum
Checksum is the third error detection mechanism. Checksum is used in the
upper layers, while parity checking and CRC are used in the physical layer. Checksum is
also based on the concept of redundancy.
In the checksum mechanism two operations to perform are:
1. Checksum generator
The sender uses the checksum generator mechanism. First, the data unit is divided into
equal segments of n bits. Then all segments are added together using 1's complement
arithmetic. The sum is then complemented; this complement becomes the checksum and
is sent along with the data unit.
Example 21:
If the 16 bits 10001010 00100011 are to be sent to the receiver:
10001010 + 00100011 = 10101101 (1's-complement sum)
Checksum = 01010010 (the complement of the sum)
The checksum is appended to the data unit and sent to the receiver. The final data unit is
10001010 00100011 01010010.
2. Checksum checker
The receiver receives the data unit and divides it into segments of equal size. All
segments are added using 1's complement arithmetic. The result is complemented once
again. If the result is zero, the data is accepted; otherwise it is rejected.
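Both the generator and the checker can be sketched in Python over 8-bit segments (function names illustrative):

```python
def ones_complement_sum(words, bits=8):
    """Add segments with end-around carry (1's-complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # fold any carry back into the sum
    return total

def checksum(words, bits=8):
    """Complement of the 1's-complement sum, as produced by the generator."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)

# checksum([0b10001010, 0b00100011]) -> 0b01010010 (complement of the sum 10101101)
# At the receiver, including the checksum segment drives the result to zero.
```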
Error Correction/Flow Control - Error control allows a receiver to reconstruct
the original information when it has been corrupted during transmission. CRC, Reed-
Solomon, and Hamming codes are three important error control codes.
Modulo 2 addition (equivalent to XOR):
0+0=0
0+1=1
1+0=1
1+1=0
EXAMPLE 22:
Find the sum of 1011₂ and 110₂ modulo 2. 1011₂ + 110₂ = 1101₂ (mod 2)
Suppose we want to transmit the information string 1001011₂. The receiver and
sender decide to use the (arbitrary) polynomial pattern 1011. The information string is
shifted left by one position less than the number of positions in the divisor:
I = 1001011000₂. The remainder is found through modulo-2 division and added
to the information string: 1001011000₂ + 100₂ = 1001011100₂. If no bits are lost or
corrupted, dividing the received information string by the agreed-upon pattern will give
a remainder of zero. Real applications use longer polynomials to cover larger information
strings. A remainder other than zero indicates that an error has occurred in the
transmission. This method works best when a large prime polynomial is used. There are
four standard polynomials used widely for this purpose:
CRC-32 is one of them. It has been proven that CRCs using these polynomials can
detect over 99.8% of all single-bit errors.
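The modulo-2 division used in the example above can be sketched in Python over bit strings (function name illustrative):

```python
def crc_remainder(data_bits, divisor_bits):
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    data = list(data_bits)
    for i in range(len(data) - len(divisor_bits) + 1):
        if data[i] == '1':                       # the divisor "goes into" this position
            for j, d in enumerate(divisor_bits):
                data[i + j] = '1' if data[i + j] != d else '0'   # XOR in place
    return ''.join(data[-(len(divisor_bits) - 1):])

# The worked example: 1001011 shifted left by 3, divided by 1011
# crc_remainder('1001011000', '1011') -> '100'
# Appending the remainder makes the whole string divisible:
# crc_remainder('1001011100', '1011') -> '000'
```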
2. Reed-Solomon
A Reed-Solomon (RS) code can be thought of as a CRC that operates over entire
characters instead of only a few bits. For instance, if we expect errors to occur in blocks, then
we should use an error-correcting code that operates at a block level, as opposed to a
Hamming code, which operates at the bit level. RS codes, like CRCs, are systematic: the
parity bytes are appended to a block of information bytes. RS(n, k) codes are defined using
the following parameters:
I. s = The number of bits in a character (or “symbol”).
II. k = The number of s-bit characters comprising the data block.
III. n = The number of bits in the code word.
RS(n, k) can correct (n-k)/2 errors in the k information bytes. Reed-Solomon error-
correction algorithms lend themselves well to implementation in computer hardware. They
are implemented in high-performance disk drives for mainframe computers as well as
compact disks used for music and data storage.
3. Hamming Codes
EXAMPLE 23
00000
01011
10110
11101
D(min) = 3. Thus, this code can detect up to two errors and correct one single bit
error.
We focus on single-bit errors. An error could occur in any of the n bits, so each
code word can be associated with n erroneous words at a Hamming distance of 1. Therefore,
we have n + 1 bit patterns for each code word: one valid code word, and n erroneous words.
With n-bit code words, we have 2^n possible code words, of which 2^m are valid
(where n = m + r, with m data bits and r check bits).
This gives us the inequality:
(n + 1) × 2^m <= 2^n
(m + r + 1) × 2^m <= 2^(m + r), or
(m + r + 1) <= 2^r
EXAMPLE 24:
Using the Hamming code and even parity, encode the 8-bit ASCII character K. (The high-
order bit will be zero.) Induce a single-bit error and then indicate how to locate the error.
Let’s introduce an error in bit position b9, resulting in the code word:
0 1 0 1 1 1 0 1 0 1 1 0
12 11 10 9 8 7 6 5 4 3 2 1
Parity check 1 (b1 ⊕ b3 ⊕ b5 ⊕ b7 ⊕ b9 ⊕ b11) = 0⊕1⊕1⊕1⊕1⊕1 = 1 (Error: even parity requires 0)
Parity check 2 (b2 ⊕ b3 ⊕ b6 ⊕ b7 ⊕ b10 ⊕ b11) = 1⊕1⊕0⊕1⊕0⊕1 = 0 (OK)
Parity check 4 (b4 ⊕ b5 ⊕ b6 ⊕ b7 ⊕ b12) = 0⊕1⊕0⊕1⊕0 = 0 (OK)
Parity check 8 (b8 ⊕ b9 ⊕ b10 ⊕ b11 ⊕ b12) = 1⊕1⊕0⊕1⊕0 = 1 (Error: even parity requires 0)
Parity checks 1 and 8 failed, and 1 + 8 = 9, which is exactly
where the error occurred.
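The parity checks above can be automated: check bit p covers every position whose binary representation has bit p set, and the failing checks sum to the error position. A sketch assuming even parity (function name and bit ordering are illustrative):

```python
def hamming_syndrome(bits):
    """bits[i] is the code-word bit at position i + 1 (positions count from 1).
    Returns the sum of failing even-parity checks: the error position (0 = no error)."""
    n = len(bits)
    syndrome = 0
    p = 1
    while p <= n:
        parity = 0
        for pos in range(1, n + 1):
            if pos & p:                 # check bit p covers this position
                parity ^= bits[pos - 1]
        if parity:                      # even parity violated for this check
            syndrome += p
        p <<= 1
    return syndrome

# The received word from Example 24, positions 1..12 (position 1 first):
received = [0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0]
# hamming_syndrome(received) -> 9, the position of the flipped bit
```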
TELECOMMUNICATION AND NETWORKS I
SIGNALS AND TRANSMISSION
WEEK FIVE
Outline of the Lecture
Concepts and Terminology
Analog and Digital Data Transmission
Transmission Impairments
Channel Capacity
TRANSMISSION IMPAIRMENTS AND CHANNEL CAPACITY
Attenuation
This is the reduction in signal strength (loss of energy) as a signal propagates
through the communication medium. It is expressed in decibels:
dB = 10 log10(P2/P1)
P2 = power received at the destination
P1 = power transmitted from the source
Attenuation determines how far a signal can travel without amplification; an
amplifier can be used to compensate for the attenuation introduced by the medium.
The decibel measures the relative strengths of a signal at two
different points: dB = 10 log10(P2/P1)
Example 1: If the signal strength at point 2 is 1/10th of that at point 1, then the
attenuation in dB is 10 log(1/10) = -10 dB. Note that a loss of power takes a
negative value. If the gain at point 3 is 100 times that at point 2, then the gain in dB is
10 log(100/1) = 20 dB.
Also note that the signal strength at point 3 with respect to point 1, which forms a
cascaded system, can be obtained by adding the two values: (-10) + 20 = 10 dB.
This means that the dB values of cascaded stages can simply be added.
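The dB computation and the additivity of cascaded stages can be checked with a small Python sketch (function name illustrative):

```python
import math

def db_gain(p2, p1):
    """Relative strength of two signal powers, in decibels."""
    return 10 * math.log10(p2 / p1)

stage1 = db_gain(1, 10)    # power falls to 1/10 -> -10.0 dB (a loss is negative)
stage2 = db_gain(100, 1)   # power gain of 100   -> +20.0 dB
total = stage1 + stage2    # cascaded stages simply add: +10.0 dB
```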
Data rate limits: how fast can data be sent?
This depends on three factors/parameters:
Bandwidth of the channel
Number of levels used in the signal
Noise level in the channel
Shannon Capacity (Noisy Channel)
Shannon capacity gives the highest data rate for a noisy
channel:
C = B × log2(1 + S/N)
S/N is the signal-to-noise ratio; in the case of an extremely noisy
channel, C = 0, since the noise dominates.
Between the Nyquist bit rate and the Shannon limit, the result
providing the smaller channel capacity is the one that
establishes the limit: the lower value must be taken as the channel
capacity.
Example
A channel has B = 4 kHz. Determine the channel capacity for each of
the following signal-to-noise ratios: (a) 20 dB (b) 30 dB (c) 40 dB.
Solution:
(a) S/N = 100: C = 4000 × log2(1 + 100) ≈ 26.6 kbps
(b) S/N = 1000: C = 4000 × log2(1 + 1000) ≈ 39.9 kbps
(c) S/N = 10000: C = 4000 × log2(1 + 10000) ≈ 53.2 kbps
The results indicate that the higher the signal-to-noise ratio, the
higher the channel capacity.
Example 2: A channel has B = 4 kHz and a signal-to-noise ratio of 30 dB.
Determine the maximum information rate for 4-level encoding.
For B = 4 kHz and 4-level encoding, the Nyquist bit rate is 16 kbps;
for B = 4 kHz and S/N of 30 dB, the Shannon capacity is 39.8 kbps. The
smaller of the two values has to be taken as the channel
capacity:
I = 16 kbps
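The two limits in Example 2 can be checked numerically (a sketch; function names are illustrative):

```python
import math

def nyquist_bit_rate(bandwidth_hz, levels):
    """Nyquist bit rate for a noiseless channel: 2 * B * log2(L)."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + S/N), with S/N given in dB."""
    snr = 10 ** (snr_db / 10)           # convert dB to a plain power ratio
    return bandwidth_hz * math.log2(1 + snr)

nyquist = nyquist_bit_rate(4000, 4)     # 16,000 bps
shannon = shannon_capacity(4000, 30)    # ~39,869 bps
capacity = min(nyquist, shannon)        # the lower value limits the channel
```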
Example
WEEK SEVEN
Course Outline
Observations
Most individual data-communication devices typically
require a modest data rate, e.g. about 4 kbps for voice.
Communication media usually have much higher
bandwidth; coaxial and optical media carry several megabits per
second.
Two communicating stations do not utilize the full
capacity of a data link.
The higher the data rate, the more cost-effective the
transmission facility.
Why Multiplexing?
WEEK EIGHT
Course Outline