UNIT 5 DC

The document provides an introduction to information theory, discussing its significance in communication systems and its mathematical foundations. It covers key concepts such as information sources, entropy, and channel capacity, emphasizing the relationship between uncertainty and information content. The document also explores the discrete memoryless sources and the measures of information, including the logarithmic measure of information and the average information produced by a source.

Information Theory

Syllabus: Information Theory: introduction to information theory, uncertainty and information, average mutual information, source coding, source coding theorem, Huffman coding, Shannon-Fano-Elias coding. Channel Coding: channel models, channel capacity, channel coding, information capacity theorem, Shannon limit.

Contents
* Introduction
* What is Information?
* Information Sources
* Information Content of a Discrete Memoryless Source (DMS)
* Information Content of a Symbol (i.e., Logarithmic Measure of Information)
* Entropy (i.e., Average Information)
* Information Rate
* The Discrete Memoryless Channels (DMC)
* Types of Channels
* The Conditional and Joint Entropies
* The Mutual Information
* The Channel Capacity
* Entropy Relations for a Continuous Channel
* Capacity of an Additive White Gaussian Noise (AWGN) Channel: Shannon-Hartley Law
* Channel Capacity: Transmission of Continuous Signals
* Uncertainty in the Transmission
* Exchange of Bandwidth for Signal-to-Noise Ratio
* The Source Coding
* Entropy Coding

8.1. INTRODUCTION

As discussed in chapter 1, the purpose of a communication system is to carry information-bearing baseband signals from one place to another over a communication channel. In the last few chapters, we have discussed a number of modulation schemes that accomplish this purpose. But what is the meaning of the word "information"? To answer this question we need to discuss information theory.

In fact, information theory is a branch of probability theory which may be applied to the study of communication systems. This broad mathematical discipline has made fundamental contributions not only to communications, but also to computer science, statistical physics, and probability and statistics. Further, in the context of communications, information theory deals with mathematical modelling and analysis of a communication system rather than with physical sources and physical channels. As a matter of fact, information theory was developed by communication scientists while they were studying the statistical structure of electronic communication equipment. When the communique is readily measurable, such as an electric current, the study of the communication system is relatively easy. But when the communique is information, the study becomes rather difficult. How do we define a measure for an amount of information? And, having defined a suitable measure, how can it be applied to improve the communication of information? Information theory provides answers to these questions. Thus, this chapter is devoted to a detailed discussion of information theory.

8.2. WHAT IS INFORMATION?

Before discussing the quantitative measure of information, let us review a basic idea about the amount of information in a message. Some messages convey more information than others. This may be best understood with the help of the following example. Suppose you are planning a tour to a city located in an area where rainfall is very rare. To learn about the weather forecast you call the weather bureau and may receive one of the following messages:

(i) It would be hot and sunny.
(ii) There would be scattered rain.
(iii) There would be a cyclone with thunderstorm.

It may be observed that the amount of information received is clearly different for these three messages.
The first message, for instance, contains very little information, because the weather in a desert city in summer is expected to be hot and sunny most of the time. The second message, forecasting scattered rain, contains some more information, because it is not an event that occurs very often. The forecast of a cyclonic storm contains even more information compared to the second message, because the third forecast is the rarest event in that city. Hence, on a conceptual basis, the amount of information received from the knowledge of occurrence of an event may be related to the likelihood or probability of occurrence of that event. The message associated with the least likely event thus contains the maximum information. Moreover, the amount of information in a message depends only upon the uncertainty of the underlying event rather than upon its actual content. Now, let us discuss a few important concepts related to information theory in the sections to follow.

8.3. INFORMATION SOURCES

(i) Definition
An information source may be viewed as an object which produces an event, the outcome of which is selected at random according to a probability distribution. A practical source in a communication system is a device which produces messages, and it can be either analog or discrete. In this chapter, we deal mainly with discrete sources, since analog sources can be transformed to discrete sources through the use of sampling and quantization techniques. As a matter of fact, a discrete information source is a source which has only a finite set of symbols as possible outputs. The set of source symbols is called the source alphabet, and the elements of the set are called symbols or letters.

(ii) Classification of Information Sources
Information sources can be classified as having memory or being memoryless. A source with memory is one for which a current symbol depends on the previous symbols. A memoryless source is one for which each symbol produced is independent of the previous symbols. For a source with an alphabet of symbols, any message emitted by the source consists of a string or sequence of symbols. A discrete memoryless source (DMS) can be characterized by the list of the symbols, the probability assignment to these symbols, and the specification of the rate of generating these symbols by the source.

Pioneers in Communications: Claude Elwood Shannon (1916-2001)
Born in Gaylord, Michigan, Claude Shannon graduated from the University of Michigan with degrees in mathematics and engineering and, in 1940, from the Massachusetts Institute of Technology (MIT) with master's and doctoral degrees in mathematics. Shannon's theories effectively provided the mathematical foundation for designing digital electronic circuits, which form the basis of modern-day information processing. After graduating from MIT, Shannon served as a National Research Fellow at the Institute for Advanced Study at Princeton and, in 1941, joined Bell Telephone Laboratories as a mathematician. While at Bell Labs, he studied the transmission and reliability of information over telephone and telegraph lines.

8.4. INFORMATION CONTENT OF A DISCRETE MEMORYLESS SOURCE (DMS)

The amount of information contained in an event is closely related to its uncertainty. Messages conveying knowledge of events with a high probability of occurrence convey relatively little information.
We note that if an event is certain (that is, the event occurs with probability 1), it conveys zero information. Thus, a mathematical measure of information should be a function of the probability of the outcome and should satisfy the following axioms:
(i) Information should be proportional to the uncertainty of an outcome.
(ii) Information contained in independent outcomes should add.

8.5. INFORMATION CONTENT OF A SYMBOL (i.e., LOGARITHMIC MEASURE OF INFORMATION)

(i) Definition
Let us consider a discrete memoryless source (DMS) denoted by X and having alphabet {x1, x2, ..., xm}. The information content of a symbol x_i, denoted by I(x_i), is defined by

    I(x_i) = log_b (1/P(x_i)) = -log_b P(x_i)      ...(8.1)

where P(x_i) is the probability of occurrence of symbol x_i.

(ii) Properties of I(x_i)
The information content of a symbol x_i, denoted by I(x_i), satisfies the following properties:

    I(x_i) = 0 for P(x_i) = 1                                      ...(8.2)
    I(x_i) >= 0                                                    ...(8.3)
    I(x_i) > I(x_j) if P(x_i) < P(x_j)                             ...(8.4)
    I(x_i x_j) = I(x_i) + I(x_j) if x_i and x_j are independent    ...(8.5)

(iii) Unit of I(x_i)
The unit of I(x_i) is the bit (binary unit) if b = 2, the Hartley or decit if b = 10, and the nat (natural unit) if b = e. It is standard to use b = 2. Here, the unit bit (abbreviated "b") is a measure of information content and is not to be confused with the term "bit" meaning binary digit. The conversion of these units can be achieved by the following relationship:

    log2 a = ln a / ln 2 = log10 a / log10 2

(iv) Few Important Points about I(x_i)
The information content or amount of information I(x_i) must satisfy the following requirements:
(a) I(x_i) must approach zero as P(x_i) approaches unity. For example, consider the message "The sun will rise in the east." This message does not contain any information, since the sun rises in the east with probability 1.
(b) The information content I(x_i) must be a non-negative quantity, since each message contains some information. In the worst case, I(x_i) can be equal to zero.
(c) The information content of a message having a higher probability of occurrence is less than the information content of a message having a lower probability.
Now let us see a few numerical examples to illustrate all the above concepts.

EXAMPLE 8.1. A source produces one of four possible symbols during each interval with probabilities P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8. Obtain the information content of each of these symbols.
Solution: We know that the information content I(x_i) of a symbol x_i is given by
    I(x_i) = log2 (1/P(x_i))
Thus, we can write
    I(x1) = log2 2 = 1 bit
    I(x2) = log2 4 = log2 2^2 = 2 bits
    I(x3) = log2 8 = 3 bits
    I(x4) = log2 8 = 3 bits      Ans.

EXAMPLE 8.2. Calculate the amount of information if it is given that P(x_i) = 1/4.
Solution: We know that the amount of information I(x_i) of a discrete symbol x_i is given by
    I(x_i) = log2 (1/P(x_i)) = -log2 P(x_i)
Substituting the given value of P(x_i) in the above expression, we obtain
    I(x_i) = log2 4 = 2 bits      Ans.

EXAMPLE 8.18. An analog signal band-limited to 10 kHz is quantized into 8 levels of a PCM system with probabilities of 1/4, 1/5, 1/5, 1/10, 1/10, 1/20, 1/20 and 1/20 respectively. Find the entropy and the rate of information.
Solution: Since the signal is band-limited to 10 kHz, the sampling frequency is f_s = 2 x 10 kHz = 20 kHz. Considering each of the eight quantized levels as a message, the entropy of the source may be written as
    H(X) = (1/4) log2 4 + (1/5) log2 5 + (1/5) log2 5 + (1/10) log2 10 + (1/10) log2 10 + (1/20) log2 20 + (1/20) log2 20 + (1/20) log2 20
         = 2.74 bits/message (approximately)
As the sampling frequency is 20 kHz, the rate at which messages are produced is r = 20,000 messages/sec. Hence, the information rate is
    R = rH(X) = 20,000 x 2.74 = 54,800 bits/sec (approximately)      Ans.
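The numbers in Examples 8.1 and 8.18 follow directly from equation (8.1) and the relation R = rH(X). The short Python sketch below is only an illustration (the function names are ours, not the text's); it recomputes the symbol information contents and the entropy and information rate of the quantized source.

```python
import math

def information_content(p, base=2):
    """I(x_i) = log_b(1 / P(x_i)): information carried by a symbol of probability p."""
    return math.log(1.0 / p, base)

def entropy(probs, base=2):
    """H(X) = sum_i P(x_i) * I(x_i): average information per symbol (bits for base 2)."""
    return sum(p * information_content(p, base) for p in probs)

# Example 8.1: P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8
for p in (1/2, 1/4, 1/8, 1/8):
    print(f"P = {p:.3f}  ->  I = {information_content(p):.0f} bits")

# Example 8.18: eight quantization levels, sampled at r = 20,000 messages/sec
probs = (1/4, 1/5, 1/5, 1/10, 1/10, 1/20, 1/20, 1/20)
H = entropy(probs)          # about 2.74 bits/message
R = 20_000 * H              # information rate R = r * H(X), in bits/sec
print(f"H(X) = {H:.2f} bits/message, R = {R:.0f} bits/sec")
```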
8.8. THE DISCRETE MEMORYLESS CHANNELS (DMC)

In this section, let us discuss different aspects related to discrete memoryless channels (DMC).

8.8.1. Channel Representation
A communication channel may be defined as the path or medium through which the symbols flow to the receiver end. A discrete memoryless channel (DMC) is a statistical model with an input X and an output Y, as shown in figure 8.1. During each unit of time (the signaling interval), the channel accepts an input symbol from X, and in response it generates an output symbol from Y. The channel is said to be "discrete" when the alphabets of X and Y are both finite. It is said to be "memoryless" when the current output depends only on the current input and not on any of the previous inputs.

DO YOU KNOW? The data rate and the information rate are two distinctly different quantities.

A diagram of a DMC with m inputs and n outputs is illustrated in figure 8.1. The input X consists of input symbols x1, x2, ..., xm. The probabilities of these symbols, P(x_i), are assumed to be known. The output Y consists of output symbols y1, y2, ..., yn. Each possible input-to-output path is indicated along with a conditional probability P(y_j|x_i), where P(y_j|x_i) is the conditional probability of obtaining output y_j given that the input is x_i, and is called a channel transition probability.

[Fig. 8.1. Representation of a discrete memoryless channel]

8.8.2. The Channel Matrix
A channel is completely specified by the complete set of transition probabilities. Accordingly, the channel of figure 8.1 is often specified by the matrix of transition probabilities [P(Y|X)], given by

                 | P(y1|x1)  P(y2|x1)  ...  P(yn|x1) |
    [P(Y|X)] =   | P(y1|x2)  P(y2|x2)  ...  P(yn|x2) |      ...(8.13)
                 |    ...       ...    ...     ...   |
                 | P(y1|xm)  P(y2|xm)  ...  P(yn|xm) |

This matrix [P(Y|X)] is called the channel matrix. Since each input to the channel results in some output, each row of the channel matrix must sum to unity. This means that

    sum over j of P(y_j|x_i) = 1   for all i      ...(8.14)

Now, if the input probabilities P(X) are represented by the row matrix

    [P(X)] = [P(x1)  P(x2)  ...  P(xm)]      ...(8.15)

and the output probabilities P(Y) are represented by the row matrix

    [P(Y)] = [P(y1)  P(y2)  ...  P(yn)]      ...(8.16)

then

    [P(Y)] = [P(X)] [P(Y|X)]      ...(8.17)

Now, if [P(X)] is represented as a diagonal matrix

                 | P(x1)   0     ...   0    |
    [P(X)]_d =   |   0    P(x2)  ...   0    |      ...(8.18)
                 |  ...    ...   ...  ...   |
                 |   0     0     ...  P(xm) |

then

    [P(X,Y)] = [P(X)]_d [P(Y|X)]      ...(8.19)

where the (i, j) element of the matrix [P(X,Y)] has the form P(x_i, y_j). The matrix [P(X,Y)] is known as the joint probability matrix, and the element P(x_i, y_j) is the joint probability of transmitting x_i and receiving y_j.
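Equations (8.14), (8.17) and (8.19) are ordinary matrix operations, so they are easy to check numerically. The NumPy sketch below is illustrative only; the 2-input, 2-output channel and its numerical values are assumed for the example and are not taken from the text.

```python
import numpy as np

P_X = np.array([0.5, 0.5])               # assumed input probabilities [P(x1) P(x2)]
P_Y_given_X = np.array([[0.9, 0.1],      # assumed channel matrix [P(Y|X)]
                        [0.2, 0.8]])

# Equation (8.14): every row of the channel matrix sums to unity
assert np.allclose(P_Y_given_X.sum(axis=1), 1.0)

P_Y = P_X @ P_Y_given_X                  # equation (8.17): [P(Y)] = [P(X)][P(Y|X)]
P_XY = np.diag(P_X) @ P_Y_given_X        # equation (8.19): joint probability matrix

print("P(Y)   =", P_Y)                   # [0.55 0.45]
print("P(X,Y) =", P_XY, sep="\n")
```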
8.9. TYPES OF CHANNELS

Other than continuous and discrete channels, there are some special types of channels with their own channel matrices. In this section, let us discuss these special channels.

8.9.1. Lossless Channel
A channel described by a channel matrix with only one non-zero element in each column is called a lossless channel. An example of a lossless channel is shown in figure 8.2, and the corresponding channel matrix is given in equation (8.20):

                 | 3/4  1/4   0    0   0 |
    [P(Y|X)] =   |  0    0   1/3  2/3  0 |      ...(8.20)
                 |  0    0    0    0   1 |

[Fig. 8.2. Lossless channel]

Important Point: It may be noted that in a lossless channel, no source information is lost in transmission.

8.9.2. Deterministic Channel
A channel described by a channel matrix with only one non-zero element in each row is called a deterministic channel. An example of a deterministic channel is shown in figure 8.3, and the corresponding channel matrix is given in equation (8.21):

                 | 1  0  0 |
                 | 1  0  0 |
    [P(Y|X)] =   | 0  1  0 |      ...(8.21)
                 | 0  1  0 |
                 | 0  0  1 |

[Fig. 8.3. Deterministic channel]

Important Point: It may be noted that since each row has only one non-zero element, this element must be unity by equation (8.14). Thus, when a given source symbol is sent over a deterministic channel, it is clear which output symbol will be received.

8.9.3. Noiseless Channel
A channel is called noiseless if it is both lossless and deterministic. A noiseless channel is shown in figure 8.4. The channel matrix has only one element in each row and in each column, and this element is unity. Note that the input and output alphabets are of the same size, that is, m = n for the noiseless channel. The matrix for a noiseless channel is given by

                 | 1  0  0  0 |
    [P(Y|X)] =   | 0  1  0  0 |
                 | 0  0  1  0 |
                 | 0  0  0  1 |

[Fig. 8.4. Noiseless channel]

8.9.4. Binary Symmetric Channel (BSC)
The binary symmetric channel (BSC) is defined by the channel diagram shown in figure 8.5, and its channel matrix is given by

    [P(Y|X)] =   | 1-p   p  |      ...(8.22)
                 |  p   1-p |

The channel has two inputs (x1 = 0, x2 = 1) and two outputs (y1 = 0, y2 = 1). This channel is symmetric because the probability of receiving a 1 if a 0 is sent is the same as the probability of receiving a 0 if a 1 is sent. This common transition probability is denoted by p, as shown in figure 8.5.

[Fig. 8.5. Binary symmetric channel]

DO YOU KNOW? Telephone channels that are affected by switching transients and dropouts, and microwave radio links that are subjected to fading, are examples of channels with memory.

EXAMPLE 8.19. Given the binary channel shown in figure 8.6, (i) find the channel matrix of the channel.

For the BSC of figure 8.5 with input probability P(x1) = alpha, the conditional entropy is

    H(Y|X) = -sum over i, j of P(x_i, y_j) log2 P(y_j|x_i) = -p log2 p - (1-p) log2 (1-p)      ...(i)

so that the mutual information becomes

    I(X;Y) = H(Y) - H(Y|X) = H(Y) + p log2 p + (1-p) log2 (1-p)      ...(ii)

where the output probabilities follow from [P(Y)] = [P(X)] [P(Y|X)].

(a) When alpha = 0.5 and p = 0.1, we have

    [P(Y)] = [0.5  0.5] | 0.9  0.1 | = [0.5  0.5]
                        | 0.1  0.9 |

so that P(y1) = P(y2) = 0.5. Now, using H(Y) = -sum over j of P(y_j) log2 P(y_j), we obtain

    H(Y) = -P(y1) log2 P(y1) - P(y2) log2 P(y2) = -0.5 log2 0.5 - 0.5 log2 0.5 = 1
    p log2 p + (1-p) log2 (1-p) = 0.1 log2 0.1 + 0.9 log2 0.9 = -0.469
    I(X;Y) = 1 - 0.469 = 0.531      Ans.

(b) When alpha = 0.5 and p = 0.5, we have

    [P(Y)] = [0.5  0.5] | 0.5  0.5 | = [0.5  0.5]
                        | 0.5  0.5 |

    H(Y) = 1
    p log2 p + (1-p) log2 (1-p) = 0.5 log2 0.5 + 0.5 log2 0.5 = -1

Thus, I(X;Y) = 1 - 1 = 0.

Important Point: It may be noted that in this case (p = 0.5) no information is being transmitted at all. An equally acceptable decision could be made by dispensing with the channel entirely and "flipping a coin" at the receiver. When I(X;Y) = 0, the channel is said to be useless.
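The result I(X;Y) = H(Y) + p log2 p + (1-p) log2 (1-p) for the BSC is easy to verify numerically. The sketch below is an illustration with our own helper names; the two cases (alpha = 0.5 with p = 0.1 and p = 0.5) are the ones worked out above.

```python
import math

def H2(q):
    """Binary entropy function: -q log2 q - (1-q) log2(1-q), with H2(0) = H2(1) = 0."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def bsc_mutual_information(alpha, p):
    """I(X;Y) for a BSC with P(x1) = alpha and transition probability p.
    P(y1) = alpha(1-p) + (1-alpha)p, and I(X;Y) = H(Y) - H(Y|X) = H2(P(y1)) - H2(p)."""
    p_y1 = alpha * (1 - p) + (1 - alpha) * p
    return H2(p_y1) - H2(p)

print(bsc_mutual_information(0.5, 0.1))   # about 0.531 bit
print(bsc_mutual_information(0.5, 0.5))   # 0.0 bit: the useless channel
```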
8.12. THE CHANNEL CAPACITY

In this section, let us discuss various aspects regarding channel capacity.

8.12.1. Channel Capacity per Symbol C_s
The channel capacity per symbol of a discrete memoryless channel (DMC) is defined as

    C_s = max over {P(x_i)} of I(X;Y)   b/symbol      ...(8.35)

where the maximization is over all possible input probability distributions {P(x_i)} on X. Note that the channel capacity C_s is a function of only the channel transition probabilities that define the channel.

8.12.2. Channel Capacity per Second C
If r symbols are being transmitted per second, then the maximum rate of transmission of information per second is rC_s. This is the channel capacity per second and is denoted by C (b/s):

    C = rC_s   b/s      ...(8.36)

8.12.3. Capacities of Special Channels
In this subsection, let us discuss the capacities of various special channels.

8.12.3.1. Lossless Channel
For a lossless channel, H(X|Y) = 0, and

    I(X;Y) = H(X)      ...(8.37)

Thus, the mutual information (information transfer) is equal to the input (source) entropy, and no source information is lost in transmission. Consequently, the channel capacity per symbol will be

    C_s = max over {P(x_i)} of H(X) = log2 m      ...(8.38)

where m is the number of symbols in X.

8.12.3.2. Deterministic Channel
For a deterministic channel, H(Y|X) = 0 for all input distributions P(x_i), and

    I(X;Y) = H(Y)      ...(8.39)

Thus, the information transfer is equal to the output entropy. The channel capacity per symbol will be

    C_s = max over {P(x_i)} of H(Y) = log2 n      ...(8.40)

where n is the number of symbols in Y.

8.12.3.3. Noiseless Channel
Since a noiseless channel is both lossless and deterministic, we have

    I(X;Y) = H(X) = H(Y)      ...(8.41)

and the channel capacity per symbol is

    C_s = log2 m = log2 n      ...(8.42)

8.12.3.4. Binary Symmetric Channel (BSC)
For the binary symmetric channel (BSC), the mutual information is

    I(X;Y) = H(Y) + p log2 p + (1-p) log2 (1-p)      ...(8.43)

and the channel capacity per symbol will be

    C_s = 1 + p log2 p + (1-p) log2 (1-p)      ...(8.44)

EXAMPLE 8.29. Verify the following expression: C_s = log2 m, where C_s is the channel capacity of a lossless channel and m is the number of symbols in X.
Solution: For a lossless channel, we have H(X|Y) = 0. Then, by equation (8.30), we have
    I(X;Y) = H(X) - H(X|Y) = H(X)
Hence, by equation (8.35), we have
    C_s = max over {P(x_i)} of I(X;Y) = max over {P(x_i)} of H(X) = log2 m
Hence proved.

For an additive white Gaussian noise (AWGN) channel of bandwidth B, the channel capacity per second becomes

    C = 2B x C_s = B log2 (1 + S/N)   b/s      ...(8.50)

This relation is known as the Shannon-Hartley law.

Important Point: The Shannon-Hartley law underscores the fact that we can exchange increased bandwidth for decreased signal power for a system with a given capacity C.

CHANNEL CAPACITY: A DETAILED STUDY

It is known that the bandwidth and the noise power place a restriction on the rate of information that can be transmitted by a channel. It may be shown that in a channel which is disturbed by white Gaussian noise, one can transmit information at a rate of C bits per second, where C is the channel capacity and is expressed as

    C = B log2 (1 + S/N)      ...(8.51)

In this expression,
    B = channel bandwidth in Hz
    S = signal power
    N = noise power

It may be noted that the expression in equation (8.51) for channel capacity is valid for white Gaussian noise. However, for other types of noise, the expression is modified.
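Equation (8.51) can be evaluated directly. In the sketch below, the 3.1 kHz bandwidth and 30 dB signal-to-noise ratio are assumed example values for a voice-grade line, not figures taken from the text.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley law: C = B log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)                  # 30 dB expressed as a linear power ratio (1000)
print(shannon_capacity(3100, snr))     # about 30,900 bits/sec
```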
Proof: Let us present a proof of the channel capacity formula based upon the assumption that if a signal is mixed with noise, the signal amplitude can be recognized only within the root mean square (rms) noise voltage. In other words, we can say that the uncertainty in recognizing the exact signal amplitude is equal to the rms noise voltage. Again, let us assume that the average signal power and the noise power are S watts and N watts respectively. This means that the rms value of the received signal is sqrt(S + N) volts and the rms value of the noise voltage is sqrt(N) volts. Now, we have to distinguish the received signal of amplitude sqrt(S + N) volts in the presence of the noise amplitude sqrt(N) volts. As a matter of fact, input signal variations of less than sqrt(N) volts will not be distinguished at the receiver end. Therefore, the number of distinct levels that can be distinguished without error can be expressed as

    M = sqrt(S + N) / sqrt(N) = sqrt(1 + S/N)      ...(8.52)

Thus, equation (8.52) expresses the maximum value of M. Now, the maximum amount of information carried by each pulse having sqrt(1 + S/N) distinct levels is given by

    I = log2 sqrt(1 + S/N) = (1/2) log2 (1 + S/N)   bits      ...(8.53)

8.20. THE SOURCE CODING

(i) Definition
A conversion of the output of a discrete memoryless source (DMS) into a sequence of binary symbols (i.e., a binary codeword) is called source coding. The device that performs this conversion is called the source encoder. Figure 8.19 shows a source encoder.

[Fig. 8.19. Block diagram for source coding]

(ii) Objective of Source Coding
An objective of source coding is to minimize the average bit rate required for representation of the source by reducing the redundancy of the information source.

8.20.1. Few Terms Related to the Source Coding Process
In this subsection, let us study the following terms which are related to source coding:
(i) Codeword length
(ii) Average codeword length
(iii) Code efficiency
(iv) Code redundancy

(i) Codeword Length
Let X be a DMS with finite entropy H(X) and an alphabet {x1, ..., xm} with corresponding probabilities of occurrence P(x_i), i = 1, ..., m. Let the binary codeword assigned to symbol x_i by the encoder have length n_i, measured in bits. The length of a codeword is the number of binary digits in the codeword.

(ii) Average Codeword Length
The average codeword length L, per source symbol, is given by

    L = sum over i of P(x_i) n_i

The parameter L represents the average number of bits per source symbol used in the source coding process.

(iii) Code Efficiency
The code efficiency eta is defined as

    eta = L_min / L

where L_min is the minimum possible value of L. When eta approaches unity, the code is said to be efficient.

(iv) Code Redundancy
The code redundancy gamma is defined as

    gamma = 1 - eta

8.20.2. The Source Coding Theorem
The source coding theorem states that for a DMS X with entropy H(X), the average codeword length L per symbol is bounded as

    L >= H(X)      ...(8.74)

and, further, L can be made as close to H(X) as desired for some suitably chosen code. Thus, with L_min = H(X), the code efficiency can be rewritten as

    eta = H(X) / L      ...(8.75)
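The quantities L, H(X) and eta defined above can be computed mechanically. The sketch below is illustrative only; the source probabilities 1/2, 1/4, 1/8, 1/8 and the codeword lengths 1, 2, 3, 3 are assumed for the example.

```python
import math

def entropy(probs):
    """H(X) = -sum_i P(x_i) log2 P(x_i), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_length(probs, lengths):
    """L = sum_i P(x_i) n_i, the average codeword length in bits per symbol."""
    return sum(p * n for p, n in zip(probs, lengths))

probs = (1/2, 1/4, 1/8, 1/8)        # assumed source statistics
lengths = (1, 2, 3, 3)              # e.g. the binary codewords 0, 10, 110, 111
H = entropy(probs)                  # 1.75 bits/symbol
L = average_length(probs, lengths)  # 1.75 bits/symbol
print(H, L, H / L)                  # efficiency eta = H(X)/L = 1.0, i.e. 100 percent
```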
8.20.3. Classification of Codes
Classification of codes is best illustrated by an example. Let us consider Table 8.2, where a source of size 4 has been encoded in binary codes with symbols 0 and 1.

Table 8.2. Binary Codes
    x_i    Code 1   Code 2   Code 3   Code 4   Code 5   Code 6
    x1     00       00       0        0        0        1
    x2     01       01       1        10       01       01
    x3     00       10       00       110      011      001
    x4     11       11       11       111      0111     0001

8.20.3.1. Fixed-Length Codes
A fixed-length code is one whose codeword length is fixed. Code 1 and Code 2 of Table 8.2 are fixed-length codes with length 2.

8.20.3.2. Variable-Length Codes
A variable-length code is one whose codeword length is not fixed. All codes of Table 8.2 except Codes 1 and 2 are variable-length codes.

8.20.3.3. Distinct Codes
A code is distinct if each codeword is distinguishable from the other codewords. All codes of Table 8.2 except Code 1 are distinct codes (notice the codewords for x1 and x3 in Code 1).

8.20.3.4. Prefix-Free Codes
A code in which no codeword can be formed by adding code symbols to another codeword is called a prefix-free code. Thus, in a prefix-free code, no codeword is a prefix of another. Codes 2, 4, and 6 of Table 8.2 are prefix-free codes.

8.20.3.5. Uniquely Decodable Codes
A distinct code is uniquely decodable if the original source sequence can be reconstructed perfectly from the encoded binary sequence. It may be noted that Code 3 of Table 8.2 is not a uniquely decodable code. For example, the binary sequence 1001 may correspond to the source sequence x2 x3 x2 or to x2 x1 x1 x2. A sufficient condition to ensure that a code is uniquely decodable is that no codeword is a prefix of another. Thus, the prefix-free Codes 2, 4, and 6 are uniquely decodable. Note that the prefix-free condition is not a necessary condition for unique decodability. For example, Code 5 of Table 8.2 does not satisfy the prefix-free condition, and yet it is uniquely decodable, since the bit 0 indicates the beginning of each codeword of the code.

8.20.3.6. Instantaneous Codes
A uniquely decodable code is called an instantaneous code if the end of any codeword is recognizable without examining subsequent code symbols. Instantaneous codes have the property previously mentioned that no codeword is a prefix of another codeword. For this reason, prefix-free codes are sometimes known as instantaneous codes.

Important Point: Note that equation (ii) implies the following principle: symbols which occur with high probability should be assigned shorter codewords than symbols which occur with low probability.

EXAMPLE 8.42. Consider a DMS X with symbols x_i, i = 1, 2, ..., m, and corresponding probabilities P(x_i) = P_i. Let n_i be the length of the codeword assigned to symbol x_i such that

    log2 (1/P_i) <= n_i < log2 (1/P_i) + 1      ...(i)

Show that this relationship satisfies the Kraft inequality, and find the bound on K.
Solution: Equation (i) can be rewritten as

    -log2 P_i <= n_i < -log2 P_i + 1

or

    log2 P_i >= -n_i > log2 P_i - 1      ...(ii)

Then

    P_i >= 2^(-n_i) > (1/2) P_i      ...(iii)

Summing over all i, we obtain

    1 = sum over i of P_i >= sum over i of 2^(-n_i) = K > (1/2) sum over i of P_i = 1/2

This indicates that the Kraft inequality is satisfied, and the bound on K is

    1/2 < K <= 1      Ans.

IMPORTANT SOLVED EXAMPLES

EXAMPLE 8.44. A DMS X has four symbols x1, x2, x3, and x4 with P(x1) = 1/2, P(x2) = 1/4, and P(x3) = P(x4) = 1/8. Construct a Shannon-Fano code for X; show that this code has the optimum property that n_i = I(x_i) and that the code efficiency is 100 percent.
Solution: The Shannon-Fano code is constructed as follows (see Table 8.8).

Table 8.8.
    x_i    P(x_i)    Step 1    Step 2    Step 3    Code
    x1     1/2       0                             0
    x2     1/4       1         0                   10
    x3     1/8       1         1         0         110
    x4     1/8       1         1         1         111

We know that
    H(X) = sum of P(x_i) log2 (1/P(x_i)) = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75
    L = sum of P(x_i) n_i = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75
Also, I(x1) = 1 = n1, I(x2) = 2 = n2, and I(x3) = I(x4) = 3 = n3 = n4, so that n_i = I(x_i). The code efficiency is
    eta = H(X)/L = 1.75/1.75 = 1 = 100%      Ans.

EXAMPLE 8.45. A DMS X has five equally likely symbols.
(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the code.
(ii) Construct another Shannon-Fano code and compare the results.
(iii) Repeat for the Huffman code and compare the results.
Solution: (i) A Shannon-Fano code [constructed by choosing two approximately equiprobable (0.4 versus 0.6) sets] is given in Table 8.9.

Table 8.9.
    x_i    P(x_i)    Step 1    Step 2    Step 3    Code
    x1     0.2       0         0                   00
    x2     0.2       0         1                   01
    x3     0.2       1         0                   10
    x4     0.2       1         1         0         110
    x5     0.2       1         1         1         111

    H(X) = sum of P(x_i) log2 (1/P(x_i)) = 5(-0.2 log2 0.2) = 2.32
    L = sum of P(x_i) n_i = 0.2(2 + 2 + 2 + 3 + 3) = 2.4
The efficiency eta is
    eta = H(X)/L = 2.32/2.4 = 0.967 = 96.7%      Ans.

(ii) Another Shannon-Fano code [constructed by choosing another two approximately equiprobable (0.6 versus 0.4) sets] is given in Table 8.10.

Table 8.10.
    x_i    P(x_i)    Step 1    Step 2    Step 3    Code
    x1     0.2       0         0                   00
    x2     0.2       0         1         0         010
    x3     0.2       0         1         1         011
    x4     0.2       1         0                   10
    x5     0.2       1         1                   11

    L = sum of P(x_i) n_i = 0.2(2 + 3 + 3 + 2 + 2) = 2.4
Since the average codeword length is the same as in part (i), the efficiency of this code is also 96.7%.
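The bound obtained in Example 8.42 can be checked numerically. The sketch below is an illustration with our own function names; it chooses n_i = ceil(log2(1/P_i)) for the equally likely source of Example 8.45 and evaluates the Kraft sum K.

```python
import math

def kraft_sum(lengths):
    """K = sum_i 2^(-n_i); K <= 1 is the Kraft inequality for a binary code."""
    return sum(2.0 ** (-n) for n in lengths)

probs = (0.2, 0.2, 0.2, 0.2, 0.2)                        # five equally likely symbols
lengths = [math.ceil(math.log2(1 / p)) for p in probs]   # n_i = ceil(log2(1/P_i))
print(lengths, kraft_sum(lengths))                       # [3, 3, 3, 3, 3], K = 0.625
# 1/2 < K <= 1, as shown in Example 8.42
```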
EXAMPLE 8.46. A DMS X has five symbols x1, x2, x3, x4, and x5 with P(x1) = 0.4, P(x2) = 0.19, P(x3) = 0.16, P(x4) = 0.15, and P(x5) = 0.1.
(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the code.
(ii) Repeat for the Huffman code and compare the results.
Solution: (i) The Shannon-Fano code is constructed as follows (see Table 8.12).

Table 8.12.
    x_i    P(x_i)    Step 1    Step 2    Step 3    Code
    x1     0.4       0         0                   00
    x2     0.19      0         1                   01
    x3     0.16      1         0                   10
    x4     0.15      1         1         0         110
    x5     0.1       1         1         1         111

    H(X) = -sum of P(x_i) log2 P(x_i) = 2.15
    L = sum of P(x_i) n_i = 0.4(2) + 0.19(2) + 0.16(2) + 0.15(3) + 0.1(3) = 2.25
    eta = H(X)/L = 2.15/2.25 = 0.956 = 95.6%      Ans.

(ii) The Huffman code is constructed as follows (see Table 8.13).

Table 8.13.
    x_i    P(x_i)    Code
    x1     0.4       1
    x2     0.19      000
    x3     0.16      001
    x4     0.15      010
    x5     0.1       011

    L = sum of P(x_i) n_i = 0.4(1) + (0.19 + 0.16 + 0.15 + 0.1)(3) = 2.2
    eta = H(X)/L = 2.15/2.2 = 0.977 = 97.7%      Ans.
The average codeword length of the Huffman code is shorter than that of the Shannon-Fano code, and thus its efficiency is higher.

EXAMPLE 8.47. Determine the Huffman code for the following messages with their probabilities as given:
    Message:      x1     x2     x3     x4     x5     x6     x7
    Probability:  0.05   0.15   0.2    0.05   0.15   0.3    0.1
Solution: Arranging the messages in decreasing order of probability and grouping the two least probable messages at each step yields the following Huffman code:

    Message    Probability    Code    No. of bits in code
    x1         0.05           1110    4
    x2         0.15           010     3
    x3         0.2            10      2
    x4         0.05           1111    4
    x5         0.15           011     3
    x6         0.3            00      2
    x7         0.1            110     3

Therefore, the average length L is given by
    L = sum of n_i P(x_i) = 4(0.05 + 0.05) + 3(0.15 + 0.15 + 0.1) + 2(0.2 + 0.3) = 2.6 bits      Ans.
The entropy H(X) is given by
    H(X) = sum of P(x_i) log2 (1/P(x_i))
         = 0.3 log2 (1/0.3) + 0.2 log2 (1/0.2) + 0.15 log2 (1/0.15) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1) + 0.05 log2 (1/0.05) + 0.05 log2 (1/0.05)
         = 2.57 bits
Therefore, the code efficiency is
    eta = H(X) / (L log2 M) = 2.57 / (2.6 log2 2) = 2.57/2.6 = 0.9885 = 98.85%      Ans.
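The Huffman construction of Example 8.47 can be reproduced with a priority queue. The sketch below is an illustrative implementation (not the book's tabular procedure); it returns the codeword lengths and confirms the average length of 2.6 bits/message.

```python
import heapq
from itertools import count

def huffman_code_lengths(probs):
    """Return {symbol index: codeword length} for a binary Huffman code."""
    tiebreak = count()                      # avoids comparing dicts when probabilities tie
    heap = [(p, next(tiebreak), {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)     # two least probable subtrees
        p2, _, d2 = heapq.heappop(heap)
        merged = {k: v + 1 for k, v in {**d1, **d2}.items()}  # every leaf gains one bit
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

probs = [0.05, 0.15, 0.2, 0.05, 0.15, 0.3, 0.1]   # x1 .. x7 of Example 8.47
lengths = huffman_code_lengths(probs)
L = sum(probs[i] * n for i, n in lengths.items())
print(sorted(lengths.items()))   # lengths 4, 3, 2, 4, 3, 2, 3 for x1 .. x7
print(L)                         # average length = 2.6 bits/message (up to rounding)
```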
