


Chapter 3 : Information Theory

Section 3.5 :

Ex. 3.5.3 : Consider a telegraph source having two symbols, dot and dash. The dot duration is

0.2 seconds, and the dash duration is 3 times the dot duration. The probability of the dot occurring is twice that of the dash, and the time between symbols is 0.2 seconds. Calculate the information rate of the telegraph source. .Page No. 3-14.

Soln. :

Given that : 1. Dot duration : 0.2 sec.

2. Dash duration : 3 × 0.2 = 0.6 sec.

3. P (dot) = 2 P (dash).

4. Space between symbols is 0.2 sec.

Information rate = ?

1. Probabilities of dots and dashes :

Let the probability of a dash be “P”. Therefore the probability of a dot will be “2P”. The total probability of transmitting dots and dashes is equal to 1.

∴ P (dot) + P (dash) = 1

∴ 2P + P = 1, i.e. P = 1/3

∴ Probability of dash = 1/3

and probability of dot = 2/3 …(1)

2. Average information H (X) per symbol :

H (X) = P (dot) · log2 [ 1/P (dot) ] + P (dash) · log2 [ 1/P (dash) ]

∴ H (X) = (2/3) log2 [ 3/2 ] + (1/3) log2 [ 3 ] = 0.3899 + 0.5283 = 0.9182 bits/symbol.

3. Symbol rate (Number of symbols/sec.) :

The total average time per symbol can be calculated as follows :

Average symbol time Ts = [ Tdot × P (dot) ] + [ Tdash × P (dash) ] + Tspace

∴ Ts = [ 0.2 × 2/3 ] + [ 0.6 × 1/3 ] + 0.2 = 0.5333 sec./symbol.

Hence the average rate of symbol transmission is given by :

r = 1/Ts = 1.875 symbols/sec.

4. Information rate (R) :

R = r × H (X) = 1.875 × 0.9182 = 1.72 bits/sec. ...Ans.
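As a quick check, a short Python sketch of the same calculation, using only the probabilities and durations given above:

```python
from math import log2

# Probabilities derived above: P(dash) = 1/3, P(dot) = 2/3
p_dot, p_dash = 2 / 3, 1 / 3

# Entropy per symbol
H = p_dot * log2(1 / p_dot) + p_dash * log2(1 / p_dash)

# Average time per symbol: weighted symbol durations plus the 0.2 s space
Ts = 0.2 * p_dot + 0.6 * p_dash + 0.2          # seconds/symbol
r = 1 / Ts                                     # symbols/second
R = r * H                                      # information rate, bits/second

print(f"H  = {H:.4f} bits/symbol")             # ~0.9183
print(f"Ts = {Ts:.4f} s, r = {r:.3f} sym/s")   # ~0.5333 s, 1.875 sym/s
print(f"R  = {R:.3f} bits/s")                  # ~1.72 bits/s
```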

Ex. 3.5.4 : The voice signal in a PCM system is quantized in 16 levels with the following

probabilities :

P1 = P2 = P3 = P4 = 0.1, P5 = P6 = P7 = P8 = 0.05

P9 = P10 = P11 = P12 = 0.075, P13 = P14 = P15 = P16 = 0.025

Calculate the entropy and information rate. Assume fm = 3 kHz. .Page No. 3-15

Soln. :

It is given that,

1. The number of levels = 16. Therefore number of messages = 16.

2. fm = 3 kHz.

(a) To find the entropy of the source :

The entropy is defined as,

H = Σ pk log2 (1/pk) ...(1)

As M = 16, Equation (1) becomes,

H = Σ (k = 1 to 16) pk log2 (1/pk)

= 4 [ 0.1 log2 (1/0.1) ] + 4 [ 0.05 log2 (1/0.05) ]

+ 4 [ 0.075 log2 (1/0.075) ] + 4 [ 0.025 log2 (1/0.025) ]

∴ H = 0.4 log2 (10) + 0.2 log2 (20) + 0.3 log2 (13.33) + 0.1 log2 (40)

= 1.3288 + 0.8644 + 1.1211 + 0.5322

∴ H = 3.85 bits/message ...(2) ...Ans.

(b) To find the message rate (r) :

The minimum rate of sampling is Nyquist rate.

Therefore fs = 2 × fm

= 2 × 3 kHz = 6 kHz ...(3)

Hence there are 6000 samples/sec. As each sample is converted to one of the 16 levels, there are 6000 messages/sec.

∴ Message rate r = 6000 messages/sec ...(4)

(c) To find the information rate (R) :

R = r × H = 6000 × 3.85 = 23100 bits/sec ...Ans.
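A minimal Python sketch of the same steps, building the probability list from the four groups given in the problem:

```python
from math import log2

# Quantizer level probabilities as given (four groups of four levels)
probs = [0.1] * 4 + [0.05] * 4 + [0.075] * 4 + [0.025] * 4
assert abs(sum(probs) - 1.0) < 1e-9

H = sum(p * log2(1 / p) for p in probs)   # entropy, bits/message (~3.85)
fs = 2 * 3000                             # Nyquist sampling rate for fm = 3 kHz
R = fs * H                                # information rate (~23.1 kbits/s)
print(H, R)
```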

Ex. 3.5.5 : A message source generates one of four messages randomly every microsecond. The probabilities of these messages are 0.4, 0.3, 0.2 and 0.1. Each emitted message is independent of other messages in the sequence :

1. What is the source entropy ?

2. What is the rate of information generated by this source in bits per second ?

.Page No. 3-15

Soln. :

It is given that,

1. Number of messages, M = 4, let us denote them by m1, m2, m3 and m4.

2. Their probabilities are p1 = 0.4, p2 = 0.3, p3 = 0.2 and p4 = 0.1.

3. One message is transmitted per microsecond.

∴ Message transmission rate r = 1 / (1 × 10⁻⁶ sec) = 1 × 10⁶ messages/sec.

(a) To obtain the source entropy (H) :

H = Σ pk log2 ( 1/pk )

∴ H = p1 log2 ( 1/p1 ) + p2 log2 ( 1/p2 ) + p3 log2 ( 1/p3 ) + p4 log2 ( 1/p4 )

= 0.4 log2 ( 1/0.4 ) + 0.3 log2 ( 1/0.3 ) + 0.2 log2 ( 1/0.2 ) + 0.1 log2 ( 1/0.1 )

∴ H = 1.846 bits/message ...Ans.

(b) To obtain the information rate (R) :

R = H × r = 1.846 × 1 × 10⁶ = 1.846 Mbits/sec ...Ans.

Ex. 3.5.6 : A source consists of 4 letters A, B, C and D. For transmission each letter is coded into a sequence of two binary pulses. A is represented by 00, B by 01, C by 10 and D by 11. The probability of occurrence of each letter is P (A) = 1/5, P (B) = 1/4, P (C) = 1/4 and

P (D) = 3/10. Determine the entropy of the source and average rate of transmission of information. .Page No. 3-15

Soln. : The given data can be summarised as shown in the following table :

|Message |Probability |Code |

|A |1/5 |0 0 |

|B |1/4 |0 1 |

|C |1/4 |1 0 |

|D |3/10 |1 1 |

Assumption : Let us assume the message transmission rate to be r = 4000 messages/sec.

(a) To determine the source entropy :

H = (1/5) log2 (5) + (1/4) log2 (4) + (1/4) log2 (4) + (3/10) log2 (10/3)

( H = 1.9855 bits/message ...Ans.

(b) To determine the information rate :

R = r × H = [4000 messages/sec] × [1.9855 bits/message]

R = 7942 bits/sec ...Ans.

(c) Maximum possible information rate :

Number of messages/sec = 4000

But here the number of binary digits/message = 2

∴ Number of binary digits (binits)/sec = 4000 × 2 = 8000 binits/sec.

We know that each binit can convey a maximum average information of 1 bit.

∴ Hmax = 1 bit/binit

∴ Maximum rate of information transmission = [8000 binits/sec] × [1 bit/binit]

= 8000 bits/sec ...Ans.
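The entropy, the information rate and the fixed-length-code ceiling can be checked with a short Python sketch (the 4000 messages/sec rate is the assumption stated above):

```python
from math import log2

probs = {"A": 1 / 5, "B": 1 / 4, "C": 1 / 4, "D": 3 / 10}
r = 4000                                            # assumed messages/second

H = sum(p * log2(1 / p) for p in probs.values())    # ~1.9855 bits/message
R = r * H                                           # ~7942 bits/second

binit_rate = r * 2          # each letter is coded into 2 binary pulses
R_max = binit_rate * 1      # at most 1 bit of information per binit
print(H, R, R_max)          # ~1.9855, ~7942, 8000
```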

Section 3.6 :

Ex. 3.6.3 : A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities

p ( x1 ) = 0.4, p ( x2 ) = 0.19, p ( x3 ) = 0.16, p ( x4 ) = 0.14 and p ( x5 ) = 0.11. Construct the Shannon-Fano code for this source. Calculate the average code word length and coding efficiency of the source. .Page No. 3-21

Soln. : Follow the steps given below to obtain the Shannon-Fano code.

Step 1 : List the source symbols in the order of decreasing probability.

Step 2 : Partition the set into two sets that are as close to being equiprobable as possible and assign

0 to the upper set and 1 to the lower set.

Step 3 : Continue this process, each time partitioning the sets with as nearly equal probabilities as possible until further partitioning is not possible.

(a) The Shannon-Fano codes are as given in Table P. 3.6.3.

Table P. 3.6.3 : Shannon-Fano codes

|Symbols |Probability |Step 1 |Step 2 |Step 3 |Code word |

|x1 |0.4 |0 |0 (Stop here) | |0 0 |

|x2 |0.19 |0 |1 (Stop here) | |0 1 |

|x3 |0.16 |1 |0 (Stop here) | |1 0 |

|x4 |0.14 |1 |1 |0 (Stop here) |1 1 0 |

|x5 |0.11 |1 |1 |1 (Stop here) |1 1 1 |

(b) Average code word length (L) :

The average code word length is given by :

L = Σ pk × (length of the code word for xk in bits)

= ( 0.4 × 2 ) + ( 0.19 × 2 ) + ( 0.16 × 2 ) + ( 0.14 × 3 ) + ( 0.11 × 3 )

= 2.25 bits/message

(c) Entropy of the source (H) :

H = Σ pk log2 ( 1/pk )

= 0.4 log2 ( 1/0.4 ) + 0.19 log2 ( 1/0.19 ) + 0.16 log2 ( 1/0.16 )

+ 0.14 log2 ( 1/0.14 ) + 0.11 log2 ( 1/0.11 ) = 2.15 bits/symbol

(d) Coding efficiency (η) :

η = H / L = 2.15 / 2.25 = 0.9556, i.e. about 95.6 %.
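One way to mechanise the partitioning procedure of Steps 1 to 3 is sketched below in Python; the split rule used here (choosing the cut that makes the two sets as close to equiprobable as possible) is an assumption about how ties are resolved, and it reproduces the codes in the table above.

```python
from math import log2

def shannon_fano(symbols):
    """symbols: list of (name, probability) pairs, already sorted in
    decreasing order of probability. Returns {name: code string}."""
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        # choose the split point that keeps the two parts nearly equiprobable
        best_k, best_diff = 1, float("inf")
        for k in range(1, len(group)):
            upper_sum = sum(p for _, p in group[:k])
            diff = abs(2 * upper_sum - total)
            if diff < best_diff:
                best_k, best_diff = k, diff
        upper, lower = group[:best_k], group[best_k:]
        for name, _ in upper:
            codes[name] += "0"      # 0 for the upper set
        for name, _ in lower:
            codes[name] += "1"      # 1 for the lower set
        split(upper)
        split(lower)

    split(symbols)
    return codes

source = [("x1", 0.4), ("x2", 0.19), ("x3", 0.16), ("x4", 0.14), ("x5", 0.11)]
codes = shannon_fano(source)                   # {'x1': '00', 'x2': '01', 'x3': '10', ...}
L = sum(p * len(codes[s]) for s, p in source)  # average length, 2.25 bits/symbol
H = sum(p * log2(1 / p) for _, p in source)    # entropy, ~2.15 bits/symbol
print(codes, L, H, H / L)                      # efficiency ~0.956
```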

Ex. 3.6.6 : A discrete memoryless source has an alphabet of seven symbols with probabilities for its output as described in Table P. 3.6.6(a). .Page No. 3-25

Table P. 3.6.6(a)

|Symbol |Probability |Code word |Code length |

|S0 |0.25 |10 |2 bits |

|S1 |0.25 |11 |2 bits |

|S2 |0.125 |001 |3 bits |

|S3 |0.125 |010 |3 bits |

|S4 |0.125 |011 |3 bits |

|S5 |0.0625 |0000 |4 bits |

|S6 |0.0625 |0001 |4 bits |

To compute the efficiency :

1. The average code length L = Σ pk × (length of the code word for Sk in bits)

From the table above,

L = ( 0.25 × 2 ) + ( 0.25 × 2 ) + ( 0.125 × 3 ) × 3 + ( 0.0625 × 4 ) × 2

∴ L = 2.625 bits/symbol

2. The average information per message H = Σ pk log2 ( 1/pk )

∴ H = [ 0.25 log2 ( 4 ) ] × 2 + [ 0.125 log2 ( 8 ) ] × 3 + [ 0.0625 log2 ( 16 ) ] × 2

= [ 0.25 × 2 × 2 ] + [ 0.125 × 3 × 3 ] + [ 0.0625 × 4 × 2 ]

∴ H = 2.625 bits/message.

3. Code efficiency η = (H / L) × 100 = (2.625 / 2.625) × 100

∴ η = 100 %

Note : As the average information per symbol (H) is equal to the average code length (L), the code efficiency is 100%.
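The 100 % result can be confirmed numerically; because every probability here is a power of 1/2 (a dyadic distribution), the code lengths exactly equal log2 (1/pk):

```python
from math import log2

probs   = [0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625]
lengths = [2, 2, 3, 3, 3, 4, 4]          # code word lengths from the table above

L = sum(p * n for p, n in zip(probs, lengths))   # 2.625 bits/symbol
H = sum(p * log2(1 / p) for p in probs)          # 2.625 bits/symbol
print(L, H, H / L)   # efficiency is exactly 1.0 since every pk is a power of 1/2
```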

Section 3.11 :

Ex. 3.11.5 : Calculate differential entropy H (X) of the uniformly distributed random variable X with probability density function.

fX (x) = 1/a for 0 ≤ x ≤ a

= 0 elsewhere

for 1. a = 1 2. a = 2 3. a = 1/2. .Page No. 3-49.

Soln. :

The uniform PDF of the random variable X is as shown in Fig. P. 3.11.5.

Fig. P. 3.11.5

1. The average amount of information per sample value of x (t) is measured by,

H (X) = ∫ fX (x) · log2 [ 1/fX (x) ] dx bits/sample …(1)

The entropy H (X) defined by the expression above is called the differential entropy of X.

2. Substituting the value of fX (x) in the expression for H (X) we get,

H (X) = ∫₀^a (1/a) · log2 (a) dx = log2 (a) ...(2)

(a) Substitute a = 1 to get, H (X) = ∫₀^1 1 · log2 (1) dx = 0 ...Ans.

(b) Substitute a = 2 to get, H (X) = ∫₀^2 (1/2) · log2 (2) dx = (1/2) × 2 = 1 ...Ans.

(c) Substitute a = 1/2 to get, H (X) = ∫₀^(1/2) 2 · log2 (1/2) dx = 2 × log2 (1/2) × (1/2) = – 1 ...Ans.

These are the values of differential entropy for various values of a.
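A small numerical sketch (a plain Riemann sum, just as a sanity check) reproduces the analytic result H (X) = log2 (a):

```python
from math import log2

def uniform_diff_entropy(a, n=100_000):
    """Riemann-sum approximation of the integral of f(x)*log2(1/f(x)) over (0, a)
    for the uniform density f = 1/a; analytically this equals log2(a)."""
    f = 1.0 / a
    dx = a / n
    return sum(f * log2(1 / f) * dx for _ in range(n))

for a in (1.0, 2.0, 0.5):
    print(a, round(uniform_diff_entropy(a), 6), log2(a))   # 0, 1 and -1 bits
```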

Ex. 3.11.6 : A discrete source transmits messages x1, x2, x3 with probabilities p ( x1 ) = 0.3,

p ( x2 ) = 0.25, p ( x3 ) = 0.45. The source is connected to the channel whose conditional probability matrix is

| | | |y1 |y2 |y3 |

| | |x1 |0.9 |0.1 |0 |

|P (Y/X) |= |x2 |0 |0.8 |0.2 |

| | |x3 |0 |0.3 |0.7 |

Calculate all the entropies and mutual information with this channel. .Page No. 3-49

Soln. :

Steps to be followed :

Step 1 : Obtain the joint probability matrix P (X, Y).

Step 2 : Obtain the probabilities p (y1), p (y2), p (y3).

Step 3 : Obtain the conditional probability matrix P (X/Y)

Step 4 : Obtain the marginal densities H (X) and H (Y).

Step 5 : Calculate the conditional entropy H (X/Y).

Step 6 : Calculate the joint entropy H (X , Y).

Step 7 : Calculate the mutual information I (X , Y).

Step 1 : Obtain the joint probability matrix P (X, Y) :

The given matrix P (Y/X) is the conditional probability matrix. We can obtain the joint probability matrix P (X , Y) as :

P (X, Y) = P [ Y/X ] · P (X)

| | | |y1 |y2 |y3 |

| | |x1 |0.9 × 0.3 |0.1 × 0.3 |0 |

|∴ P (X, Y) |= |x2 |0 |0.8 × 0.25 |0.2 × 0.25 |

| | |x3 |0 |0.3 × 0.45 |0.7 × 0.45 |

| | | |y1 |y2 |y3 |

| | |x1 |0.27 |0.03 |0 |

|∴ P (X, Y) |= |x2 |0 |0.2 |0.05 | ...(1)

| | |x3 |0 |0.135 |0.315 |

Step 2 : Obtain the probabilities p (y1), p (y2) and p (y3) :

The probabilities p ( y1 ), p ( y2 ) and p ( y3 ) can be obtained by adding the column entries of

P (X , Y) matrix of Equation (1).

∴ p ( y1 ) = 0.27 + 0 + 0 = 0.27

p ( y2 ) = 0.03 + 0.2 + 0.135 = 0.365

p ( y3 ) = 0 + 0.05 + 0.315 = 0.365

Step 3 : Obtain the conditional probability matrix P (X/Y) :

The conditional probability matrix P (X/Y) can be obtained by dividing the columns of the joint probability matrix P (X , Y) of Equation (1) by p (y1), p (y2) and p (y3) respectively.

| | | |y1 |y2 |y3 |

| | |x1 |0.27 / 0.27 |0.03 / 0.365 |0 |

|∴ P (X/Y) |= |x2 |0 |0.2 / 0.365 |0.05 / 0.365 |

| | |x3 |0 |0.135 / 0.365 |0.315 / 0.365 |

| | | |y1 |y2 |y3 |

| | |x1 |1 |0.0821 |0 |

|∴ P (X/Y) |= |x2 |0 |0.5479 |0.1369 | ...(2)

| | |x3 |0 |0.3698 |0.863 |

Step 4 : To obtain the marginal entropies H (X) and H (Y) :

H (X) = Σ p ( xi ) log2 [ 1/p ( xi ) ]

= p ( x1 ) log2 [ 1/p ( x1 ) ] + p ( x2 ) log2 [ 1/p ( x2 ) ] + p ( x3 ) log2 [ 1/p ( x3 )]

Substituting the values of p ( x1 ), p ( x2 ) and p ( x3 ) we get,

= 0.3 log2 (1/0.3) + 0.25 log2 (1/0.25) + 0.45 log2 (1/0.45)

= [ (0.3 × 1.7369) + (0.25 × 2) + (0.45 × 1.152) ]

∴ H (X) = [ 0.521 + 0.5 + 0.5184 ] = 1.5394 bits/message ...Ans.

Similarly H (Y) = p ( y1 ) log2 [ 1/p ( y1 ) ] + p ( y2 ) log2 [ 1/p ( y2 ) ] + p ( y3 ) log2 [ 1/p ( y3 ) ]

= 0.27 log2 [ 1/0.27 ] + 2 × 0.365 × log2 [ 1/0.365 ]

H (Y) = 0.51 + 1.0614 = 1.5714 bits/message ...Ans.

Step 5 : To obtain the conditional entropy H (X / Y) :

H (X/Y) = – Σi Σj p ( xi , yj ) log2 p ( xi/yj )

∴ H (X/Y) = – p ( x1 , y1 ) log2 p ( x1/y1 ) – p ( x1 , y2 ) log2 p ( x1/y2 ) – p ( x1 , y3 ) log2 p ( x1/y3 )

– p ( x2 , y1 ) log2 p ( x2/y1 ) – p ( x2 , y2 ) log2 p ( x2/y2 ) – p ( x2 , y3 ) log2 p ( x2/y3 )

– p ( x3 , y1 ) log2 p ( x3/y1 ) – p ( x3 , y2 ) log2 p ( x3/y2 ) – p ( x3 , y3 ) log2 p ( x3/y3 )

Refer to the joint and conditional matrices given in Fig. P. 3.11.6.

| |P (X, Y) | | | |P (X/Y) |

| |y1 |y2 |y3 | | | |y1 |y2 |y3 |

|x1 |0.27 |0.03 |0 | | |x1 |1 |0.0821 |0 |

|x2 |0 |0.2 |0.05 | | |x2 |0 |0.5479 |0.1369 |

|x3 |0 |0.135 |0.315 | | |x3 |0 |0.3698 |0.863 |

Fig. P. 3.11.6

Substituting various values from these two matrices we get,

H (X/Y) = – 0.27 log2 1 – 0.03 log2 (0.0821) – 0 – 0 – 0.2 log2 (0.5479)

– 0.05 log2 (0.1369) – 0 – 0.135 log2 (0.3698) – 0.315 log2 (0.863)

= 0 + 0.108 + 0.1736 + 0.1434 + 0.1937 + 0.0669

H (X/Y) = 0.6856 bits / message ...Ans.

Step 6 : To obtain the joint entropy H (X , Y) :

The joint entropy H (X , Y) is given by,

H (X, Y) = – Σi Σj p ( xi , yj ) · log2 p ( xi , yj )

∴ H (X, Y) = – [ p ( x1 , y1 ) log2 p ( x1 , y1 ) + p ( x1 , y2 ) log2 p ( x1 , y2 ) + p ( x1 , y3 )

log2 p ( x1 , y3) + p ( x2 , y1 ) log2 p ( x2 , y1 ) + p ( x2 , y2 ) log2 p ( x2 , y2 )

+ p ( x2 , y3 ) log2 p ( x2 , y3 ) + p ( x3 , y1 ) log2 p ( x3 , y1)

+ p ( x3 , y2 ) log2 p ( x3 , y2 ) + p ( x3 , y3 ) log2 p ( x3 , y3 ) ]

Referring to the joint matrix we get,

∴ H (X, Y) = – [ 0.27 log2 0.27 + 0.03 log2 0.03 + 0 + 0 + 0.2 log2 0.2 + 0.05 log2 0.05 + 0

+ 0.135 log2 0.135 + 0.315 log2 0.315 ]

= [ 0.51 + 0.1517 + 0.4643 + 0.216 + 0.39 + 0.5249 ]

∴ H (X, Y) = 2.2569 bits/message ...Ans.

Step 7 : To calculate the mutual information :

The mutual information is given by,

I (X , Y) = H (X) – H (X/Y) = 1.5394 – 0.6856 = 0.8538 bits ...Ans.
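All seven steps can be verified with a short Python sketch, using the source probabilities and the channel matrix P (Y/X) as reconstructed above:

```python
from math import log2

px = [0.3, 0.25, 0.45]
P_y_given_x = [[0.9, 0.1, 0.0],      # P(Y/X), as reconstructed above
               [0.0, 0.8, 0.2],
               [0.0, 0.3, 0.7]]

# Joint matrix P(X,Y): multiply row i of P(Y/X) by p(xi); p(yj) are column sums
P_xy = [[px[i] * P_y_given_x[i][j] for j in range(3)] for i in range(3)]
py = [sum(P_xy[i][j] for i in range(3)) for j in range(3)]

H_X = sum(p * log2(1 / p) for p in px)
H_Y = sum(p * log2(1 / p) for p in py)
H_XY = -sum(p * log2(p) for row in P_xy for p in row if p > 0)
H_X_given_Y = -sum(P_xy[i][j] * log2(P_xy[i][j] / py[j])
                   for i in range(3) for j in range(3) if P_xy[i][j] > 0)
I_XY = H_X - H_X_given_Y

print(H_X, H_Y, H_XY, H_X_given_Y, I_XY)
# ~1.539, 1.571, 2.257, 0.686 and 0.854 bits, matching the values above
```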

Ex. 3.11.7 : For the given channel matrix, find out the mutual information. Given that p ( x1 ) = 0.6,

p ( x2 ) = 0.3 and p ( x3 ) = 0.1. .Page No. 3-50

| | | |y1 |y2 |y3 |

| | |x1 |1 / 2 |1 / 2 |0 |

|p (y / x) |= |x2 |1 / 2 |0 |1 / 2 |

| | |x3 |0 |1 / 2 |1 / 2 |

Soln. :

Steps to be followed :

Step 1 : Obtain the joint probability matrix P (X , Y).

Step 2 : Calculate the probabilities p ( y1 ), p ( y2 ), p ( y3 ).

Step 3 : Obtain the conditional probability matrix P (X/Y).

Step 4 : Calculate the marginal densities H (X) and H (Y).

Step 5 : Calculate the conditional entropy H (X/Y).

Step 6 : Find the mutual information.

Step 1 : Obtain the joint probability matrix P (X , Y) :

We can obtain the joint probability matrix P (X , Y) as

P (X , Y) = P (Y/X) · P (X)

So multiply rows of the P (Y / X) matrix by p ( x1 ), p ( x2 ) and p ( x3 ) to get,

| | | |y1 |y2 |y3 |

| | |x1 |0.5 × 0.6 |0.5 × 0.6 |0 |

|P (X, Y) |= |x2 |0.5 × 0.3 |0 |0.5 × 0.3 |

| | |x3 |0 |0.5 × 0.1 |0.5 × 0.1 |

| | | |y1 |y2 |y3 |

| | |x1 |0.3 |0.3 |0 |

|∴ P (X, Y) |= |x2 |0.15 |0 |0.15 | ...(1)

| | |x3 |0 |0.05 |0.05 |

Step 2 : Obtain the probabilities p ( y1 ), p ( y2 ), p ( y3 ) :

These probabilities can be obtained by adding the column entries of P (X , Y) matrix of

Equation (1).

∴ p ( y1 ) = 0.3 + 0.15 + 0 = 0.45

p ( y2 ) = 0.3 + 0 + 0.05 = 0.35

p ( y3 ) = 0 + 0.15 + 0.05 = 0.20

Step 3 : Obtain the conditional probability matrix P (X/Y) :

The conditional probability matrix P (X/Y) can be obtained by dividing the columns of the joint probability matrix P (X , Y) of Equation (1) by p ( y1 ), p ( y2 ) and p ( y3 ) respectively.

| | | |y1 |y2 |y3 |

| | |x1 |0.3 / 0.45 |0.3 / 0.35 |0 |

|∴ P (X/Y) |= |x2 |0.15 / 0.45 |0 |0.15 / 0.2 |

| | |x3 |0 |0.05 / 0.35 |0.05 / 0.2 |

| | | |y1 |y2 |y3 |

| | |x1 |0.667 |0.857 |0 |

|∴ P (X/Y) |= |x2 |0.333 |0 |0.75 |

| | |x3 |0 |0.143 |0.25 |

Step 4 : Calculate the marginal entropy H (X) :

H (X) = – Σ p ( xi ) log2 p ( xi )

= – p ( x1 ) log2 p ( x1 ) – p ( x2 ) log2 p ( x2 ) – p ( x3 ) log2 p ( x3 )

= – 0.6 log2 (0.6) – 0.3 log2 (0.3) – 0.1 log2 (0.1)

= 0.4421 + 0.5210 + 0.3321

∴ H (X) = 1.2952 bits/message

Step 5 : Obtain the conditional entropy H (X/Y) :

H (X/Y) = – Σi Σj p ( xi , yj ) log2 p ( xi/yj )

= – p ( x1 , y1 ) log2 p ( x1/y1 ) – p ( x1 , y2 ) log2 p ( x1/y2 ) – p ( x1 , y3 ) log2 p ( x1/y3 )

– p ( x2 , y1 ) log2 p ( x2/y1 ) – p ( x2 , y2 ) log2 p ( x2/y2 ) – p ( x2 , y3 ) log2 p ( x2/y3 )

– p ( x3 , y1 ) log2 p ( x3/y1 ) – p ( x3 , y2 ) log2 p ( x3/y2 ) – p ( x3 , y3 ) log2 p ( x3/y3 )

Refer to the joint and conditional matrices of Fig. P. 3.11.7.

| |P (X / Y) | | | |P (X, Y) |

| |y1 |y2 |y3 | | | |y1 |y2 |y3 |

|x1 |0.667 |0.857 |0 | | |x1 |0.3 |0.3 |0 |

|x2 |0.333 |0 |0.75 | | |x2 |0.15 |0 |0.15 |

|x3 |0 |0.143 |0.25 | | |x3 |0 |0.05 |0.05 |

Fig. P. 3.11.7

Substituting various values from these two matrices we get,

H (X/Y) = – 0.3 log2 0.667 – 0.3 log2 0.857 – 0

– 0.15 log2 0.333 – 0 – 0.15 log2 0.75

– 0 – 0.05 log2 0.143 – 0.05 log2 0.25

∴ H (X/Y) = 0.1752 + 0.06678 + 0.2379 + 0.06225 + 0.1402 + 0.1

∴ H (X/Y) = 0.78233 bits/message

Step 6 : Mutual information :

I (X , Y) = H (X) – H (X/Y)

= 1.2952 – 0.78233 = 0.51287 bits ...Ans.

Ex. 3.11.8 : State the joint and conditional entropy. For a signal which is known to have a uniform density function in the range 0 ( x ( 5; find entropy H (X). If the same signal is amplified eight times, then determine H (X). .Page No. 3-50

Soln. : For the definitions of joint and conditional entropy refer to sections 3.10.1 and 3.10.2.

The uniform PDF of the random variable X is as shown in Fig. P. 3.11.8.

1. The differential entropy H (X) of the given R.V. X is given by,

H (X) = ∫ fX (x) log2 [ 1/fX (x) ] dx bits/sample

Fig. P. 3.11.8

2. Let us define the PDF fX (x). It is given that fX (x) is uniform in the range 0 ≤ x ≤ 5.

∴ Let fX (x) = k .... 0 ≤ x ≤ 5

= 0 .... elsewhere

But the area under fX (x) is always 1.

∴ ∫ fX (x) dx = 1

∴ ∫₀^5 k dx = 1

∴ k = 1/5

Hence the PDF of X is given by,

fX (x) = 1/5 .... 0 ≤ x ≤ 5

= 0 .... elsewhere

3. Substituting the value of fX (x) we get,

H (X) = ∫₀^5 (1/5) log2 (5) dx = log2 (5)

∴ H (X) = 2.322 bits/sample ...Ans.

4. Entropy after the signal is amplified 8 times :

If the same signal is amplified 8 times, Y = 8X is uniform over 0 ≤ y ≤ 40, so fY (y) = 1/40 and

H (Y) = ∫₀^40 (1/40) log2 (40) dy = log2 (40) = 5.322 bits/sample ...Ans.

Thus amplification by a factor of 8 increases the differential entropy by log2 (8) = 3 bits.
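A one-line Python check of both values, using the fact that the differential entropy of a uniform density on (0, b) is log2 (b):

```python
from math import log2

H_X = log2(5)        # original signal, uniform over (0, 5): ~2.322 bits/sample
H_Y = log2(8 * 5)    # amplified 8x -> uniform over (0, 40): ~5.322 bits/sample
print(H_X, H_Y, H_Y - H_X)   # the difference equals log2(8) = 3 bits
```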

Ex. 3.11.9 : Two binary symmetrical channels are connected in cascade as shown in Fig. P. 3.11.9.

1. Find the channel matrix of the resultant channel.

2. Find p ( z1 ) and p ( z2 ) if p ( x1 ) = 0.6 and p ( x2 ) = 0.4. .Page No. 3-50

Fig. P. 3.11.9 : Cascaded BSCs for Ex. 3.11.9

Soln. :

Steps to be followed :

Step 1 : Write the channel matrix for the individual channels as P [ Y/X ] for the first one and

P [ Z/Y ] for the second channel.

Step 2 : Obtain the channel matrix for the cascaded channel as,

P [ Z/X ] = P [ Y/X ] · P [ Z/Y ]

Step 3 : Calculate the probabilities P ( z1 ) and P ( z2 ).

1. To obtain the individual channel matrix :

The channel matrix of a BSC consists of the transition probabilities of the channel. That means the channel matrix for channel – 1 is given by,

| | | |y1 |y2 |

| | |x1 |P ( y1/x1 ) |P ( y2/x1 ) |

|P [ Y/X ] |= |x2 |P ( y1/x2 ) |P ( y2/x2 ) | ...(1)

Substituting the values we get,

| | | |y1 |y2 |

| | |x1 |0.8 |0.2 |

|P [ Y/X ] |= |x2 |0.2 |0.8 | ...(2)

Similarly the channel matrix for the second BSC is given by,

| | | |z1 |z2 |

| | |y1 |P ( z1/y1 ) |P ( z2/y1 ) |

|P [ Z/Y ] |= |y2 |P ( z1/y2 ) |P ( z2/y2 ) | ...(3)

Substituting the values we get,

| | | |z1 |z2 |

| | |y1 |0.7 |0.3 |

|P [ Z/Y ] |= |y2 |0.3 |0.7 | ...(4)

2. Channel matrix of the resultant channel :

The channel matrix of the resultant channel is given by,

| | | |z1 |z2 |

| | |x1 |P ( z1/x1 ) |P ( z2/x1 ) |

|P [ Z/X ] |= |x2 |P ( z1/x2 ) |P ( z2/x2 ) | ...(5)

The probability P ( z1/x1 ) can be expressed by referring to Fig. P. 3.11.9 as,

P ( z1/x1 ) = P ( z1/y1 ) · P ( y1/x1 ) + P ( z1/y2 ) · P ( y2/x1 ) ...(6)

Similarly we can obtain the expressions for the remaining terms in the channel matrix of the resultant channel.

| | | |z1 |z2 |

| | |x1 |P ( z1/y1 ) P ( y1/x1 ) + P ( z1/y2 ) P ( y2/x1 ) |P ( z2/y1 ) P ( y1/x1 ) + P ( z2/y2 ) P ( y2/x1 ) |

|∴ P [ Z/X ] |= |x2 |P ( z1/y1 ) P ( y1/x2 ) + P ( z1/y2 ) P ( y2/x2 ) |P ( z2/y1 ) P ( y1/x2 ) + P ( z2/y2 ) P ( y2/x2 ) | ...(7)

The elements of the channel matrix of Equation (7) can be obtained by multiplying the individual channel matrices.

∴ P (Z/X) = P (Y/X) · P (Z/Y) …(8)

| | | |z1 |z2 |

| | |x1 |(0.8 × 0.7) + (0.2 × 0.3) |(0.8 × 0.3) + (0.2 × 0.7) |

|∴ P (Z/X) |= |x2 |(0.2 × 0.7) + (0.8 × 0.3) |(0.2 × 0.3) + (0.8 × 0.7) |

| | | |z1 |z2 |

| | |x1 |0.62 |0.38 |

| |= |x2 |0.38 |0.62 | ...Ans.

This is the required resultant channel matrix.

3. To calculate P ( z1 ) and P ( z2 ) :

From Fig. P. 3.11.9 we can write the following expression,

P ( z1 ) = P ( z1/ y1 ) P ( y1 ) + P ( z1/ y2 ) · P ( y2 ) …(9)

Substituting P ( y1 ) = P ( x1 ) · P ( y1/x1 ) + P ( x2 ) · P ( y1/x2 )

= (0.6 × 0.8) + (0.4 × 0.2) = 0.56

and P ( y2 ) = P ( x1 ) · P ( y2/x1 ) + P ( x2 ) · P ( y2/x2 )

= (0.6 × 0.2) + (0.4 × 0.8) = 0.44

and P ( z1/y1 ) = 0.7 and P ( z1/y2 ) = 0.3

We get, P ( z1 ) = (0.7 × 0.56) + (0.3 × 0.44)

∴ P ( z1 ) = 0.392 + 0.132 = 0.524 ...Ans.

Similarly P ( z2 ) = P ( z2/y1 ) P ( y1 ) + P ( z2/y2 ) · P ( y2 )

= (0.3 × 0.56) + (0.7 × 0.44)

∴ P ( z2 ) = 0.476 ...Ans.
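The matrix product and the output probabilities can be checked with a short Python sketch, using the transition probabilities 0.8/0.2 and 0.7/0.3 employed above:

```python
# Individual channel matrices: first BSC has error probability 0.2, second 0.3
P_y_given_x = [[0.8, 0.2],
               [0.2, 0.8]]
P_z_given_y = [[0.7, 0.3],
               [0.3, 0.7]]

# Resultant channel: P(Z/X) = P(Y/X) . P(Z/Y)  (ordinary matrix product)
P_z_given_x = [[sum(P_y_given_x[i][k] * P_z_given_y[k][j] for k in range(2))
                for j in range(2)] for i in range(2)]

px = [0.6, 0.4]
pz = [sum(px[i] * P_z_given_x[i][j] for i in range(2)) for j in range(2)]

print(P_z_given_x)   # [[0.62, 0.38], [0.38, 0.62]]
print(pz)            # [0.524, 0.476]
```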

Ex. 3.11.10 : A binary channel matrix is given by :

| | |y1 |y2 |← outputs

|inputs ↓ |x1 |2/3 |1/3 |

| |x2 |1/10 |9/10 |

Determine H (X), H (X/Y), H (Y/X) and mutual information I (X ; Y) .Page No. 3-50

Soln. : The given channel matrix is

| | | |y1 |y2 |

| | |x1 |2/3 |1/3 |

|p (x, y) |= |x2 |1/10 |9/10 |

Step 1 : Obtain the individual probabilities :

The individual message probabilities are given by -

p ( x1 ) = 2/3 + 1/3 = 1

p ( x2 ) = 1/10 + 9/10 = 1

p ( y1 ) = 2/3 + 1/10 = 23/30

p ( y2 ) = 1/3 + 9/10 = 37/30

Step 2 : Obtain the marginal entropies H (X) and H (Y) :

H (X) = p ( x1 ) log2 [ 1/ p ( x1 ) ] + p ( x2 ) log2 [ 1/ p ( x2 ) ]

= 1 log2 (1) + 1 log2 (1)

∴ H (X) = 0

H (Y) = p ( y1 ) log2 [ 1/ p ( y1 ) ] + p ( y2 ) log2 [ 1/ p ( y2 ) ]

= (23/30) log2 [ 30/23 ] + (37/30) log2 [30/37]

H (Y) = 0.2938 – 0.3731 = – 0.0793 ≈ – 0.08

Step 3 : Obtain the joint entropy H (X, Y) :

H (X, Y) = p ( x1 , y1 ) log2 [ 1/ p ( x1 , y1 ) ] + p ( x1 , y2 ) log2 [ 1/ p ( x1 , y2 ) ]

+ p ( x2 , y1 ) log2 [ 1/ p ( x2 , y1 ) ] + p ( x2 , y2 ) log2 [ 1/ p ( x2 , y2 ) ]

∴ H (X, Y) = (2/3) log2 (3/2) + (1/3) log2 (3) + (1/10) log2 (10) + (9/10) log2 (10/9)

= 0.39 + 0.53 + 0.33 + 0.14 = 1.39 bits

Step 4 : Obtain the conditional probabilities H (X/Y) and H (Y/X) :

H (X/Y) = H (X , Y) – H (Y)

= 1.39 – (– 0.08) = 1.47 bits.

H (Y/X) = H (X , Y) – H (X)

= 1.39 – 0 = 1.39 bits.

Step 5 : Mutual information :

I (X, Y) = H (X) – H (X/Y)

= 0 – 1.47

= – 1.47 bits/message. ...Ans.

Ex. 3.11.11 : A channel has the following channel matrix :

| | | |y1 |y2 |y3 |

| | |x1 |1 – p |p |0 |

|[ P (Y/X) ] |= |x2 |0 |p |1 – p |

1. Draw the channel diagram.

2. If the source has equally likely outputs, compute the probabilities associated with

the channel outputs for P = 0.2. .Page No. 3-50

Soln. :

Part I :

1. The given matrix shows that the number of inputs is two i.e. x1 and x2 whereas the number of outputs is three i.e. y1 , y2 and y3.

2. This channel has two inputs x1 = 0 and x2 = 1 and three outputs y1 = 0, y2 = e and y3 = 1 as shown in Fig. P. 3.11.11.

Fig. P. 3.11.11 : The channel diagram

The channel diagram is as shown in Fig. P. 3.11.11. This type of channel is called the “binary erasure channel”. The output y2 = e indicates an erasure, i.e. the received symbol is in doubt and should be discarded.

Part II : Given that the sources x1 and x2 are equally likely

∴ p ( x1 ) = p ( x2 ) = 0.5

It is also given that p = 0.2.

∴ p (y) = p (x) · [ P (Y/X) ]

= [ p ( x1 ) p ( x2 ) ] · [ P (Y/X) ]

∴ p (y) = [ 0.5 0.5 ] × [ 0.8 0.2 0 ; 0 0.2 0.8 ] = [ 0.4 0.2 0.4 ]

That means p (y1) = 0.4, p ( y2 ) = 0.2 and p ( y3 ) = 0.4

These are the required values of probabilities associated with the channel outputs for p = 0.2.
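A minimal Python sketch of the same matrix product, assuming the erasure probability p = 0.2 and the channel matrix shown above:

```python
# Binary erasure channel with erasure probability p
p = 0.2
P_y_given_x = [[1 - p, p, 0.0],   # x1 -> y1, e, y3
               [0.0, p, 1 - p]]   # x2 -> y1, e, y3

px = [0.5, 0.5]                   # equally likely inputs
py = [sum(px[i] * P_y_given_x[i][j] for i in range(2)) for j in range(3)]
print(py)                         # [0.4, 0.2, 0.4]
```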

Ex. 3.11.13 : Find the mutual information and channel capacity of the channel as shown in

Fig. P. 3.11.13(a). Given that P ( x1 ) = 0.6 and P ( x2 ) = 0.4. .Page No. 3-57.

Fig. P. 3.11.13(a)

Soln. :

Given that : p ( x1 ) = 0.6, p ( x2 ) = 0.4

The conditional probabilities are,

p ( y1/x1 ) = 0.8, p ( y2/x1 ) = 0.2

p ( y1/x2 ) = 0.3 and p ( y2/x2 ) = 0.7

The mutual information can be obtained by referring to Fig. P. 3.11.13(b).

Fig. P. 3.11.13(b)

As already derived, the mutual information is given by,

I (X ; Y) = Ω [ β + (1 – α – β) p ] – p Ω (α) – (1 – p) Ω (β) ...(1)

where p = p ( x1 ), α = p ( y2/x1 ), β = p ( y1/x2 ), and Ω is called the horseshoe (binary entropy) function, given by,

Ω (q) = q log2 (1/q) + (1 – q) log2 [ 1/(1 – q) ] ...(2)

Substituting the values we get,

I (X ; Y) = Ω [ 0.3 + (1 – 0.2 – 0.3) × 0.6 ] – 0.6 Ω (0.2) – 0.4 Ω (0.3)

∴ I (X ; Y) = Ω (0.6) – 0.6 Ω (0.2) – 0.4 Ω (0.3) …(3)

Using the Equation (2) we get,

I (X ; Y) = [ 0.6 log2 (1/0.6) + 0.4 log2 (1/0.4) ] – 0.6 [ 0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ]

– 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7) ]

∴ I (X ; Y) = 0.971 – 0.433 – 0.3525 = 0.185 bits ...Ans.

Channel capacity (C) :

For the asymmetric binary channel,

C = 1 – p Ω (α) – (1 – p) Ω (β)

= 1 – 0.6 Ω (0.2) – 0.4 Ω (0.3)

= 1 – 0.6 [ 0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ] – 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7) ]

= 1 – 0.433 – 0.352

C = 0.214 bits ...Ans.
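A short Python sketch of the horseshoe function and the two expressions evaluated above (the line for C simply evaluates the expression used in this solution):

```python
from math import log2

def omega(q):
    """Binary entropy ('horseshoe') function in bits."""
    return q * log2(1 / q) + (1 - q) * log2(1 / (1 - q))

alpha, beta = 0.2, 0.3    # p(y2/x1) and p(y1/x2) from the channel diagram
p1 = 0.6                  # p(x1)

I = omega(beta + (1 - alpha - beta) * p1) - p1 * omega(alpha) - (1 - p1) * omega(beta)
C = 1 - p1 * omega(alpha) - (1 - p1) * omega(beta)   # expression used above
print(round(I, 4), round(C, 4))                      # ~0.1853 and ~0.2143
```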

Section 3.12

Ex. 3.12.3 : In a facsimile transmission of a picture, there are about [2.25 × 10⁶] picture elements per frame. For good reproduction, twelve brightness levels are necessary. Assuming all these levels to be equiprobable, calculate the channel bandwidth required to transmit one picture in every three minutes for a signal to noise power ratio of 30 dB. If the SNR requirement increases to 40 dB, calculate the new bandwidth. Explain the trade-off between bandwidth and SNR by comparing the two results. .Page No. 3-67

Soln. :

Given : Number of picture elements per frame = 2.25 × 10⁶

Number of brightness levels = 12 = M

All the twelve brightness levels are equiprobable.

Number of pictures per minute = 1/3

SNR1 = 30 dB SNR2 = 40 dB

1. Calculate the information rate :

The number of picture elements per frame is 2.25 × 10⁶ and each element can take any one of the 12 possible brightness levels.

The information rate (R) = No. of messages/sec. × Average information per message.

R = r × H ...(1)

Where r = (2.25 × 10⁶ elements) / (3 × 60 sec.) = 12500 elements/sec. ...(2)

and H = log2 M = log2 12 = 3.585 bits/element ...as all brightness levels are equiprobable ...(3)

∴ R = 12500 × 3.585

∴ R = 44.812 kbits/sec. ...(4)

2. Calculate the bandwidth B :

Shannon's channel capacity theorem states that,

R ≤ C, where C = B log2 [ 1 + (S/N) ] ...(5)

Substituting S/N = 30 dB = 1000 we get,

∴ 44.812 × 10³ ≤ B log2 [ 1 + 1000 ]

∴ B ≥ 44.812 × 10³ / log2 (1001) = 44812 / 9.967

∴ B ≥ 4.4959 kHz ...Ans.

3. BW for S/N = 40 dB :

For a signal to noise ratio of 40 dB, i.e. 10,000, let us calculate the new value of bandwidth.

∴ 44.812 × 10³ ≤ B log2 [ 1 + 10000 ]

∴ B ≥ 44.812 × 10³ / log2 (10001) = 44812 / 13.288

∴ B ≥ 3.372 kHz ...Ans.

Trade off between bandwidth and SNR : As the signal to noise ratio is increased from

30 dB to 40 dB, the bandwidth will have to be decreased.
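The same numbers fall out of a short Python sketch of the Shannon-Hartley relation, using only the figures given in the problem:

```python
from math import log2

pixels = 2.25e6                 # picture elements per frame
levels = 12                     # equiprobable brightness levels
T = 3 * 60                      # seconds available per picture

r = pixels / T                  # 12500 elements/second
H = log2(levels)                # ~3.585 bits/element
R = r * H                       # ~44.81 kbits/s

for snr_db in (30, 40):
    snr = 10 ** (snr_db / 10)
    B = R / log2(1 + snr)       # minimum bandwidth from R <= B*log2(1 + S/N)
    print(f"{snr_db} dB -> B >= {B:.0f} Hz")   # ~4496 Hz and ~3372 Hz
```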

Ex. 3.12.4 : An analog signal having bandwidth of 4 kHz is sampled at 1.25 times the Nyquist rate, with each sample quantised into one of 256 equally likely levels.

1. What is information rate of this source ?

2. Can the output of this source be transmitted without error over an AWGN channel

with bandwidth of 10 kHz and SNR of 20 dB ?

3. Find SNR required for error free transmission for part (ii).

4. Find bandwidth required for an AWGN channel for error free transmission this

source if SNR happens to be 20 dB. .Page No. 3-68

Soln. :

Given : fm = 4 kHz, fs = 1.25 × 2 × fm = 1.25 × 2 × 4 kHz = 10 kHz.

Quantization levels Q = 256 (equally likely).

1. Information rate (R) :

R = r ( H ...(1)

Where r = Number of messages/sec.

= Number of samples/sec. = 10 kHz.

and H = log2 256 = 8 bits/message ...as all the levels are equally likely

∴ R = 10 × 10³ × log2 (256) = 10 × 10³ × 8

∴ R = 80 kbits/sec. ...Ans.

2. Channel capacity (C) :

In order to answer the question asked in (ii) we have to calculate the channel capacity C.

Given :

B = 10 kHz and S/N = 20 dB = 100

∴ C = B log2 [ 1 + (S/N) ] = 10 × 10³ log2 [ 101 ]

∴ C = 66.582 kbits/sec.

For error free transmission, it is necessary that R ≤ C. But here R = 80 kb/s and C = 66.582 kb/s, i.e. R > C, hence error free transmission is not possible.

3. S/N ratio for error free transmission in part (2) :

Substituting C = R = 80 kb/s we get,

80 × 10³ = B log2 [ 1 + (S/N) ]

∴ 80 × 10³ = 10 × 10³ log2 [ 1 + (S/N) ]

∴ 8 = log2 [ 1 + (S/N) ]

∴ 256 = 1 + (S/N)

∴ S/N = 255 or 24.06 dB ...Ans.

This is the required value of the signal to noise ratio to ensure the error free transmission.

4. BW required for error free transmission :

Given :

C = 80 kb/s, S/N = 20 dB = 100

∴ C = B log2 [ 1 + (S/N) ]

∴ 80 × 10³ = B log2 [ 1 + 100 ]

∴ B = 80 × 10³ / log2 (101) ≈ 12 kHz ...Ans.
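Parts (2) to (4) amount to solving the Shannon capacity relation for each unknown in turn; a short Python sketch with the values used above:

```python
from math import log2, log10

R = 80e3                            # source information rate, bits/s
B = 10e3                            # given channel bandwidth, Hz
snr_20dB = 100                      # 20 dB as a power ratio

C = B * log2(1 + snr_20dB)          # ~66.6 kb/s, less than R -> errors unavoidable

snr_needed = 2 ** (R / B) - 1       # S/N giving C = R with B = 10 kHz -> 255
snr_needed_db = 10 * log10(snr_needed)   # ~24.06 dB
B_needed = R / log2(1 + snr_20dB)   # bandwidth giving C = R at 20 dB -> ~12.02 kHz
print(C, snr_needed, snr_needed_db, B_needed)
```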

Ex. 3.12.5 : A channel has a bandwidth of 5 kHz and a signal to noise power ratio 63. Determine the bandwidth needed if the S/N power ratio is reduced to 31. What will be the signal power required if the channel bandwidth is reduced to 3 kHz ? .Page No. 3-68

Soln. :

1. To determine the channel capacity :

It is given that B = 5 kHz and S/N = 63. Hence using the Shannon Hartley theorem the channel capacity is given by,

C = B log2 [ 1 + (S/N) ] = 5 × 10³ log2 [ 1 + 63 ]

∴ C = 30 × 10³ bits/sec ...(1)

2. To determine the new bandwidth :

The new value of S/N = 31. Assuming the channel capacity “C” to be constant we can write,

30 × 10³ = B log2 [ 1 + 31 ]

∴ B = 30 × 10³ / log2 (32) = 30 × 10³ / 5 = 6 kHz ...(2)

3. To determine the new signal power :

Given that the new bandwidth is 3 kHz. We know that noise power N = N0 B.

Let the noise power corresponding to a bandwidth of 6 kHz be N1 = 6 N0 and the noise power corresponding to the new bandwidth of 3 kHz be N2 = 3 N0.

∴ N1 / N2 = 6 N0 / 3 N0 = 2 ...(3)

The old signal to noise ratio = S1 / N1 = 31

∴ S1 = 31 N1 ...(4)

The new signal to noise ratio is S2 / N2. We do not know its value, hence let us find it out.

30 × 10³ = 3 × 10³ log2 [ 1 + (S2 / N2) ]

∴ S2 / N2 = 2¹⁰ – 1 = 1023 ...(5)

∴ S2 = 1023 N2

But from Equation (3), N2 = N1 / 2, substituting we get,

∴ S2 = 1023 N1 / 2 = 511.5 N1 ...(6)

Dividing Equation (6) by Equation (4) we get,

S2 / S1 = 511.5 / 31 = 16.5

∴ S2 = 16.5 S1 ...Ans.

Thus if the bandwidth is reduced by 50% then the signal power must be increased 16.5 times i.e. 1650% to get the same capacity.
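The power-bandwidth exchange can be reproduced with a few lines of Python, using the figures worked out above (noise power taken as proportional to bandwidth, N = N0·B):

```python
from math import log2

C = 30e3                       # capacity to be maintained, bits/s
B1, snr1 = 6e3, 31             # the 6 kHz channel needs S/N = 31
B2 = 3e3                       # reduced bandwidth

snr2 = 2 ** (C / B2) - 1       # required S/N with 3 kHz: 1023

# Compare signal powers: S = (S/N) * N0 * B, so the N0 factor cancels
power_ratio = (snr2 * B2) / (snr1 * B1)   # S2/S1 ~= 16.5
print(snr2, power_ratio)
```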

Ex. 3.12.6 : A 2 kHz channel has signal to noise ratio of 24 dB :

(a) Calculate maximum capacity of this channel.

(b) Assuming constant transmitting power, calculate maximum capacity when channel

bandwidth is : 1. halved 2. reduced to a quarter of its original value.

.Page No. 3-68

Soln. :

Data : B = 2 kHz and (S/N) = 24 dB.

The SNR should be converted from dB to power ratio.

∴ 24 = 10 log10 (S/N)

∴ S/N = 10^(2.4) = 251 ...(1)

(a) To determine the channel capacity :

C = B log2 [ 1 + (S/N) ] = 2 × 10³ log2 [ 1 + 251 ] = 2 × 10³ × 7.977

∴ C = 15.95 × 10³ bits/sec ...Ans.

(b) 1. Value of C when B is halved :

The new bandwidth B2 = 1 kHz, let the old bandwidth be denoted by B1 = 2 kHz.

We know that the noise power N = N0 B

∴ Noise power with old bandwidth = N1 = N0 B1 ...(2)

and Noise power with new bandwidth = N2 = N0 B2 ...(3)

∴ N2 / N1 = N0 B2 / N0 B1 = B2 / B1 = 1/2

∴ N2 = N1 / 2 ...(4)

As the signal power remains constant, the SNR with the new bandwidth is,

S / N2 = S / (N1 / 2) = 2 (S / N1)

But we know that S / N1 = 251 ...See Equation (1)

∴ S / N2 = 2 × 251 = 502 ...(5)

Hence the new channel capacity is given by,

C = B2 log2 [ 1 + (S / N2) ] = 1 × 10³ log2 (503)

= 1 × 10³ × 8.97

∴ C = 8.97 × 10³ bits/sec ...Ans.

2. Value of C when B is reduced to 1/4 of original value :

The Equation (4) gets modified to,

∴ N3 = N1 / 4 ...(6)

∴ S / N3 = 4 (S / N1) = 4 × 251 = 1004 ...(7)

Hence the new channel capacity is given by,

C = B3 log2 [ 1 + (S / N3) ] = 500 log2 (1005) = 500 × 9.973

∴ C = 4.99 × 10³ bits/sec ...Ans.
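All three capacities follow from one loop in Python, using the constant-signal-power assumption made above (S/N scales inversely with bandwidth because N = N0·B):

```python
from math import log2

B1 = 2e3                          # original bandwidth, Hz
snr1 = 10 ** (24 / 10)            # 24 dB -> ~251 as a power ratio

for factor in (1.0, 0.5, 0.25):   # full, halved and quartered bandwidth
    B = factor * B1
    snr = snr1 / factor           # constant signal power, noise N = N0*B
    C = B * log2(1 + snr)
    print(f"B = {B:6.0f} Hz, S/N = {snr:7.1f}, C = {C:8.0f} bits/s")
# prints roughly 15950, 8970 and 4990 bits/s, matching the three answers above
```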

θθθ
