Multiple Parity Bits - The Hamming (7,4) Code

September 29, 2013

Introduction

Recall that for an n-bit code, if the last bit is the parity bit, then the first $n-1$ bits are the information bits:

$$\underbrace{c_0\ c_1\ \ldots\ c_{n-2}}_{\text{information bits}}\ \underbrace{c_{n-1}}_{\text{parity}}$$

Using the even-parity method, the receiver only needs to perform the following operation for error detection:

$$S = c_0 \oplus c_1 \oplus c_2 \oplus \cdots \oplus c_{n-2} \oplus c_{n-1} = \bigoplus_{i=0}^{n-1} c_i = \begin{cases} 0 & \text{no error} \\ 1 & \text{error occurred} \end{cases}$$
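As a concrete illustration (this sketch is not part of the original notes), here is a minimal Python version of the receiver-side check; the received codeword is a made-up example:

```python
from functools import reduce
from operator import xor

def even_parity_check(bits):
    """XOR all n bits together: 0 means no error detected, 1 means an error occurred."""
    return reduce(xor, bits)

# 1011 contains three 1s, so the even-parity bit is 1 -> transmitted codeword 10111
received = [1, 0, 1, 1, 1]
print(even_parity_check(received))   # 0: no error detected
received[2] ^= 1                     # simulate a single-bit channel error
print(even_parity_check(received))   # 1: error occurred
```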

What if ... we use TWO parity bits?

The two-parity-bit system

For a code with length n:

$$\underbrace{c_0\ c_1\ \ldots\ c_{n-3}\ c_{n-2}\ c_{n-1}}_{\text{length } n}$$

The first $n-2$ bits are information bits, and the last 2 bits are parity bits:

$$\underbrace{c_0\ c_1\ \ldots\ c_{n-3}}_{\text{information bits}}\ \underbrace{c_{n-2}\ c_{n-1}}_{\text{parity}}$$

Then the error-detection operations performed by the receiver can be:

$$S_1 = c_0 \oplus c_1 \oplus c_2 \oplus \cdots \oplus c_{n-3} \oplus c_{n-2}$$
$$S_2 = c_0 \oplus c_1 \oplus c_2 \oplus \cdots \oplus c_{n-3} \oplus c_{n-1}$$

Then $S_1 S_2$ gives TWO bits that can represent FOUR things: 00 for no error, and 01, 10, 11 for some type of error.
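A similar minimal Python sketch of this two-syndrome check (the bit values are again made up); note that each syndrome XORs all the information bits with one of the two parity bits, exactly as in the equations above:

```python
from functools import reduce
from operator import xor

def two_parity_syndromes(bits):
    """bits = information bits followed by the two parity bits c[n-2], c[n-1].
    Each syndrome XORs all information bits with one of the parity bits."""
    *info, p1, p2 = bits
    s = reduce(xor, info)
    return s ^ p1, s ^ p2

codeword = [1, 0, 1, 0, 0, 0]           # info 1010 has an even number of 1s, so p1 = p2 = 0
print(two_parity_syndromes(codeword))   # (0, 0): no error
codeword[-1] ^= 1                       # corrupt the last parity bit c[n-1]
print(two_parity_syndromes(codeword))   # (0, 1): an error, located in c[n-1]
```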


The Hamming (7,4) Code

The Hamming (7,4) code can detect and correct any single-bit error.

For a code with length 7, 4 bits are information bits and 3 bits are parity bits:

$$a\ b\ c\ d\ \alpha\ \beta\ \gamma$$

We can form 3 checking equations for the receiver:

$$S_1 = a \oplus b \oplus c \oplus \alpha$$
$$S_2 = a \oplus b \oplus d \oplus \beta$$
$$S_3 = a \oplus c \oplus d \oplus \gamma$$

Therefore $\alpha$, $\beta$, $\gamma$ together check all 4 information bits:

$\alpha$ checks $a, b, c$
$\beta$ checks $a, b, d$
$\gamma$ checks $a, c, d$

Then $S_1 S_2 S_3$ forms a 3-bit message that can represent 8 different outcomes:

S1 S2 S3 | Position of error
---------|------------------
  000    | no error
  001    | γ
  010    | β
  011    | d
  100    | α
  101    | c
  110    | b
  111    | a

Some explanation

When $S_1 S_2 S_3 = 000$, all even-parity bit equations show that there is no error.

When $S_1 S_2 S_3 = 001$, that means $S_3 = a \oplus c \oplus d \oplus \gamma$ has detected one error.

Then one of the bits $a, c, d, \gamma$ has an error. But since $S_1 = S_2 = 0$, it is guaranteed that $a$ has no error (otherwise, if $a$ had an error, $S_1 \neq 0$ and $S_2 \neq 0$); thus one of $c, d, \gamma$ may have the error.

Of $c, d, \gamma$: $S_1$ guarantees that $c$ has no error (otherwise, if $c$ had an error, $S_1 \neq 0$); thus $d$ or $\gamma$ may have the error.

Of $d, \gamma$: $S_2$ guarantees that $d$ has no error (otherwise, if $d$ had an error, $S_2 \neq 0$); thus $\gamma$ has the error.

Therefore, for $S_1 S_2 S_3 = 001$, $\gamma$ has the error.
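This elimination argument is exactly what a lookup into the table implements. Below is a short, illustrative Python sketch (not from the original notes) of the full detect-and-correct procedure, with the codeword ordered [a, b, c, d, α, β, γ]:

```python
def syndromes(cw):
    """cw = [a, b, c, d, alpha, beta, gamma]; returns (S1, S2, S3)."""
    a, b, c, d, al, be, ga = cw
    return (a ^ b ^ c ^ al, a ^ b ^ d ^ be, a ^ c ^ d ^ ga)

# syndrome (S1, S2, S3) -> index of the erroneous bit in cw, from the table above
ERROR_POS = {(0, 0, 1): 6, (0, 1, 0): 5, (0, 1, 1): 3, (1, 0, 0): 4,
             (1, 0, 1): 2, (1, 1, 0): 1, (1, 1, 1): 0}

def correct(cw):
    s = syndromes(cw)
    if s != (0, 0, 0):
        cw[ERROR_POS[s]] ^= 1   # flip the single bit the syndrome points at
    return cw

cw = [1, 0, 1, 1, 0, 0, 1]      # a=1 b=0 c=1 d=1 gives alpha=0, beta=0, gamma=1
cw[3] ^= 1                      # corrupt d
print(syndromes(cw))            # (0, 1, 1): the table says the error is in d
print(correct(cw))              # [1, 0, 1, 1, 0, 0, 1] -- the codeword is restored
```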

The Hamming Weight and Hamming Distance

Consider 4 example codewords: $C_1 = 1101$, $C_2 = 1001$, $C_3 = 0000$, $C_4 = 1111$.

The Hamming weight of a codeword is the number of non-zero bits:

$$w(C_1) = 3,\quad w(C_2) = 2,\quad w(C_3) = 0,\quad w(C_4) = 4$$

The Hamming distance between 2 codewords is the number of bits in which they differ:

$$d(C_1, C_2) = d(C_1, C_4) = 1,\quad d(C_1, C_3) = 3,\quad d(C_3, C_4) = 4$$

By applying the properties of modulo-2 addition:

$$d(C_i, C_j) = w(C_i \oplus C_j)$$

Since the sum of two codewords of a linear code is itself a codeword, the smallest Hamming distance between any 2 codewords equals the smallest Hamming weight among the non-zero codewords:

$$d_{\min} = \min_{i \neq j} d(C_i, C_j) = \min_{C_k \neq 0} w(C_k)$$
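These definitions translate directly into Python; the illustrative sketch below also checks the identity $d(C_i, C_j) = w(C_i \oplus C_j)$ stated above on the example codewords:

```python
def weight(c):
    """Hamming weight: the number of non-zero bits."""
    return sum(c)

def distance(ci, cj):
    """Hamming distance: the number of positions where two codewords differ."""
    return sum(x != y for x, y in zip(ci, cj))

C1, C2, C3, C4 = [1, 1, 0, 1], [1, 0, 0, 1], [0, 0, 0, 0], [1, 1, 1, 1]
print([weight(c) for c in (C1, C2, C3, C4)])                 # [3, 2, 0, 4]
print(distance(C1, C2), distance(C1, C3), distance(C3, C4))  # 1 3 4

# d(Ci, Cj) = w(Ci XOR Cj), by the properties of modulo-2 addition
print(distance(C1, C3) == weight([x ^ y for x, y in zip(C1, C3)]))  # True
```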

Generator Matrix and Parity Check Matrix

Consider the codewords that have no error:

$$S_1 = 0 = a \oplus b \oplus c \oplus \alpha = 1 \cdot a + 1 \cdot b + 1 \cdot c + 0 \cdot d + 1 \cdot \alpha + 0 \cdot \beta + 0 \cdot \gamma$$
$$S_2 = 0 = a \oplus b \oplus d \oplus \beta = 1 \cdot a + 1 \cdot b + 0 \cdot c + 1 \cdot d + 0 \cdot \alpha + 1 \cdot \beta + 0 \cdot \gamma$$
$$S_3 = 0 = a \oplus c \oplus d \oplus \gamma = 1 \cdot a + 0 \cdot b + 1 \cdot c + 1 \cdot d + 0 \cdot \alpha + 0 \cdot \beta + 1 \cdot \gamma$$

Express these equations in matrix form:

$$\underbrace{\begin{bmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}}_{H} \underbrace{\begin{bmatrix} a \\ b \\ c \\ d \\ \alpha \\ \beta \\ \gamma \end{bmatrix}}_{C^T} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

Expressed with the parity check matrix $H$ and the codeword vector $C$:

$$H C^T = 0$$
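This condition is easy to verify numerically (an illustrative sketch assuming NumPy is available; the valid codeword is the same example used earlier):

```python
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

C = np.array([1, 0, 1, 1, 0, 0, 1])  # a b c d alpha beta gamma -- a valid codeword
print(H @ C % 2)                     # [0 0 0]: H C^T = 0, so no error

C[3] ^= 1                            # corrupt d
print(H @ C % 2)                     # [0 1 1]: syndrome S1 S2 S3 = 011 points at d
```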

Notice that:

$$H = [P_{3\times 4}\ \ I_3], \qquad P_{3\times 4} = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix}, \qquad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

About the P matrix

Notice that $P$ relates the parity bits to the information bits:

$$\begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} a+b+c \\ a+b+d \\ a+c+d \end{bmatrix}$$

$$\Longleftrightarrow \begin{cases} a+b+c = \alpha \\ a+b+d = \beta \\ a+c+d = \gamma \end{cases} \Longleftrightarrow \begin{cases} a+b+c-\alpha = 0 \\ a+b+d-\beta = 0 \\ a+c+d-\gamma = 0 \end{cases}$$

Recall that the binary system is actually the Galois field GF(2), with the following modulo-2 addition table:

+ | 0 1
--+----
0 | 0 1
1 | 1 0


Thus every element in GF(2) fulfills the following equations:

$$x + x = 0 \quad\Longleftrightarrow\quad x = -x$$

Thus:

$$\begin{cases} a+b+c-\alpha = 0 \\ a+b+d-\beta = 0 \\ a+c+d-\gamma = 0 \end{cases} \Longleftrightarrow \begin{cases} a+b+c+\alpha = 0 \\ a+b+d+\beta = 0 \\ a+c+d+\gamma = 0 \end{cases}$$

which are exactly the error-free conditions.
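A tiny, illustrative Python check of this property, which is why the two systems of conditions above are equivalent:

```python
# In GF(2), subtracting a bit is the same as adding it, and both are just XOR.
for x in (0, 1):
    for y in (0, 1):
        assert (x - y) % 2 == (x + y) % 2 == x ^ y
print("subtraction, addition, and XOR all coincide in GF(2)")
```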

Now consider the parity-bit / information-bit relation again:

$$\begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}$$

These equations show that the parity bits are linear combinations of the information bits; therefore, the Hamming (7,4) code is a kind of linear code.

Take the transpose (recall that $(AB)^T = B^T A^T$):

$$[\alpha\ \beta\ \gamma] = [a\ b\ c\ d] \underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}}_{Q}$$

Augment with the identity matrix to form the generator matrix $G$:

$$G = [I_4\ \ Q_{4\times 3}] = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix}$$

Notice that the codeword $C$ can be generated from the information bits $M = [a\ b\ c\ d]$ and the generator matrix $G$:

$$[a\ b\ c\ d\ \alpha\ \beta\ \gamma] = [a\ b\ c\ d] \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix}$$

$$C = MG$$

This equation demonstrates the fundamental idea of channel coding / error-control coding: error-detection and error-correction ability can be guaranteed by adding redundant bits (parity bits) after the information bits.
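Putting everything together, an illustrative end-to-end sketch (assuming NumPy; the message bits are made up): encode with $G$, corrupt one bit, and let the receiver locate it with $H$:

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

M = np.array([1, 0, 1, 1])   # information bits a b c d
C = M @ G % 2                # C = MG appends the three parity bits
print(C)                     # [1 0 1 1 0 0 1]

C[0] ^= 1                    # the channel flips bit a
print(H @ C % 2)             # [1 1 1]: syndrome 111 points at a
```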

END
