# 6.29: Error-Correcting Codes - Hamming Codes


- A description of some strategies for minimizing the errors in a transmitted bit-stream.

For the (7,4) example, we have \(2^{N-K}-1=7\) error patterns that can be corrected. We start with single-bit error patterns and multiply each by the parity check matrix. If we obtain unique answers, we are done; if two or more error patterns yield the same result, we can try double-bit error patterns. In our case, the single-bit error patterns give unique results.
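To see this concretely, the following sketch computes the syndrome of each single-bit error pattern. The matrix `H` below is one common systematic parity check matrix for a (7,4) Hamming code (an assumption; the text's matrix may order its columns differently):

```python
# One common systematic parity check matrix for a (7,4) Hamming code.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(word):
    """Multiply H by the length-7 word, with arithmetic mod 2."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

# One error pattern per bit position: e_i has a single 1 in position i.
patterns = [[1 if j == i else 0 for j in range(7)] for i in range(7)]
syndromes = [syndrome(e) for e in patterns]

# Each syndrome is just the corresponding column of H; all 7 are
# distinct and non-zero, so every single-bit error is identifiable.
assert len(set(syndromes)) == 7
assert all(s != (0, 0, 0) for s in syndromes)
```

Because the syndrome of a single-bit error is simply a column of `H`, uniqueness holds whenever the columns of `H` are distinct and non-zero.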

This corresponds to our decoding table: We associate the parity check matrix multiplication result with the error pattern and add this to the received word. If more than one error occurs (unlikely though it may be), this "error correction" strategy usually makes the error worse in the sense that more bits are changed from what was transmitted.
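The decoding table can be sketched as follows, again using an assumed systematic parity check matrix. The double-error case at the end shows how "correction" of a two-bit error changes a third bit, making the result worse:

```python
# Sketch of the decoding table: map each syndrome to its error pattern,
# then add (mod 2) the looked-up pattern to the received word.
# H is an assumed systematic parity check matrix for a (7,4) code.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(word):
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

# Decoding table: syndrome -> single-bit error pattern.
table = {}
for i in range(7):
    e = [1 if j == i else 0 for j in range(7)]
    table[syndrome(e)] = e

def decode(received):
    s = syndrome(received)
    if s == (0, 0, 0):           # no detectable error
        return received
    return [(r + b) % 2 for r, b in zip(received, table[s])]

codeword = [0] * 7               # the all-zero word is always a codeword
one_error = [1, 0, 0, 0, 0, 0, 0]
assert decode(one_error) == codeword      # single error: corrected

two_errors = [1, 1, 0, 0, 0, 0, 0]
decoded = decode(two_errors)
# Double error: the decoder "corrects" toward the wrong codeword and
# flips a third bit, so more bits now differ from what was sent.
assert sum(decoded) > 2
```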

As with the repetition code, we must question whether our (7,4) code's error correction capability compensates for the increased error probability due to the necessary reduction in bit energy. Figure 6.29.1 shows that if the signal-to-noise ratio is large enough, channel coding yields a smaller error probability. Because the bit stream emerging from the source decoder is segmented into four-bit blocks, the fair way of comparing coded and uncoded transmission is to compute the probability of **block** error: the probability that any bit in a block remains in error despite error correction, regardless of whether the error occurs in the data or in the coding bits. Clearly, our (7,4) channel code does yield smaller error rates, and is worth the additional systems required to make it work.
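One way this comparison might be computed is sketched below. The bit error model \(p_e = Q(\sqrt{2E_b/N_0})\) and the rate-\(K/N\) energy reduction for coded bits are assumptions about the channel model, not a reproduction of the figure's exact computation:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def block_error(snr_db):
    """Uncoded vs. (7,4)-coded block error probabilities at a given
    Eb/N0 in dB. Assumes p_e = Q(sqrt(2 Eb/N0)) for uncoded bits and
    that coding reduces per-bit energy by the rate K/N = 4/7."""
    ebn0 = 10 ** (snr_db / 10)
    p_unc = Q(math.sqrt(2 * ebn0))            # uncoded bit error prob.
    p_cod = Q(math.sqrt(2 * (4 / 7) * ebn0))  # reduced-energy bit error
    # Uncoded: a 4-bit block fails if any of its bits is wrong.
    unc = 1 - (1 - p_unc) ** 4
    # Coded: single-bit errors are corrected, so the block fails only
    # when two or more of the 7 transmitted bits are in error.
    cod = 1 - (1 - p_cod) ** 7 - 7 * p_cod * (1 - p_cod) ** 6
    return unc, cod

# At a large enough SNR, coding wins despite the lower bit energy.
unc, cod = block_error(8.0)
assert cod < unc
```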

Note that our (7,4) code has the length and number of data bits that perfectly fit correcting single-bit errors. This pleasant property arises because the number of error patterns that can be corrected, \(2^{N-K}-1\), equals the codeword length \(N\). Codes that have \(2^{N-K}-1 = N\) are known as **Hamming codes**, and the following table provides the parameters of these codes. Hamming codes are the simplest single-bit error correction codes, and the generator/parity check matrix formalism for channel coding and decoding works for them.

| N | K | Efficiency (K/N) |
|---|---|---|
| 3 | 1 | 0.33 |
| 7 | 4 | 0.57 |
| 15 | 11 | 0.73 |
| 31 | 26 | 0.84 |
| 63 | 57 | 0.90 |
| 127 | 120 | 0.94 |
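These parameters follow directly from the defining relation: writing \(m = N-K\) for the number of parity bits, every \(m \geq 2\) gives a code with \(N = 2^m - 1\). A short sketch (`hamming_parameters` is a hypothetical helper name):

```python
# Enumerate Hamming code parameters: codes satisfying 2**(N-K) - 1 == N.
# With m = N - K parity bits, each m >= 2 yields N = 2**m - 1, K = N - m.
def hamming_parameters(max_parity_bits):
    params = []
    for m in range(2, max_parity_bits + 1):
        n = 2 ** m - 1
        k = n - m
        assert 2 ** (n - k) - 1 == n      # the defining relation holds
        params.append((n, k))
    return params

# The (7,4) code uses m = 3 parity bits.
assert (7, 4) in hamming_parameters(7)
```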

Unfortunately, for such large blocks, the probability of multiple-bit errors can exceed the probability of single-bit errors unless the channel single-bit error probability \(p_{e}\) is very small. Consequently, we need to enhance the code's error correcting capability by adding double-bit as well as single-bit error correction.
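To illustrate, the two probabilities can be compared with the binomial distribution for a long Hamming code. The value \(p_e = 0.01\) below is illustrative only:

```python
import math

def binom_pmf(n, k, p):
    """Probability of exactly k errors among n bits, each wrong with prob p."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# For the long (127,120) Hamming code, compare P(exactly one error)
# with P(two or more errors). p_e = 0.01 is an illustrative value.
n, p_e = 127, 0.01
p_one = binom_pmf(n, 1, p_e)
p_multi = 1 - binom_pmf(n, 0, p_e) - p_one

# With this many bits, multiple-bit errors are already more likely
# than single-bit errors, so single-error correction is not enough.
assert p_multi > p_one
```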

What must the relation between **N** and **K** be for a code to correct all single- and double-bit errors with a "perfect fit"?

**Solution**

In a length-\(N\) block, \(N\) single-bit and \(\frac{N(N-1)}{2}\) double-bit errors can occur. The number of non-zero vectors resulting from \(H\hat{c}\) must equal or exceed the sum of these two numbers.

\[2^{N-K}-1\geq N+\frac{N(N-1)}{2}\quad \text{or}\quad 2^{N-K}\geq \frac{N^{2}+N+2}{2} \nonumber \]

The first two solutions that attain equality are (5,1) and (90,78) codes. However, **no** perfect code exists other than the single-bit error correcting Hamming code.
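A brute-force search over the equality condition (a sketch; `perfect_fit_parameters` is a hypothetical name) recovers exactly these two parameter pairs:

```python
# Search for (N, K) pairs that attain equality in
# 2**(N-K) = (N**2 + N + 2) / 2, the "perfect fit" condition for
# correcting all single- and double-bit errors.
def perfect_fit_parameters(max_n):
    solutions = []
    for n in range(1, max_n + 1):
        rhs = (n * n + n + 2) // 2       # always an integer: n(n+1) is even
        m = rhs.bit_length() - 1         # candidate value of N - K
        if 2 ** m == rhs and n - m >= 1: # rhs a power of 2, K >= 1
            solutions.append((n, n - m))
    return solutions

# Equality holds only for the (5,1) and (90,78) parameter pairs.
assert perfect_fit_parameters(100) == [(5, 1), (90, 78)]
```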