# 6.25: Repetition Codes


Learning Objectives

- Explains the repetition code for error correction.

Perhaps the simplest error correcting code is the **repetition code**.

Here, the transmitter sends each data bit several times, an odd number of times in fact. Because the error probability \(p_{e}\) is always less than **1/2**, we know that more of the bits should be correct than in error. Simple majority voting of the received bits (hence the reason for the odd number) determines the transmitted bit more accurately than sending it alone. For example, let's consider the three-fold repetition code: for every bit \(b(n)\) emerging from the source coder, the channel coder produces three. Thus, the bit stream \(c(l)\) emerging from the channel coder has a data rate three times higher than that of the original bit stream \(b(n)\). The coding table illustrates when errors can be corrected and when they can't by the majority-vote decoder.

| Received bits      | Majority-vote decoder output |
|--------------------|------------------------------|
| 000, 001, 010, 100 | **0**                        |
| 011, 101, 110, 111 | **1**                        |
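The three-fold encoding and majority-vote decoding described above can be sketched in a few lines of Python (the function names `encode` and `decode` are illustrative, not from the text):

```python
def encode(bit):
    # Three-fold repetition: the channel coder emits each data bit
    # three times, so c(l) has three times the data rate of b(n).
    return [bit, bit, bit]

def decode(received):
    # Majority vote over the three received bits: at least two of the
    # three must agree, so any single bit error is corrected.
    return 1 if sum(received) >= 2 else 0

# A single error is corrected:
assert decode([0, 1, 0]) == 0
# Two errors overwhelm the vote, and the decoder announces the wrong bit:
assert decode([1, 1, 0]) == 1
```

The asserts mirror the coding table: any received word with a single flipped bit decodes to the transmitted value, while two or more flips fool the vote.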

Thus, if one of the three bits is received in error, the receiver can correct the error; if more than one error occurs, the channel decoder announces the bit is **1** instead of the transmitted value **0**. Using this repetition code, the probability of

\[\hat{b}(n)\neq 0\]

when **0** is sent equals

\[3p_{e}^{2}\times (1-p_{e})+p_{e}^{3}\]

This probability of a decoding error is always less than \(p_{e}\), the uncoded value, so long as

\[p_{e}< \frac{1}{2}\]
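A quick numerical check of this claim (the function name `p_decode_error` is my own): the decoding-error probability stays below \(p_{e}\) for every tested \(p_{e}\) below 1/2.

```python
def p_decode_error(pe):
    # Majority vote fails when two or three of the three bits are flipped:
    # 3 * pe^2 * (1 - pe) + pe^3
    return 3 * pe**2 * (1 - pe) + pe**3

# The coded error probability is smaller than the raw bit-error
# probability for every tested pe below 1/2.
for pe in (0.01, 0.1, 0.25, 0.4, 0.49):
    assert p_decode_error(pe) < pe
```

For instance, at \(p_{e}=0.1\) the coded error probability drops to 0.028, a better-than-threefold improvement.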

Exercise \(\PageIndex{1}\)

Demonstrate mathematically that this claim is indeed true.

\[3p_{e}^{2}\times (1-p_{e})+p_{e}^{3}\leq p_{e}\]

**Solution**

For \(p_{e}> 0\), dividing both sides by \(p_{e}\) shows the claim is equivalent to

\[3p_{e}\times (1-p_{e})+p_{e}^{2}\leq 1\quad \text{or, equivalently,}\quad 2p_{e}^{2}-3p_{e}+1\geq 0\]

Because this is an upward-going parabola, we need only check where its roots are. Using the quadratic formula, we find that they are located at **1/2** and **1**. Consequently, in the range

\[0\leq p_{e}\leq \frac{1}{2}\]

the error rate produced by coding is smaller.
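The root-finding step in the solution can be verified directly with the quadratic formula; a minimal sketch:

```python
import math

# Coefficients of the parabola 2*pe^2 - 3*pe + 1 from the solution.
a, b, c = 2.0, -3.0, 1.0
disc = math.sqrt(b * b - 4 * a * c)  # discriminant: 9 - 8 = 1
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
assert roots == [0.5, 1.0]

# An upward-going parabola is nonnegative outside its roots,
# in particular on the whole range 0 <= pe <= 1/2.
assert all(2 * pe**2 - 3 * pe + 1 >= 0 for pe in (0.0, 0.1, 0.3, 0.5))
```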

## Contributor

- ContribEEOpenStax