6.28: Error-Correcting Codes - Channel Decoding
Learning Objectives
- Channel decoding must be coordinated with coding for accurate results.
Because the idea of channel coding has merit (so long as the code is efficient), let's develop a systematic procedure for performing channel decoding. One way of checking for errors is to try recreating the error-correction bits from the data portion of the received block \[\hat{c}\] Using matrix notation, we make this calculation by multiplying the received block \[\hat{c}\] by the matrix H, known as the parity check matrix. It is formed from the generator matrix G by taking the bottom, error-correction portion of G and attaching to it an identity matrix. For our (7,4) code,
\[H=\left[\begin{array}{cccc|ccc} 1 & 1 & 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & 1 & 0\\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{array}\right]\]
where the first four columns are the lower portion of G and the last three columns form the identity matrix.
The parity check matrix thus has size (N-K)×N, and the result of multiplying this matrix with a received word is a length-(N-K) binary vector. If no digital channel errors occur, we receive a codeword, so that
\[\hat{c}=c\]
then
\[H\hat{c}=0\]
For example, the first column of G,
\[(1,0,0,0,1,0,1)^{T}\]
is a codeword. Simple calculations show that multiplying this vector by H results in a length-(N-K) zero-valued vector.
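This calculation can be checked directly. The following sketch (using plain Python lists for the matrices named in the text) multiplies H by each column of G over GF(2):

```python
# Illustration: verify that H times every column of G is zero (mod 2)
# for the (7,4) code described in the text.

G = [  # generator matrix: 4x4 identity on top, parity rows below
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
]

H = [  # parity check matrix: lower portion of G, then 3x3 identity
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def mat_vec_mod2(M, v):
    """Multiply matrix M by vector v, with arithmetic in GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

# Every column of G is a codeword, and each yields a zero syndrome.
for k in range(4):
    column = [G[i][k] for i in range(7)]
    assert mat_vec_mod2(H, column) == [0, 0, 0]
print("H times every column of G is the zero vector")
```

Because H = [P | I] and the columns of G stack the identity above P, the product always sums each parity entry with itself, which is zero in binary arithmetic.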
Exercise \(\PageIndex{1}\)
Show that Hc = 0 for all the columns of G. In other words, show that HG = 0, an (N-K)×K matrix of zeroes. Does this property guarantee that all codewords also satisfy Hc = 0?
Solution
When we multiply the parity-check matrix times any codeword equal to a column of G, the result consists of the sum of an entry from the lower portion of G with itself, which, by the laws of binary arithmetic, is always zero.
Because the code is linear (the sum of any two codewords is a codeword), we can generate all codewords as sums of columns of G. Since multiplying by H is also linear, Hc = 0 for all codewords c.
When the received bits \[\hat{c}\] do not form a codeword, \[H\hat{c}\] does not equal zero, indicating the presence of one or more errors induced by the digital channel. The presence of an error can be written mathematically as
\[\hat{c}=c\oplus e\]
with e a vector of binary values having a 1 in those positions where a bit error occurred.
Exercise \(\PageIndex{2}\)
Show that adding the error vector (1,0,...,0)^{T} to a codeword flips the codeword's leading bit and leaves the rest unaffected.
Solution
In binary arithmetic (see this table), adding 0 to a binary value results in that binary value, while adding 1 results in the opposite binary value.
Consequently,
\[H\hat{c}=H(c\oplus e)=Hc\oplus He=He\]
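This relation, that the syndrome of a corrupted codeword depends only on the error pattern, can be checked numerically. A small sketch (reusing the H, c, and e names from the text):

```python
# Illustration: the syndrome of a corrupted codeword equals the
# syndrome of the error pattern alone, since H c = 0 for codewords.

H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

c = [1, 0, 0, 0, 1, 0, 1]   # a codeword (first column of G)
e = [0, 0, 1, 0, 0, 0, 0]   # single-bit error in the third position
c_hat = [ci ^ ei for ci, ei in zip(c, e)]  # received word: c XOR e

def syndrome(H, v):
    """Compute H v over GF(2)."""
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

assert syndrome(H, c) == [0, 0, 0]          # codeword: zero syndrome
assert syndrome(H, c_hat) == syndrome(H, e)  # H c_hat = H e
```

Note that the syndrome of a single-bit error in position i is simply column i of H, which is what makes table-based correction possible.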
Because the result of the product is a length-(N-K) vector of binary values, we can have 2^{N-K} - 1 nonzero values that correspond to nonzero error patterns e. To perform our channel decoding,
- compute (conceptually at least) \[H\hat{c}\]
- if this result is zero, no detectable or correctable error occurred
- if it is nonzero, consult a table of length-(N-K) binary vectors to associate them with the minimal error pattern that could have resulted in the nonzero result
- then add the error vector thus obtained to the received vector \[\hat{c}\] to correct the error, because \[c\oplus e\oplus e=c\]
- select the data bits from the corrected word to produce the received bit sequence \[\hat{b}(n)\]
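The steps above can be sketched as a complete decoder for the (7,4) code. The syndrome table here maps each nonzero syndrome to the single-bit error pattern that produces it; the names are illustrative, not part of any standard API.

```python
# A minimal sketch of syndrome-table decoding for the (7,4) code.

H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]
N, K = 7, 4

def syndrome(v):
    """Compute H v over GF(2) as a tuple (usable as a dict key)."""
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

# Build the table: the syndrome of a single-bit error in position i
# is column i of H, so each single-bit pattern gets its own entry.
table = {}
for i in range(N):
    e = [0] * N
    e[i] = 1
    table[syndrome(e)] = e

def decode(c_hat):
    """Return the K data bits recovered from the received word c_hat."""
    s = syndrome(c_hat)
    if s == (0, 0, 0):
        return c_hat[:K]                 # no detectable error occurred
    e = table[s]                         # minimal (single-bit) error pattern
    corrected = [c ^ b for c, b in zip(c_hat, e)]  # c XOR e XOR e = c
    return corrected[:K]                 # data bits of the corrected word

# Example: the codeword (1,0,0,0,1,0,1) with its second bit flipped.
received = [1, 1, 0, 0, 1, 0, 1]
print(decode(received))  # → [1, 0, 0, 0]
```

Since every nonzero length-3 syndrome matches exactly one column of H, all seven single-bit error patterns are correctable, which is the defining property of this Hamming code.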
The phrase "minimal" in the third item raises the point that a double (or triple or quadruple ...) error occurring during the transmission/reception of one codeword can create the same received word as a single-bit error or no error in another codeword.
For example, (1,0,0,0,1,0,1)^{T} and (0,1,0,0,1,1,1)^{T} are both codewords in the example (7,4) code. The second results when the first one experiences three bit errors (first, second, and sixth bits). Such an error pattern cannot be detected by our coding strategy, but such multiple error patterns are very unlikely to occur. Our receiver uses the principle of maximum probability: An error-free transmission is much more likely than one with three errors if the bit-error probability p_{e} is small enough.
Exercise \(\PageIndex{3}\)
How small must p_{e} be so that a single-bit error is more likely to occur than a triple-bit error?
Solution
The probability of a single-bit error in a length-N block is
\[Np_{e}(1-p_{e})^{N-1}\]
and a triple-bit error has probability
\[\binom{N}{3}p_{e}^{3}(1-p_{e})^{N-3}\]
For the first to be greater than the second, we must have
\[p_{e}< \frac{1}{\sqrt{\frac{(N-1)(N-2)}{6}}+1}\]
For N = 7, this gives p_{e} < 0.31.
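The bound can be evaluated numerically. A short check (function names are illustrative) confirms both the N = 7 value and that the single-bit error probability dominates below the bound:

```python
# Numeric check of the p_e bound from the solution above.
import math

def pe_bound(N):
    """Threshold below which one bit error is likelier than three."""
    return 1.0 / (math.sqrt((N - 1) * (N - 2) / 6) + 1)

def p_single(N, p):
    """Probability of exactly one bit error in a length-N block."""
    return N * p * (1 - p) ** (N - 1)

def p_triple(N, p):
    """Probability of exactly three bit errors in a length-N block."""
    return math.comb(N, 3) * p ** 3 * (1 - p) ** (N - 3)

N = 7
b = pe_bound(N)
print(round(b, 2))  # → 0.31

# Just below the bound, a single error is more likely; just above, less.
assert p_single(N, 0.99 * b) > p_triple(N, 0.99 * b)
assert p_single(N, 1.01 * b) < p_triple(N, 1.01 * b)
```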
Contributor
 ContribEEOpenStax