
7.2.2: Example: Binary Channel


    The binary channel is well described by the probability model. Its properties, many of which were discussed in Chapter 6, are summarized below.

    Consider first a noiseless binary channel which, when presented with one of two possible input values 0 or 1, transmits this value faithfully to its output. This is a very simple example of a discrete memoryless process. We represent this channel by a probability model with two inputs and two outputs. To indicate the fact that the input is replicated faithfully at the output, the inner workings of the box are revealed, in Figure 7.6(a), in the form of two paths, one from each input to the corresponding output, and each labeled by the probability (1). The transition matrix for this channel is

    \(\begin{bmatrix} c_{00} & c_{01} \\ c_{10} & c_{11} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{7.9}\)
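As a sketch, the action of a transition matrix on an input distribution can be written in a few lines of Python (the helper name `output_probabilities` is ours, not from the text):

```python
# Transition matrix of the noiseless binary channel (Equation 7.9):
# C[j][i] is the probability that output j occurs given input i.
C = [[1, 0],
     [0, 1]]

def output_probabilities(C, p_in):
    """Output distribution p(B_j) = sum over i of c_ji * p(A_i)."""
    return [sum(C[j][i] * p_in[i] for i in range(len(p_in)))
            for j in range(len(C))]

# The noiseless channel reproduces any input distribution exactly.
print(output_probabilities(C, [0.3, 0.7]))  # [0.3, 0.7]
```

The same helper works for any 2×2 transition matrix, including the noisy channel introduced below.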

    The input information \(I\) for this process is 1 bit if the two values are equally likely; more generally, if \(p(A_0) \neq p(A_1)\), the input information is

    \(I = p(A_0)\log_2\Big (\dfrac{1}{p(A_0)}\Big) + p(A_1)\log_2\Big(\dfrac{1}{p(A_1)}\Big) \tag{7.10}\)

    The output information \(J\) has a similar formula, using the output probabilities \(p(B_0)\) and \(p(B_1)\). Since the input and output are the same in this case, it is always possible to infer the input when the output has been observed. The amount of information out, \(J\), is the same as the amount in: \(J = I\). This noiseless channel is effective for its intended purpose, which is to permit the receiver, at the output, to infer the value at the input.
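Equation 7.10 can be evaluated directly; the following sketch (our own illustration) computes \(I\) for a two-valued input given \(p(A_0)\):

```python
from math import log2

def information(p0):
    """Input information I of Equation 7.10, in bits, for p(A_0) = p0."""
    p1 = 1 - p0
    # Terms with zero probability contribute nothing (p log(1/p) -> 0).
    return sum(p * log2(1 / p) for p in (p0, p1) if p > 0)

print(information(0.5))  # 1.0 -- one bit when the two values are equally likely
print(information(0.9))  # less than 1 bit for a skewed input distribution
```

The maximum of 1 bit occurs at \(p(A_0) = p(A_1) = 0.5\), as the text states.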

    Next, let us suppose that this channel occasionally makes errors. Thus if the input is 1 the output is not always 1: with the “bit error probability” \(\epsilon\) it is flipped to the “wrong” value 0, and hence is “correct” only with probability \(1 - \epsilon\). Similarly, for the input of 0, the probability of error is \(\epsilon\). Then the transition matrix is

    \(\begin{bmatrix} c_{00} & c_{01} \\ c_{10} & c_{11} \end{bmatrix} = \begin{bmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{bmatrix} \tag{7.11}\)
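A small simulation makes this error model concrete. This is our own sketch, not part of the text; the function name `bsc` and the sample value \(\epsilon = 0.1\) are illustrative:

```python
import random

def bsc(bit, eps, rng=random):
    """Send one bit through a binary symmetric channel: flip it with probability eps."""
    return bit ^ 1 if rng.random() < eps else bit

# Estimate the bit error probability empirically with a seeded generator.
rng = random.Random(0)
eps = 0.1
n = 100_000
errors = sum(bsc(1, eps, rng) != 1 for _ in range(n))
print(errors / n)  # close to 0.1 for a large sample
```

With \(\epsilon = 0\) this reduces to the noiseless channel above; with \(\epsilon = 0.5\) the output is independent of the input and no information gets through.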


    This page titled 7.2.2: Example- Binary Channel is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Paul Penfield, Jr. (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.