
1.3: Sigmoid neurons


    Learning algorithms sound terrific. But how can we devise such algorithms for a neural network? Suppose we have a network of perceptrons that we'd like to use to learn to solve some problem. For example, the inputs to the network might be the raw pixel data from a scanned, handwritten image of a digit. And we'd like the network to learn weights and biases so that the output from the network correctly classifies the digit. To see how learning might work, suppose we make a small change in some weight (or bias) in the network. What we'd like is for this small change in weight to cause only a small corresponding change in the output from the network. As we'll see in a moment, this property will make learning possible. Schematically, here's what we want (obviously this network is too simple to do handwriting recognition!):

    [Figure: a schematic network in which a small change Δw in one weight produces a small corresponding change Δoutput in the network's output.]

    If it were true that a small change in a weight (or bias) causes only a small change in output, then we could use this fact to modify the weights and biases to get our network to behave more in the manner we want. For example, suppose the network was mistakenly classifying an image as an "8" when it should be a "9". We could figure out how to make a small change in the weights and biases so the network gets a little closer to classifying the image as a "9". And then we'd repeat this, changing the weights and biases over and over to produce better and better output. The network would be learning.

    The problem is that this isn't what happens when our network contains perceptrons. In fact, a small change in the weights or bias of any single perceptron in the network can sometimes cause the output of that perceptron to completely flip, say from \(0\) to \(1\). That flip may then cause the behaviour of the rest of the network to completely change in some very complicated way. So while your "9" might now be classified correctly, the behaviour of the network on all the other images is likely to have completely changed in some hard-to-control way. That makes it difficult to see how to gradually modify the weights and biases so that the network gets closer to the desired behaviour. Perhaps there's some clever way of getting around this problem. But it's not immediately obvious how we can get a network of perceptrons to learn.

    We can overcome this problem by introducing a new type of artificial neuron called a sigmoid neuron. Sigmoid neurons are similar to perceptrons, but modified so that small changes in their weights and bias cause only a small change in their output. That's the crucial fact which will allow a network of sigmoid neurons to learn.

    Okay, let me describe the sigmoid neuron. We'll depict sigmoid neurons in the same way we depicted perceptrons:

    [Figure: a sigmoid neuron, drawn the same way as a perceptron: several inputs feeding into a single neuron with one output.]

    Just like a perceptron, the sigmoid neuron has inputs, \(x_1,x_2,…\) But instead of being just \(0\) or \(1\), these inputs can also take on any value between \(0\) and \(1\). So, for instance, \(0.638…\) is a valid input for a sigmoid neuron. Also just like a perceptron, the sigmoid neuron has weights for each input, \(w_1,w_2,…\), and an overall bias, \(b\). But the output is not \(0\) or \(1\). Instead, it's \(σ(w⋅x+b)\), where \(σ\) is called the sigmoid function (incidentally, \(σ\) is sometimes called the logistic function, and this new class of neurons called logistic neurons; it's useful to remember this terminology, since these terms are used by many people working with neural nets, but we'll stick with the sigmoid terminology), and is defined by:

    \[ σ(z)≡ \frac{1}{1 + e^{−z}} \]

    To put it all a little more explicitly, the output of a sigmoid neuron with inputs \(x_1,x_2,…\), weights \(w_1,w_2,…\), and bias \(b\) is

    \[ \frac{1}{1+\exp\left(-\sum_j{w_j x_j} - b\right)}\]
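
    For concreteness, here is a minimal sketch in Python of how this output might be computed (the use of NumPy, and the particular weights, inputs, and bias, are illustrative assumptions, not anything specified above):

        import numpy as np

        def sigmoid(z):
            # The sigmoid function: sigma(z) = 1 / (1 + e^(-z)).
            return 1.0 / (1.0 + np.exp(-z))

        def sigmoid_neuron_output(w, x, b):
            # Output of a sigmoid neuron: sigma(w.x + b).
            return sigmoid(np.dot(w, x) + b)

        # Arbitrary illustrative values: inputs in [0, 1], some weights, a bias.
        w = np.array([0.7, -1.2, 0.4])
        x = np.array([0.638, 0.2, 1.0])
        b = -0.5
        print(sigmoid_neuron_output(w, x, b))  # a value strictly between 0 and 1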

    At first sight, sigmoid neurons appear very different to perceptrons. The algebraic form of the sigmoid function may seem opaque and forbidding if you're not already familiar with it. In fact, there are many similarities between perceptrons and sigmoid neurons, and the algebraic form of the sigmoid function turns out to be more of a technical detail than a true barrier to understanding.

    To understand the similarity to the perceptron model, suppose \(z ≡ w⋅x + b\) is a large positive number. Then \(e^{−z} ≈ 0\) and so \(σ(z)≈1\). In other words, when \(z = w⋅x +b \) is large and positive, the output from the sigmoid neuron is approximately \(1\), just as it would have been for a perceptron. Suppose on the other hand that \(z = w⋅x+b \) is very negative. Then \(e^{−z} → ∞\), and \(σ(z) ≈ 0\). So when \( z = w⋅x+ b\) is very negative, the behaviour of a sigmoid neuron also closely approximates a perceptron. It's only when \(w⋅x+b\) is of modest size that there's much deviation from the perceptron model.
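
    A quick numerical check of this limiting behaviour (the specific values of \(z\) are arbitrary, chosen only for illustration):

        from math import exp

        def sigmoid(z):
            return 1.0 / (1.0 + exp(-z))

        print(sigmoid(20.0))   # about 1 - 2e-09: large positive z gives output very close to 1
        print(sigmoid(-20.0))  # about 2e-09: very negative z gives output very close to 0
        print(sigmoid(0.5))    # about 0.62: for modest z the output is clearly neither 0 nor 1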

    What about the algebraic form of \(σ\) ? How can we understand that? In fact, the exact form of \(σ\) isn't so important - what really matters is the shape of the function when plotted. Here's the shape:

    [Figure: plot of the sigmoid function, an S-shaped curve rising smoothly from 0 to 1 as z increases.]

    This shape is a smoothed out version of a step function:

    [Figure: plot of the step function, jumping from 0 to 1 at z = 0.]
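
    If you'd like to reproduce the two plots above yourself, here is a minimal sketch (assuming NumPy and matplotlib are available; neither is mentioned in the text):

        import numpy as np
        import matplotlib.pyplot as plt

        z = np.linspace(-8, 8, 400)
        sigma = 1.0 / (1.0 + np.exp(-z))   # the smooth, S-shaped sigmoid curve
        step = (z >= 0).astype(float)      # the step function it smooths out

        plt.plot(z, sigma, label="sigmoid function")
        plt.plot(z, step, label="step function")
        plt.xlabel("z")
        plt.legend()
        plt.show()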

    If \(σ\) had in fact been a step function, then the sigmoid neuron would be a perceptron, since the output would be \(1\) or \(0\) depending on whether \(w⋅x + b\) was positive or negative (actually, when \(w⋅x + b = 0\) the perceptron outputs \(0\), while the step function outputs \(1\); so, strictly speaking, we'd need to modify the step function at that one point, but you get the idea). By using the actual \(σ\) function we get, as already implied above, a smoothed out perceptron. Indeed, it's the smoothness of the \(σ\) function that is the crucial fact, not its detailed form. The smoothness of \(σ\) means that small changes \(Δw_j\) in the weights and \(Δb\) in the bias will produce a small change \(Δ output\) in the output from the neuron. In fact, calculus tells us that \(Δ output\) is well approximated by

    \[ Δ output ≈ \sum_j \frac{∂ output}{∂w_j}Δw_j + \frac{∂ output}{∂b} Δb, \]

    where the sum is over all the weights, \(w_j\), and \(∂output / ∂w_j\) and \(∂output / ∂b\) denote partial derivatives of the output with respect to \(w_j\) and \(b\), respectively. Don't panic if you're not comfortable with partial derivatives! While the expression above looks complicated, with all the partial derivatives, it's actually saying something very simple (and which is very good news): \(Δ output\) is a linear function of the changes \(Δw_j\) and \(Δb\) in the weights and bias. This linearity makes it easy to choose small changes in the weights and biases to achieve any desired small change in the output. So while sigmoid neurons have much of the same qualitative behaviour as perceptrons, they make it much easier to figure out how changing the weights and biases will change the output.
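
    Here is a small numerical sketch of this approximation, with arbitrary illustrative values. It uses the standard identity \(σ′(z) = σ(z)(1−σ(z))\), which isn't derived in the text above, to compute the partial derivatives, and compares the linear estimate of \(Δ output\) against the exact change:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def output(w, x, b):
            return sigmoid(np.dot(w, x) + b)

        # Arbitrary illustrative weights, inputs, and bias.
        w = np.array([0.7, -1.2, 0.4])
        x = np.array([0.638, 0.2, 1.0])
        b = -0.5

        # Partial derivatives: d(output)/dw_j = sigma'(z) * x_j and d(output)/db = sigma'(z),
        # where sigma'(z) = sigma(z) * (1 - sigma(z)).
        z = np.dot(w, x) + b
        sprime = sigmoid(z) * (1 - sigmoid(z))
        d_output_dw = sprime * x
        d_output_db = sprime

        # Small changes in the weights and bias.
        dw = np.array([0.01, -0.02, 0.005])
        db = 0.01

        exact_change = output(w + dw, x, b + db) - output(w, x, b)
        linear_estimate = np.dot(d_output_dw, dw) + d_output_db * db
        print(exact_change, linear_estimate)  # the two numbers agree to several decimal places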

    If it's the shape of \(σ\) which really matters, and not its exact form, then why use the particular form for \(σ\) defined above? In fact, later in the book we will occasionally consider neurons where the output is \(f(w⋅x+b)\) for some other activation function \(f(⋅)\). The main thing that changes when we use a different activation function is that the particular values for the partial derivatives in the expression for \(Δ output\) above change. It turns out that when we compute those partial derivatives later, using \(σ\) will simplify the algebra, simply because exponentials have lovely properties when differentiated. In any case, \(σ\) is commonly used in work on neural nets, and is the activation function we'll use most often in this book.

    How should we interpret the output from a sigmoid neuron? Obviously, one big difference between perceptrons and sigmoid neurons is that sigmoid neurons don't just output \(0\) or \(1\). They can have as output any real number between \(0\) and \(1\), so values such as \(0.173…\) and \(0.689…\) are legitimate outputs. This can be useful, for example, if we want to use the output value to represent the average intensity of the pixels in an image input to a neural network. But sometimes it can be a nuisance. Suppose we want the output from the network to indicate either "the input image is a 9" or "the input image is not a 9". Obviously, it'd be easiest to do this if the output was a \(0\) or a \(1\), as in a perceptron. But in practice we can set up a convention to deal with this, for example, by deciding to interpret any output of at least \(0.5\) as indicating a "9", and any output less than \(0.5\) as indicating "not a 9". I'll always explicitly state when we're using such a convention, so it shouldn't cause any confusion.
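
    In code, that convention is just a threshold test; a trivial sketch (the function name is made up for illustration):

        def interpret_as_nine(network_output, threshold=0.5):
            # Apply the 0.5 convention described above to a sigmoid output.
            return "9" if network_output >= threshold else "not a 9"

        print(interpret_as_nine(0.689))  # "9"
        print(interpret_as_nine(0.173))  # "not a 9"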

    Exercises

    • Sigmoid neurons simulating perceptrons, part I
      Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, \(c\gt0\). Show that the behaviour of the network doesn't change.
    • Sigmoid neurons simulating perceptrons, part II
      Suppose we have the same setup as the last problem - a network of perceptrons. Suppose also that the overall input to the network of perceptrons has been chosen. We won't need the actual input value; we just need the input to have been fixed. Suppose the weights and biases are such that \(w⋅x + b ≠ 0\) for the input \(x\) to any particular perceptron in the network. Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant \(c\gt0\). Show that in the limit as \(c →∞\) the behaviour of this network of sigmoid neurons is exactly the same as the network of perceptrons. How can this fail when \(w⋅x + b = 0\) for one of the perceptrons? (A rough numerical illustration of this limit appears in the sketch after these exercises.)
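
    The following is not a solution, but a rough numerical illustration of the limit in part II for a single neuron, using an arbitrary fixed, non-zero value of \(z = w⋅x + b\):

        from math import exp

        def sigmoid(z):
            return 1.0 / (1.0 + exp(-z))

        # Scaling the weights and bias by c scales z = w.x + b by c, and sigmoid(c*z)
        # approaches the perceptron's 0/1 output as c grows (here z > 0, so the limit is 1).
        z = 0.3  # an arbitrary fixed, non-zero value of w.x + b
        for c in (1, 10, 100, 1000):
            print(c, sigmoid(c * z))
        # With z = 0 the output would stay at 0.5 for every c, which is why the limit fails there.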

    This page titled 1.3: Sigmoid neurons is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Michael Nielsen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
