
3.6: Other techniques


    Each technique developed in this chapter is valuable to know in its own right, but that's not the only reason I've explained them. The larger point is to familiarize you with some of the problems which can occur in neural networks, and with a style of analysis which can help overcome those problems. In a sense, we've been learning how to think about neural nets. Over the remainder of this chapter I briefly sketch a handful of other techniques. These sketches are less in-depth than the earlier discussions, but should convey some feeling for the diversity of techniques available for use in neural networks.

    Variations on stochastic gradient descent

    Stochastic gradient descent by backpropagation has served us well in attacking the MNIST digit classification problem. However, there are many other approaches to optimizing the cost function, and sometimes those other approaches offer performance superior to mini-batch stochastic gradient descent. In this section I sketch two such approaches, the Hessian and momentum techniques.

    Hessian technique: To begin our discussion it helps to put neural networks aside for a bit. Instead, we're just going to consider the abstract problem of minimizing a cost function \(C\) which is a function of many variables, \(w = w_1, w_2, …\), so \(C=C(w)\). By Taylor's theorem, the cost function can be approximated near a point \(w\) by

    \[ C(w+Δw)=C(w)+\sum_j{\frac{∂C}{∂w_j}Δw_j}+\frac{1}{2}\sum_{jk}{Δw_j\frac{∂^2C}{∂w_j∂w_k}Δw_k}+…\label{103}\tag{103} \]

    We can rewrite this more compactly as

    \[ C(w+Δw)=C(w)+∇C⋅Δw+\frac{1}{2}Δw^THΔw +...,\label{104}\tag{104} \]

    where \(∇C\) is the usual gradient vector, and \(H\) is a matrix known as the Hessian matrix, whose \(jk\)th entry is \(∂^2C/∂w_j∂w_k\). Suppose we approximate \(C\) by discarding the higher-order terms represented by \(…\) above,

    \[ C(w+Δw)≈C(w)+∇C⋅Δw+\frac{1}{2}Δw^THΔw.\label{105}\tag{105} \]

    Using calculus we can show that the expression on the right-hand side can be minimized* by choosing

    \[ Δw=−H^{−1}∇C.\label{106}\tag{106} \]

    *Strictly speaking, for this to be a minimum, and not merely an extremum, we need to assume that the Hessian matrix is positive definite. Intuitively, this means that the function \(C\) looks like a valley locally, not a mountain or a saddle.
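    For readers who want the calculus spelled out, here is a brief sketch (my addition, assuming \(H\) is symmetric and invertible) of where Equation \(\ref{106}\) comes from. Treating the right-hand side of \(\ref{105}\) as a function of \(Δw\) and setting its gradient with respect to \(Δw\) to zero gives

    \[ ∇C+HΔw=0 \quad⟹\quad Δw=−H^{−1}∇C, \]

    which is the update in Equation \(\ref{106}\). Positive definiteness of \(H\), as in the footnote above, guarantees that this stationary point is a minimum rather than a maximum or saddle.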

    Provided \(\ref{105}\) is a good approximate expression for the cost function, then we'd expect that moving from the point \(w\) to \(w+Δw=w−H^{−1}∇C\) should significantly decrease the cost function. That suggests a possible algorithm for minimizing the cost:

    • Choose a starting point, \(w\).
    • Update \(w\) to a new point \(w′=w−H^{−1}∇C\), where the Hessian \(H\) and \(∇C\) are computed at \(w\).
    • Update \(w′\) to a new point \(w′′=w′−H′^{−1}∇′C\), where the Hessian \(H′\) and \(∇′C\) are computed at \(w′\).
    • …

    In practice, \(\ref{105}\) is only an approximation, and it's better to take smaller steps. We do this by repeatedly changing \(w\) by an amount \(Δw=−ηH^{−1}∇C\), where \(η\) is known as the learning rate.
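    To make the procedure concrete, here is a minimal sketch in Python of the damped update \(Δw=−ηH^{−1}∇C\), applied to a small made-up quadratic cost rather than to a neural network. The matrix \(A\), the vector \(b\), and the function names are illustrative assumptions of mine, not anything from the book's code.

    ```python
    import numpy as np

    # Toy quadratic cost C(w) = 0.5 w^T A w - b.w, chosen only to illustrate the
    # Hessian update; for this cost the gradient is A w - b and the Hessian is A.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])        # symmetric and positive definite
    b = np.array([1.0, -1.0])

    def cost(w):     return 0.5 * w @ A @ w - b @ w
    def gradient(w): return A @ w - b
    def hessian(w):  return A         # constant for a quadratic cost

    eta = 1.0                         # learning rate; eta = 1 is the full Newton step
    w = np.array([5.0, 5.0])
    for step in range(5):
        grad, H = gradient(w), hessian(w)
        w = w - eta * np.linalg.solve(H, grad)   # Delta w = -eta * H^{-1} grad C
        print("step", step, "cost", cost(w))
    ```

    On a purely quadratic cost the \(η=1\) step lands on the minimum in a single update; on a real cost function, where \(\ref{105}\) is only approximate, smaller values of \(η\) are safer, just as described above.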

    This approach to minimizing a cost function is known as the Hessian technique or Hessian optimization. There are theoretical and empirical results showing that Hessian methods converge on a minimum in fewer steps than standard gradient descent. In particular, by incorporating information about second-order changes in the cost function it's possible for the Hessian approach to avoid many pathologies that can occur in gradient descent. Furthermore, there are versions of the backpropagation algorithm which can be used to compute the Hessian.

    If Hessian optimization is so great, why aren't we using it in our neural networks? Unfortunately, while it has many desirable properties, it has one very undesirable property: it's very difficult to apply in practice. Part of the problem is the sheer size of the Hessian matrix. Suppose you have a neural network with \(10^7\) weights and biases. Then the corresponding Hessian matrix will contain \(10^7×10^7=10^{14}\) entries. That's a lot of entries! And that makes computing \(H^{−1}∇C\) extremely difficult in practice. However, that doesn't mean that it's not useful to understand. In fact, there are many variations on gradient descent which are inspired by Hessian optimization, but which avoid the problem with overly-large matrices. Let's take a look at one such technique, momentum-based gradient descent.

    Momentum-based gradient descent: Intuitively, the advantage Hessian optimization has is that it incorporates not just information about the gradient, but also information about how the gradient is changing. Momentum-based gradient descent is based on a similar intuition, but avoids large matrices of second derivatives. To understand the momentum technique, think back to our original picture of gradient descent, in which we considered a ball rolling down into a valley. At the time, we observed that gradient descent is, despite its name, only loosely similar to a ball falling to the bottom of a valley. The momentum technique modifies gradient descent in two ways that make it more similar to the physical picture. First, it introduces a notion of "velocity" for the parameters we're trying to optimize. The gradient acts to change the velocity, not (directly) the "position", in much the same way as physical forces change the velocity, and only indirectly affect position. Second, the momentum method introduces a kind of friction term, which tends to gradually reduce the velocity.

    Let's give a more precise mathematical description. We introduce velocity variables \(v = v_1, v_2, …\), one for each corresponding \(w_j\) variable*

    *In a neural net the \(w_j\) variables would, of course, include all weights and biases.

    Then we replace the gradient descent update rule \(w→w′=w−η∇C\) by

    \[ v→v′=μv−η∇C\label{107}\tag{107} \]

    \[ w→w′=w+v′.\label{108}\tag{108} \]

    In these equations, \(μ\) is a hyper-parameter which controls the amount of damping or friction in the system. To understand the meaning of the equations it's helpful to first consider the case where \(μ=1\), which corresponds to no friction. When that's the case, inspection of the equations shows that the "force" \(∇C\) is now modifying the velocity, \(v\), and the velocity is controlling the rate of change of \(w\). Intuitively, we build up the velocity by repeatedly adding gradient terms to it. That means that if the gradient is in (roughly) the same direction through several rounds of learning, we can build up quite a bit of steam moving in that direction. Think, for example, of what happens if we're moving straight down a slope:

    [Figure: a slope, illustrating how the velocity builds up with each step of momentum-based descent]

    With each step the velocity gets larger down the slope, so we move more and more quickly to the bottom of the valley. This can enable the momentum technique to work much faster than standard gradient descent. Of course, a problem is that once we reach the bottom of the valley we will overshoot. Or, if the gradient should change rapidly, then we could find ourselves moving in the wrong direction. That's the reason for the \(μ\) hyper-parameter in \(\ref{107}\). I said earlier that \(μ\) controls the amount of friction in the system; to be a little more precise, you should think of \(1−μ\) as the amount of friction in the system. When \(μ=1\), as we've seen, there is no friction, and the velocity is completely driven by the gradient \(∇C\). By contrast, when \(μ=0\) there's a lot of friction, the velocity can't build up, and Equations \(\ref{107}\) and \(\ref{108}\) reduce to the usual equation for gradient descent, \(w→w′=w−η∇C\). In practice, using a value of \(μ\) intermediate between \(0\) and \(1\) can give us much of the benefit of being able to build up speed, but without causing overshooting. We can choose such a value for \(μ\) using the held-out validation data, in much the same way as we select \(η\) and \(λ\).

    I've avoided naming the hyper-parameter \(μ\) up to now. The reason is that the standard name for \(μ\) is badly chosen: it's called the momentum co-efficient. This is potentially confusing, since \(μ\) is not at all the same as the notion of momentum from physics. Rather, it is much more closely related to friction. However, the term momentum co-efficient is widely used, so we will continue to use it.

    A nice thing about the momentum technique is that it takes almost no work to modify an implementation of gradient descent to incorporate momentum. We can still use backpropagation to compute the gradients, just as before, and use ideas such as sampling stochastically chosen mini-batches. In this way, we can get some of the advantages of the Hessian technique, using information about how the gradient is changing. But it's done without the disadvantages, and with only minor modifications to our code. In practice, the momentum technique is commonly used, and often speeds up learning.
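    To show just how small the change is, here is a sketch in Python of a single momentum update following Equations \(\ref{107}\) and \(\ref{108}\). The function name and the toy cost are my own illustrative choices, not code from network2.py; in the actual network the gradient would come from backpropagation on a stochastically chosen mini-batch.

    ```python
    import numpy as np

    def momentum_update(w, v, grad_C, eta=0.1, mu=0.9):
        """One momentum step, following Equations (107) and (108).

        w, v   -- parameters and their velocities (arrays of the same shape)
        grad_C -- gradient of the cost at w (e.g. from backpropagation)
        mu     -- momentum coefficient; mu = 0 recovers plain gradient descent
        """
        v = mu * v - eta * grad_C    # (107): friction term plus the "force" -eta*grad_C
        w = w + v                    # (108): the velocity, not the gradient, moves w
        return w, v

    # Illustrative use on the toy cost C(w) = 0.5*||w||^2, whose gradient is just w:
    w = np.array([4.0, -3.0])
    v = np.zeros_like(w)
    for _ in range(100):
        w, v = momentum_update(w, v, grad_C=w)
    print(w)   # approaches the minimum at the origin
    ```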

    Exercise

    • What would go wrong if we used \(μ>1\) in the momentum technique?
    • What would go wrong if we used \(μ<0\) in the momentum technique?

    Problem

    • Add momentum-based stochastic gradient descent to network2.py.

    Other approaches to minimizing the cost function: Many other approaches to minimizing the cost function have been developed, and there isn't universal agreement on which is the best approach. As you go deeper into neural networks it's worth digging into the other techniques, understanding how they work, their strengths and weaknesses, and how to apply them in practice. A paper I mentioned earlier* introduces and compares several of these techniques, including conjugate gradient descent and the BFGS method (see also the closely related limited-memory BFGS method, known as L-BFGS).

    *Efficient BackProp, by Yann LeCun, Léon Bottou, Genevieve Orr and Klaus-Robert Müller (1998).

    Another technique which has recently shown promising results* is Nesterov's accelerated gradient technique, which improves on the momentum technique. However, for many problems, plain stochastic gradient descent works well, especially if momentum is used, and so we'll stick to stochastic gradient descent through the remainder of this book.

    *See, for example, On the importance of initialization and momentum in deep learning, by Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton (2012).

    Other models of artificial neuron

    Up to now we've built our neural networks using sigmoid neurons. In principle, a network built from sigmoid neurons can compute any function. In practice, however, networks built using other model neurons sometimes outperform sigmoid networks. Depending on the application, networks based on such alternate models may learn faster, generalize better to test data, or perhaps do both. Let me mention a couple of alternate model neurons, to give you the flavor of some variations in common use.

    Perhaps the simplest variation is the tanh (pronounced "tanch") neuron, which replaces the sigmoid function by the hyperbolic tangent function. The output of a tanh neuron with input \(x\), weight vector \(w\), and bias \(b\) is given by

    \[ tanh(w⋅x+b),\label{109}\tag{109} \]

    where \(tanh\) is, of course, the hyperbolic tangent function. It turns out that this is very closely related to the sigmoid neuron. To see this, recall that the \(tanh\) function is defined by

    \[ tanh(z)≡\frac{e^z−e^{−z}}{e^z+e^{−z}}.\label{110}\tag{110} \]

    With a little algebra it can easily be verified that

    \[ σ(z)=\frac{1+tanh(z/2)}{2},\label{111}\tag{111} \]

    that is, \(tanh\) is just a rescaled version of the sigmoid function. We can also see graphically that the \(tanh\) function has the same shape as the sigmoid function,

    [Figure: graph of the \(tanh\) function, which has the same shape as the sigmoid]

    One difference between \(tanh\) neurons and sigmoid neurons is that the output from \(tanh\) neurons ranges from -1 to 1, not 0 to 1. This means that if you're going to build a network based on \(tanh\) neurons you may need to normalize your outputs (and, depending on the details of the application, possibly your inputs) a little differently than in sigmoid networks.
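    As a quick numerical sanity check of Equation \(\ref{111}\) (only a spot check, not the proof asked for in the exercise below), the following Python snippet compares the two sides on a grid of points:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = np.linspace(-10.0, 10.0, 1001)
    # sigma(z) should equal (1 + tanh(z/2)) / 2 everywhere, up to rounding error.
    print(np.allclose(sigmoid(z), (1.0 + np.tanh(z / 2.0)) / 2.0))   # True
    ```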

    Similar to sigmoid neurons, a network of \(tanh\) neurons can, in principle, compute any function* mapping inputs to the range -1 to 1. Furthermore, ideas such as backpropagation and stochastic gradient descent are as easily applied to a network of \(tanh\) neurons as to a network of sigmoid neurons.

    *There are some technical caveats to this statement for both \(tanh\) and sigmoid neurons, as well as for the rectified linear neurons discussed below. However, informally it's usually fine to think of neural networks as being able to approximate any function to arbitrary accuracy.

    Exercise

    • Prove the identity in Equation \(\ref{111}\).

    Which type of neuron should you use in your networks, the \(tanh\) or sigmoid? A priori the answer is not obvious, to put it mildly! However, there are theoretical arguments and some empirical evidence to suggest that the \(tanh\) sometimes performs better.*

    *See, for example, Efficient BackProp, by Yann LeCun, Léon Bottou, Genevieve Orr and Klaus-Robert Müller (1998), and Understanding the difficulty of training deep feedforward networks, by Xavier Glorot and Yoshua Bengio (2010).

    Let me briefly give you the flavor of one of the theoretical arguments for \(tanh\) neurons. Suppose we're using sigmoid neurons, so all activations in our network are positive. Let's consider the weights \(w^{l+1}_{jk}\) input to the \(j\)th neuron in the \((l+1)\)th layer. The rules for backpropagation (see here) tell us that the associated gradient will be \(a^l_kδ^{l+1}_j\). Because the activations are positive the sign of this gradient will be the same as the sign of \(δ^{l+1}_j\). What this means is that if \(δ^{l+1}_j\) is positive then all the weights \(w^{l+1}_{jk}\) will decrease during gradient descent, while if \(δ^{l+1}_j\) is negative then all the weights \(w^{l+1}_{jk}\) will increase during gradient descent. In other words, all weights to the same neuron must either increase together or decrease together. That's a problem, since some of the weights may need to increase while others need to decrease. That can only happen if some of the input activations have different signs. That suggests replacing the sigmoid by an activation function, such as \(tanh\), which allows both positive and negative activations. Indeed, because \(tanh\) is symmetric about zero, \(tanh(−z)=−tanh(z)\), we might even expect that, roughly speaking, the activations in hidden layers would be equally balanced between positive and negative. That would help ensure that there is no systematic bias for the weight updates to be one way or the other.
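    Here is a small numerical sketch of that argument in Python; the particular numbers are arbitrary and purely illustrative. With all-positive (sigmoid-style) activations every gradient \(a^l_kδ^{l+1}_j\) feeding one neuron shares the sign of \(δ^{l+1}_j\), while tanh-style activations, which can be negative, let the individual weight updates point in different directions.

    ```python
    import numpy as np

    delta = 0.7    # delta^{l+1}_j for a single neuron (positive in this example)

    # Hypothetical activations feeding that neuron from the previous layer:
    a_sigmoid = np.array([0.8, 0.3, 0.5, 0.9, 0.1, 0.6])     # sigmoid: all positive
    a_tanh    = np.array([0.8, -0.3, 0.5, -0.9, 0.1, -0.6])  # tanh: mixed signs

    # Gradients dC/dw^{l+1}_{jk} = a^l_k * delta^{l+1}_j for each incoming weight:
    print(np.sign(a_sigmoid * delta))   # [1. 1. 1. 1. 1. 1.] -- all weights move together
    print(np.sign(a_tanh * delta))      # [ 1. -1.  1. -1.  1. -1.] -- updates can differ
    ```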

    How seriously should we take this argument? While the argument is suggestive, it's a heuristic, not a rigorous proof that \(tanh\) neurons outperform sigmoid neurons. Perhaps there are other properties of the sigmoid neuron which compensate for this problem? Indeed, for many tasks the \(tanh\) is found empirically to provide only a small or no improvement in performance over sigmoid neurons. Unfortunately, we don't yet have hard-and-fast rules to know which neuron types will learn fastest, or give the best generalization performance, for any particular application.

    Another variation on the sigmoid neuron is the rectified linear neuron or rectified linear unit. The output of a rectified linear unit with input \(x\), weight vector \(w\), and bias \(b\) is given by

    \[ max(0,w⋅x+b).\label{112}\tag{112} \]

    Graphically, the rectifying function \(max(0,z)\) looks like this:

    [Figure: graph of the rectifying function \(max(0,z)\), which is zero for \(z<0\) and increases linearly for \(z>0\)]

    Obviously such neurons are quite different from both sigmoid and \(tanh\) neurons. However, like the sigmoid and \(tanh\) neurons, rectified linear units can be used to compute any function, and they can be trained using ideas such as backpropagation and stochastic gradient descent.

    When should you use rectified linear units instead of sigmoid or \(tanh\) neurons? Some recent work on image recognition* has found considerable benefit in using rectified linear units through much of the network.

    *See, for example, What is the Best Multi-Stage Architecture for Object Recognition?, by Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato and Yann LeCun (2009), Deep Sparse Rectifier Neural Networks, by Xavier Glorot, Antoine Bordes, and Yoshua Bengio (2011), and ImageNet Classification with Deep Convolutional Neural Networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton (2012). Note that these papers fill in important details about how to set up the output layer, cost function, and regularization in networks using rectified linear units. I've glossed over all these details in this brief account. The papers also discuss in more detail the benefits and drawbacks of using rectified linear units. Another informative paper is Rectified Linear Units Improve Restricted Boltzmann Machines, by Vinod Nair and Geoffrey Hinton (2010), which demonstrates the benefits of using rectified linear units in a somewhat different approach to neural networks.

    However, as with \(tanh\) neurons, we do not yet have a really deep understanding of when, exactly, rectified linear units are preferable, nor why. To give you the flavor of some of the issues, recall that sigmoid neurons stop learning when they saturate, i.e., when their output is near either \(0\) or \(1\). As we've seen repeatedly in this chapter, the problem is that \(σ′\) terms reduce the gradient, and that slows down learning. \(tanh\) neurons suffer from a similar problem when they saturate. By contrast, increasing the weighted input to a rectified linear unit will never cause it to saturate, and so there is no corresponding learning slowdown. On the other hand, when the weighted input to a rectified linear unit is negative, the gradient vanishes, and so the neuron stops learning entirely. These are just two of the many issues that make it non-trivial to understand when and why rectified linear units perform better than sigmoid or \(tanh\) neurons.
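    To illustrate the saturation and "dead unit" behaviour just described, here is a short Python comparison of my own (not code from the book) of the derivatives of the three activation functions at a few weighted inputs:

    ```python
    import numpy as np

    def sigmoid_prime(z):
        s = 1.0 / (1.0 + np.exp(-z))
        return s * (1.0 - s)

    def tanh_prime(z):
        return 1.0 - np.tanh(z) ** 2

    def relu_prime(z):
        return (z > 0).astype(float)

    z = np.array([-20.0, -1.0, 0.5, 20.0])
    print(sigmoid_prime(z))   # ~0 at large |z|: a saturated sigmoid neuron learns very slowly
    print(tanh_prime(z))      # the same saturation at large |z| for tanh
    print(relu_prime(z))      # 1 whenever z > 0 (no slowdown), but exactly 0 for z < 0
    ```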

    I've painted a picture of uncertainty here, stressing that we do not yet have a solid theory of how activation functions should be chosen. Indeed, the problem is harder even than I have described, for there are infinitely many possible activation functions. Which is the best for any given problem? Which will result in a network which learns fastest? Which will give the highest test accuracies? I am surprised how little really deep and systematic investigation has been done of these questions. Ideally, we'd have a theory which tells us, in detail, how to choose (and perhaps modify-on-the-fly) our activation functions. On the other hand, we shouldn't let the lack of a full theory stop us! We have powerful tools already at hand, and can make a lot of progress with those tools. Through the remainder of this book I'll continue to use sigmoid neurons as our go-to neuron, since they're powerful and provide concrete illustrations of the core ideas about neural nets. But keep in the back of your mind that these same ideas can be applied to other types of neuron, and that there are sometimes advantages in doing so.

    On stories in neural networks

    Question: How do you approach utilizing and researching machine learning techniques that are supported almost entirely empirically, as opposed to mathematically? Also in what situations have you noticed some of these techniques fail?

    Answer: You have to realize that our theoretical tools are very weak. Sometimes, we have good mathematical intuitions for why a particular technique should work. Sometimes our intuition ends up being wrong [...] The questions become: how well does my method work on this particular problem, and how large is the set of problems on which it works well.

    - Question and answer with neural networks researcher Yann LeCun

    Once, attending a conference on the foundations of quantum mechanics, I noticed what seemed to me a most curious verbal habit: when talks finished, questions from the audience often began with "I'm very sympathetic to your point of view, but [...]". Quantum foundations was not my usual field, and I noticed this style of questioning because at other scientific conferences I'd rarely or never heard a questioner express their sympathy for the point of view of the speaker. At the time, I thought the prevalence of the question suggested that little genuine progress was being made in quantum foundations, and people were merely spinning their wheels. Later, I realized that assessment was too harsh. The speakers were wrestling with some of the hardest problems human minds have ever confronted. Of course progress was slow! But there was still value in hearing updates on how people were thinking, even if they didn't always have unarguable new progress to report.

    You may have noticed a verbal tic similar to "I'm very sympathetic [...]" in the current book. To explain what we're seeing I've often fallen back on saying "Heuristically, [...]", or "Roughly speaking, [...]", following up with a story to explain some phenomenon or other. These stories are plausible, but the empirical evidence I've presented has often been pretty thin. If you look through the research literature you'll see that stories in a similar style appear in many research papers on neural nets, often with thin supporting evidence. What should we think about such stories?

    In many parts of science - especially those parts that deal with simple phenomena - it's possible to obtain very solid, very reliable evidence for quite general hypotheses. But in neural networks there are large numbers of parameters and hyper-parameters, and extremely complex interactions between them. In such extraordinarily complex systems it's exceedingly difficult to establish reliable general statements. Understanding neural networks in their full generality is a problem that, like quantum foundations, tests the limits of the human mind. Instead, we often make do with evidence for or against a few specific instances of a general statement. As a result those statements sometimes later need to be modified or abandoned, when new evidence comes to light.

    One way of viewing this situation is that any heuristic story about neural networks carries with it an implied challenge. For example, consider the statement I quoted earlier, explaining why dropout works*: "This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons." This is a rich, provocative statement, and one could build a fruitful research program entirely around unpacking the statement, figuring out what in it is true, what is false, what needs variation and refinement. Indeed, there is now a small industry of researchers who are investigating dropout (and many variations), trying to understand how it works, and what its limits are. And so it goes with many of the heuristics we've discussed. Each heuristic is not just a (potential) explanation, it's also a challenge to investigate and understand in more detail.

    *From ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton (2012).

    Of course, there is not time for any single person to investigate all these heuristic explanations in depth. It's going to take decades (or longer) for the community of neural networks researchers to develop a really powerful, evidence-based theory of how neural networks learn. Does this mean you should reject heuristic explanations as unrigorous, and not sufficiently evidence-based? No! In fact, we need such heuristics to inspire and guide our thinking. It's like the great age of exploration: the early explorers sometimes explored (and made new discoveries) on the basis of beliefs which were wrong in important ways. Later, those mistakes were corrected as we filled in our knowledge of geography. When you understand something poorly - as the explorers understood geography, and as we understand neural nets today - it's more important to explore boldly than it is to be rigorously correct in every step of your thinking. And so you should view these stories as a useful guide to how to think about neural nets, while retaining a healthy awareness of the limitations of such stories, and carefully keeping track of just how strong the evidence is for any given line of reasoning. Put another way, we need good stories to help motivate and inspire us, and rigorous in-depth investigation in order to uncover the real facts of the matter.


    This page titled 3.6: Other techniques is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Michael Nielsen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.