The challenges of training deep neural networks

This is a summary of section 4.2 of Learning Deep Architectures for AI.

  • Several experimental results show that training neural networks (supervised training from random initialization) is more difficult when the network is deep (3, 4 or 5 layers) than when it is shallow (1 or 2 layers); see Why Does Unsupervised Pre-training Help Deep Learning?. In this regime, the generalization error is consistently worse for deep models than for shallow ones (the same cannot be said about the training error). This could very well explain the absence of scientific literature on deep architectures before 2006 (with the exception of convolutional networks).
  • Proposed explanations:
    • Learning gets stuck in a local minimum or on a plateau (a region where, due to low curvature, the gradients become extremely small; because the weights barely change, it can look as if the model has already reached a local minimum)
    • Every additional layer of a deep network adds another layer of nonlinearity, which makes the optimization problem harder
    • In a deep network, one could easily end up learning only the top layer, while the lower layers remain random transformations that do not capture much of the input. This happens because, by the time the gradients reach the lower layers, they are too diluted to guide the learning process (the first sketch after this list illustrates the effect). This view is supported by the success of algorithms such as the one proposed by Weston et al. (ICML 2008), which adds an error signal from auxiliary unsupervised tasks at some or all of the intermediate layers. The diluted-gradients hypothesis is also consistent with the observation that deep convolutional networks can be trained successfully.
  • While it is clear that pre-training deep networks (using some unsupervised training procedure) helps, we still have to address the question of why it works. This has been the focus of the paper Why Does Unsupervised Pre-training Help Deep Learning? and of section 4.2 of Learning Deep Architectures for AI. The main proposed reasons are:
    • The unsupervised pre-training procedure is applied locally at each layer and therefore does not have to deal with the difficulties of learning a deep model (see the greedy layer-wise sketch after this list). Note that even training Deep Boltzmann Machines or Deep Auto-Encoders is difficult and usually requires layer-wise pre-training.
    • Unsupervised pre-training behaves like a regularizer, pushing the lower layers towards modelling the distribution of the input, P(x). The semi-supervised hypothesis says that learning aspects of P(x) can improve models of the conditional distribution of the target given the input, P(y|x), because transformations useful for P(x) are useful, or closely related, to those needed for P(y|x), i.e. the two distributions share something. Note that if there are not enough hidden units, this regularization can hurt by wasting resources on modeling aspects of P(x) that are not helpful for P(y|x).
    • It seems that the weights obtained through pre-training lie in a good region of the parameter space, from which supervised training reaches better local minima than it does from a random initialization. The learning trajectory (in parameter space) also looks quite different depending on whether the model was pre-trained or randomly initialized.
    • A supervised pre-training procedure works worse than an unsupervised one. A reason might be that supervised pre-training is too greedy: it discards projections of the data that are not helpful for the local cost but would be helpful for the global cost of the deep model. This is less likely to happen in the unsupervised case, because when modeling the input locally we model all of its aspects, not only those helpful for some local cost.
    • Unlike other regularizers, unsupervised pre-training still has an effect when the number of training examples grows very large, due to its link with non-convex optimization (even with many examples, the local minima reachable from a random initialization are still worse than those reachable from the pre-trained model).
    • One is not dealing with an optimization problem in the usual sense, because the training error can be reduced by optimizing only the last layer (and it can even reach 0 if this last layer is sufficiently large; the last sketch after this list illustrates this). However, in order to improve the generalization error one needs to be able to train the lower layers as well, so that they capture all the relevant variations and characteristics of the input.
    • One might need more hidden units when using unsupervised pre-training, because many of them end up learning features which, while useful for reconstructing the input, are not useful for the discriminative task of classification. Nonetheless, a subset of these features is helpful for classification, and much more so than those obtained from random initialization.
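
The following is a minimal sketch of the diluted-gradients point above (not taken from the paper: a hand-rolled NumPy MLP with sigmoid units, small random weights and a squared-error cost, all assumed purely for illustration). After one backward pass through a randomly initialized 5-layer network, the gradient norms of the lower layers are typically orders of magnitude smaller than those of the top layer:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_layers, width = 5, 100
    x = rng.standard_normal((32, width))   # a random mini-batch
    y = rng.standard_normal((32, width))   # a random target, just to get an error signal

    # Random initialization, as in the "no pre-training" regime discussed above.
    Ws = [rng.standard_normal((width, width)) * 0.1 for _ in range(n_layers)]

    # Forward pass, keeping the activation of every layer.
    acts = [x]
    for W in Ws:
        acts.append(sigmoid(acts[-1] @ W))

    # Backward pass for a squared-error cost, collecting the gradient norm of each W.
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
    grad_norms = []
    for i in reversed(range(n_layers)):
        grad_norms.append(np.linalg.norm(acts[i].T @ delta))
        delta = (delta @ Ws[i].T) * acts[i] * (1 - acts[i])

    for i, g in enumerate(reversed(grad_norms), start=1):
        print("layer %d: gradient norm %.3e" % (i, g))
    # With this initialization the norms shrink sharply towards layer 1: the lower
    # layers receive a much weaker (diluted) learning signal than the top layer.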
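
The greedy layer-wise unsupervised pre-training idea can be sketched as follows (again an illustrative assumption rather than the exact procedure of any particular paper: tied-weight sigmoid autoencoders trained with plain batch gradient descent). Each layer is trained locally to reconstruct its own input, so no error signal ever has to travel through the full depth, and the resulting weights can then initialize a deep network for supervised fine-tuning:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def pretrain_layer(data, n_hidden, n_epochs=50, lr=0.1):
        """Train a tied-weight autoencoder on `data`; return the encoder parameters."""
        n_in = data.shape[1]
        W = rng.standard_normal((n_in, n_hidden)) * 0.1
        b_h = np.zeros(n_hidden)
        b_v = np.zeros(n_in)
        for _ in range(n_epochs):
            h = sigmoid(data @ W + b_h)          # encode
            r = sigmoid(h @ W.T + b_v)           # decode with tied weights
            d_r = (r - data) * r * (1 - r)       # gradient at the decoder pre-activation
            d_h = (d_r @ W) * h * (1 - h)        # gradient at the encoder pre-activation
            grad_W = data.T @ d_h + (h.T @ d_r).T
            W -= lr * grad_W / len(data)
            b_h -= lr * d_h.mean(axis=0)
            b_v -= lr * d_r.mean(axis=0)
        return W, b_h

    # Greedy stacking: the representation produced by one pre-trained layer becomes
    # the training data of the next, so each layer only ever solves a shallow problem.
    X = rng.random((256, 64))                    # stand-in for the real input data
    reps, encoder = X, []
    for n_hidden in (32, 16):
        W, b = pretrain_layer(reps, n_hidden)
        encoder.append((W, b))
        reps = sigmoid(reps @ W + b)
    # `encoder` would then be used to initialize the lower layers of a deep network
    # before supervised fine-tuning of the whole model.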
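
Finally, a small sketch of the "optimizing only the last layer" point (synthetic data, a fixed random hidden layer and a least-squares fit of the output weights, all assumed for illustration). With more top-layer units than training examples the training error can reach zero, while the test error typically remains much larger, since the lower layer has not learned anything about the input:

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_in, n_hidden = 100, 1000, 20, 200   # more hidden units than training examples

    def make_data(n):
        X = rng.standard_normal((n, n_in))
        y = np.sign(np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.3)   # some nonlinear labelling rule
        return X, y

    X_tr, y_tr = make_data(n_train)
    X_te, y_te = make_data(n_test)

    W = rng.standard_normal((n_in, n_hidden))    # the lower layer stays a random transformation
    H_tr = np.tanh(X_tr @ W)
    H_te = np.tanh(X_te @ W)

    v, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)   # "train" only the output layer

    train_err = np.mean(np.sign(H_tr @ v) != y_tr)    # 0.0 here: the training set is interpolated
    test_err = np.mean(np.sign(H_te @ v) != y_te)     # typically far from 0
    print("train error %.3f, test error %.3f" % (train_err, test_err))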