Note
This section assumes the reader has already read through Classifying MNIST digits using Logistic Regression and Multilayer Perceptron. Additionally, it uses the following Theano functions and concepts: TODO
The Stacked Denoising Autoencoder (SdA) is an extension of the stacked autoencoder [Bengio07] and it was introduced in [Vincent08]. We will start the tutorial with a short discussion of autoencoders and then move on to how classical autoencoders are extended to denoising autoencoders (dA). Throughout the following subchapters we will stick as closely as possible to the original paper ([Vincent08]).
See section 4.6 of [Bengio09] for an overview of auto-encoders.
An autoencoder takes an input x \in [0,1]^d and first maps it (with an encoder) to a hidden representation y \in [0,1]^{d'} through a deterministic mapping:

    y = s(Wx + b)

where s is a non-linearity such as the sigmoid. The latent representation y, or code, is then mapped back (with a decoder) into a reconstruction z of the same shape as x through a similar transformation, namely:

    z = s(W'y + b')

where ' does not indicate transpose, and z should be seen as a prediction of x, given the code y.
The weight matrix W' of the reverse mapping may be optionally constrained by W' = W^T, which is an instance of tied weights. The parameters of this model (namely W, b, b' and, if one does not use tied weights, also W') are optimized such that the average reconstruction error is minimized. The reconstruction error can be measured using the traditional squared error L(x, z) = ||x - z||^2, or, if the input is interpreted as either bit vectors or vectors of bit probabilities, by the reconstruction cross-entropy defined as:

    L_H(x, z) = - \sum_{k=1}^{d} [ x_k \log z_k + (1 - x_k) \log(1 - z_k) ]
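To make these two error measures concrete, here is a small numerical illustration (not part of the tutorial code; the toy vectors are made up for this example):

import numpy

# a toy binary input and a reconstruction (bit probabilities)
x = numpy.array([0., 1., 1., 0.])
z = numpy.array([0.1, 0.9, 0.8, 0.2])

squared_error = numpy.sum((x - z) ** 2)
cross_entropy = -numpy.sum(x * numpy.log(z) + (1 - x) * numpy.log(1 - z))

print squared_error    # 0.10
print cross_entropy    # about 0.66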
The hope is that the code y is a distributed representation that captures the coordinates along the main factors of variation in the data: because y is viewed as a lossy compression of x, it cannot be a good compression (with small loss) for all x, so learning drives it to be one that is a good compression in particular for training examples, and hopefully for others as well (and that is the sense in which an auto-encoder generalizes), but not for arbitrary inputs.
If there is one linear hidden layer (the code) and the mean squared error criterion is used to train the network, then the k hidden units learn to project the input in the span of the first k principal components of the data. If the hidden layer is non-linear, the auto-encoder behaves differently from PCA, with the ability to capture multi-modal aspects of the input distribution.
We want to implement an auto-encoder using Theano, in the form of a class that can later be used to construct a stacked autoencoder. The first step is to create shared variables for the parameters of the autoencoder (W, b and b', since we are using tied weights in this tutorial):
import numpy
import theano
import theano.tensor as T


class AutoEncoder(object):

    def __init__(self, n_visible=784, n_hidden=500, input=None):
        # initial values for weights and biases
        # note : W' was written as `W_prime` and b' as `b_prime`

        # W is initialized with `initial_W`, which is uniformly sampled
        # from -6./sqrt(n_visible+n_hidden) and 6./sqrt(n_hidden+n_visible)
        # the output of uniform is converted using asarray to dtype
        # theano.config.floatX so that the code is runnable on GPU
        initial_W = numpy.asarray(numpy.random.uniform(
            low=-numpy.sqrt(6. / (n_visible + n_hidden)),
            high=numpy.sqrt(6. / (n_visible + n_hidden)),
            size=(n_visible, n_hidden)), dtype=theano.config.floatX)
        initial_b = numpy.zeros(n_hidden, dtype=theano.config.floatX)
        initial_b_prime = numpy.zeros(n_visible, dtype=theano.config.floatX)

        # theano shared variables for weights and biases
        self.W = theano.shared(value=initial_W, name="W")
        self.b = theano.shared(value=initial_b, name="b")
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.b_prime = theano.shared(value=initial_b_prime, name="b'")

        # symbolic input of the autoencoder; if none is given, create one
        if input is None:
            self.x = T.dmatrix(name='input')
        else:
            self.x = input
Note that we pass the input to the autoencoder as a parameter. This is so that later we can concatenate layers of autoencoders to form a deep network: the symbolic output (the y above, self.y in the code below) of the k-th layer will be the symbolic input of the (k+1)-th.
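For illustration, chaining two such autoencoders could look like the following (a hypothetical sketch; it assumes the self.y attribute computed in the code just below, and the layer sizes are arbitrary):

x = T.dmatrix('x')
first_layer  = AutoEncoder(n_visible=784, n_hidden=500, input=x)
second_layer = AutoEncoder(n_visible=500, n_hidden=400, input=first_layer.y)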
Now we can compute the latent representation and the reconstructed signal:
# latent representation (code) of the input
self.y = T.nnet.sigmoid(T.dot(self.x, self.W) + self.b)
# reconstruction of the input from the code
self.z = T.nnet.sigmoid(T.dot(self.y, self.W_prime) + self.b_prime)
# note : we sum over the size of a datapoint; if we are using minibatches,
# L will be a vector, with one entry per example in minibatch
self.L = - T.sum( self.x*T.log(self.z) + (1-self.x)*T.log(1-self.z), axis=1 )
# note : L is now a vector, where each element is the cross-entropy cost
# of the reconstruction of the corresponding example of the
# minibatch. We need to compute the average of all these to get
# the cost of the minibatch
self.cost = T.mean(self.L)
Training the autoencoder now consists of updating the parameters W, b and b_prime by stochastic gradient descent so that the cost is minimized.
train = theano.function([self.x], self.cost,
    updates={
        self.W       : self.W       - T.grad(self.cost, self.W)       * learning_rate,
        self.b       : self.b       - T.grad(self.cost, self.b)       * learning_rate,
        self.b_prime : self.b_prime - T.grad(self.cost, self.b_prime) * learning_rate})
Note that for the stacked denoising autoencoder we will not use the train function as defined here; it is shown only to illustrate how the autoencoder would be trained. In [Bengio07] autoencoders are used to build deep networks.
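As a hypothetical usage sketch (it assumes that learning_rate is defined, that the function above is stored as self.train inside __init__, and that train_set is a numpy array with one example per row; the epoch count and batch size below are arbitrary):

ae = AutoEncoder(n_visible=784, n_hidden=500)
batch_size = 20
for epoch in xrange(15):
    for start in xrange(0, train_set.shape[0], batch_size):
        # each call performs one SGD step on one minibatch
        c = ae.train(train_set[start:start + batch_size])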
One serious potential issue with auto-encoders is that if there is no other constraint besides minimizing the reconstruction error, then an auto-encoder with n inputs and an encoding of dimension at least n could potentially just learn the identity function, for which many encodings would be useless (e.g., just copying the input). Surprisingly, experiments reported
just copying the input). Surprisingly, experiments reported
in [Bengio07] suggest that in practice, when trained with
stochastic gradient descent, non-linear auto-encoders with more hidden units
than inputs (called overcomplete) yield useful representations
(in the sense of classification error measured on a network taking this representation as input). A simple explanation is based on the
observation that stochastic gradient
descent with early stopping is similar to an L2 regularization of the
parameters. To achieve perfect reconstruction of continuous
inputs, a one-hidden layer auto-encoder with non-linear hidden units
needs very small weights in the first layer (to bring the non-linearity of the hidden units into their linear regime) and very large weights in the
second layer.
With binary inputs, very large weights are
also needed to completely minimize the reconstruction error. Since the
implicit or explicit regularization makes it difficult to reach
large-weight solutions, the optimization algorithm finds encodings which
only work well for examples similar to those in the training set, which is
what we want. It means that the representation is exploiting statistical
regularities present in the training set, rather than learning to
replicate the identity function.
There are different ways that an auto-encoder with more hidden units than inputs could be prevented from learning the identity, and still capture something useful about the input in its hidden representation. One is the addition of sparsity (forcing many of the hidden units to be zero or near-zero), and it has been exploited very successfully by many. Another is to add randomness in the transformation from input to reconstruction. This is exploited in Restricted Boltzmann Machines (discussed later in this tutorial), as well as in Denoising Auto-Encoders, discussed below.
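As an illustration of the sparsity idea, one could add a penalty on the mean activation of the code to the reconstruction cost of the autoencoder above (a hypothetical variant; the L1 form and the coefficient are assumptions, not part of this tutorial's code):

# `ae` is an AutoEncoder instance as defined above
sparsity_coeff = 0.05                                   # assumed penalty weight
sparse_cost = ae.cost + sparsity_coeff * T.mean(T.abs_(ae.y))
# `sparse_cost` would then replace `ae.cost` when taking the gradients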
The idea behind denoising autoencoders is simple. In order to force the hidden layer to discover more robust features, we train the autoencoder to reconstruct the input from a corrupted version of it. The denoising auto-encoder is a stochastic version of the auto-encoder. Intuitively, a denoising auto-encoder does two things: it tries to encode the input (preserve the information about the input), and it tries to undo the effect of a corruption process stochastically applied to the input of the auto-encoder. The latter can only be done by capturing the statistical dependencies between the inputs. The denoising auto-encoder can be understood from different perspectives (the manifold learning perspective, the stochastic operator perspective, the bottom-up information-theoretic perspective, the top-down generative model perspective), all of which are explained in [Vincent08]. See also section 7.2 of [Bengio09] for an overview of auto-encoders.
In [Vincent08], the stochastic corruption process consists in randomly setting some of the inputs (as many as half of them) to zero. Hence the denoising auto-encoder is trying to predict the missing values from the non-missing values, for randomly selected subsets of missing patterns. Note how being able to predict any subset of variables from the rest is a sufficient condition for completely capturing the joint distribution between a set of variables.
To convert the autoencoder class into a denoising autoencoder, all we need to do is add a stochastic corruption step operating on the input. The input can be corrupted in many ways; in this tutorial we will stick to the original corruption mechanism of randomly masking entries of the input by setting them to zero. The code below does just that:
from theano.tensor.shared_randomstreams import RandomStreams

theano_rng = RandomStreams()
# keep each entry of x with probability 0.9, set it to zero otherwise
corrupted_x = x * theano_rng.binomial(x.shape, 1, 0.9)
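The keep probability of 0.9 used here (i.e. a 10% corruption level) is hard-coded in this tutorial; it could just as well be made a parameter, for example (a hypothetical variant, not used in the class below):

corruption_level = 0.1    # fraction of entries set to zero (an assumption)
corrupted_x = x * theano_rng.binomial(x.shape, 1, 1 - corruption_level)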
The final denoising autoencoder class becomes:
class dA(object):

    def __init__(self, n_visible=784, n_hidden=500, input=None):
        self.n_visible = n_visible
        self.n_hidden = n_hidden

        # create a Theano random generator that gives symbolic random values
        theano_rng = RandomStreams()
        # create a numpy random generator
        numpy_rng = numpy.random.RandomState()

        # initial values for weights and biases
        # note : W' was written as `W_prime` and b' as `b_prime`
        initial_W = numpy.asarray(numpy_rng.uniform(
            low=-numpy.sqrt(6. / (n_visible + n_hidden)),
            high=numpy.sqrt(6. / (n_visible + n_hidden)),
            size=(n_visible, n_hidden)), dtype=theano.config.floatX)
        initial_b = numpy.zeros(n_hidden, dtype=theano.config.floatX)
        initial_b_prime = numpy.zeros(n_visible, dtype=theano.config.floatX)

        # theano shared variables for weights and biases
        self.W = theano.shared(value=initial_W, name="W")
        self.b = theano.shared(value=initial_b, name="b")
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.b_prime = theano.shared(value=initial_b_prime, name="b'")

        # if no input is given, generate a variable representing the input
        if input is None:
            # we use a matrix because we expect a minibatch of several
            # examples, each example being a row
            self.x = T.dmatrix(name='input')
        else:
            self.x = input

        # corrupted version of the input : each entry is kept with
        # probability 0.9 and zeroed out otherwise
        self.tilde_x = theano_rng.binomial(self.x.shape, 1, 0.9) * self.x
        self.y = T.nnet.sigmoid(T.dot(self.tilde_x, self.W) + self.b)
        self.z = T.nnet.sigmoid(T.dot(self.y, self.W_prime) + self.b_prime)
        self.L = -T.sum(self.x * T.log(self.z)
                        + (1 - self.x) * T.log(1 - self.z), axis=1)
        # note : L is now a vector, where each element is the cross-entropy
        #        cost of the reconstruction of the corresponding example of
        #        the minibatch. We need to compute the average of all these
        #        to get the cost of the minibatch
        self.cost = T.mean(self.L)

        # note : y is computed from the corrupted `tilde_x`. Later on,
        #        we will need the hidden layer obtained from the uncorrupted
        #        input, for example when we pass it as input to the layer
        #        above
        self.hidden_values = T.nnet.sigmoid(T.dot(self.x, self.W) + self.b)
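A quick way to check the class (purely illustrative; the random toy minibatch below is an assumption) is to compile a function returning the corrupted input and the cost:

da = dA(n_visible=784, n_hidden=500)
show = theano.function([da.x], [da.tilde_x, da.cost])

toy_batch = numpy.random.rand(5, 784)    # 5 fake examples with values in [0, 1)
corrupted, cost = show(toy_batch)
print corrupted.shape                    # (5, 784), with roughly 10% of entries zeroed
print cost                               # the mean cross-entropy over the minibatch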
The denoising autoencoders can now be stacked to form a deep network by
feeding the latent representation (output code)
of the denoising auto-encoder found on the layer
below as input to the current layer. The unsupervised pre-training of such an
architecture is done one layer at a time. Each layer is trained as
a denoising auto-encoder by minimizing the error in reconstructing its input (which is the output code of the previous layer). Once the first k layers are trained, we can train the (k+1)-th layer because we can now compute the code or latent representation from the layer below.
Once all layers are pre-trained, the network goes through a second stage
of training called fine-tuning. Here we consider supervised fine-tuning
where we want to minimize prediction error on a supervised task.
For this we first add a logistic regression
layer on top of the network (more precisely on the output code of the
output layer). We then
train the entire network as we would train a multilayer
perceptron. At this point, we only consider the encoding parts of
each auto-encoder.
This stage is supervised, since now we use the target during
training (see the Multilayer Perceptron for details on the multilayer perceptron).
This can be easily implemented in Theano, using the class defined before for the denoising autoencoder:
class StackedAutoencoder(object):

    def __init__(self, input, n_ins, hidden_layers_sizes, n_outs):
        """ This class is made to support a variable number of layers.

        :param input: symbolic variable describing the input of the SdA
        :param n_ins: dimension of the input to the SdA
        :param hidden_layers_sizes: intermediate layer sizes, must contain
                                    at least one value
        :param n_outs: dimension of the output of the network
        """
Next, we create a denoising autoencoder for each layer and link them together:
        self.layers = []

        if len(hidden_layers_sizes) < 1:
            raise Exception('You must have at least one hidden layer')

        # add the first layer
        layer = dA(n_ins, hidden_layers_sizes[0], input=input)
        self.layers += [layer]
        # add all intermediate layers
        for i in xrange(1, len(hidden_layers_sizes)):
            # input size is that of the previous layer
            # input is the output of the last layer inserted in our list
            # of layers `self.layers`
            layer = dA(hidden_layers_sizes[i - 1],
                       hidden_layers_sizes[i],
                       input=self.layers[-1].hidden_values)
            self.layers += [layer]

        self.n_layers = len(self.layers)
Note that during the second stage of training (fine-tuning) we need to use the weights of the autoencoders to define a multilayer perceptron. This is already given by the above lines of code, in the sense that the hidden_values of the last denoising autoencoder already computes what should be the input of the logistic regression layer that sits at the top of the MLP. All we need now is to add the logistic layer. We will use the LogisticRegression class introduced in Classifying MNIST digits using Logistic Regression.
        # add a logistic layer on top
        self.logLayer = LogisticRegression(
            input=self.layers[-1].hidden_values,
            n_in=hidden_layers_sizes[-1], n_out=n_outs)
The negative log likelihood of this MLP (formed by reusing the weights of the denoising autoencoders) is given by the negative log likelihood function of the logistic layer:
    def negative_log_likelihood(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution. In our case this
        is given by the logistic layer.

        :param y: corresponds to a vector that gives, for each example, the
                  correct label
        """
        return self.logLayer.negative_log_likelihood(y)

    def errors(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch
        """
        return self.logLayer.errors(y)
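The code in the rest of this section assumes the usual setup from the previous tutorials: symbolic variables for the data and the labels, a minibatch index, learning rates, and the datasets stored in Theano shared variables (train_set_x, train_set_y and so on, built as in the Logistic Regression tutorial). A minimal sketch of this setup, with illustrative hyperparameter values (the exact values are assumptions, not prescribed by the tutorial):

index = T.lscalar()     # symbolic index of a minibatch
x = T.matrix('x')       # rasterized images, one example per row
y = T.ivector('y')      # labels, as a vector of integers

batch_size = 20
pretraining_epochs = 15
pretraining_lr = 0.001
learning_rate = 0.1

# number of minibatches, assuming `train_set_x` is a Theano shared variable
n_train_batches = train_set_x.get_value().shape[0] / batch_size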
The few lines of code below construct the stacked denoising autoencoder:
# construct the stacked denoising autoencoder class
classifier = StackedAutoencoder(input=x, n_ins=28 * 28,
                                hidden_layers_sizes=[500, 500, 500], n_outs=10)
There are two stages in training this network: a layer-wise pre-training and fine-tuning afterwards.
For the pre-training stage, we loop over all the layers of the network. For each layer we compile a Theano function that implements one SGD step towards optimizing the weights so as to reduce the reconstruction cost of that layer. This function is applied to the training set for a fixed number of epochs given by pretraining_epochs.
## Pre-train layer-wise
for i in xrange(classifier.n_layers):
    # compute gradients of layer parameters
    gW = T.grad(classifier.layers[i].cost, classifier.layers[i].W)
    gb = T.grad(classifier.layers[i].cost, classifier.layers[i].b)
    gb_prime = T.grad(classifier.layers[i].cost,
                      classifier.layers[i].b_prime)
    # updated value of parameters after each step
    new_W = classifier.layers[i].W - gW * pretraining_lr
    new_b = classifier.layers[i].b - gb * pretraining_lr
    new_b_prime = classifier.layers[i].b_prime - gb_prime * pretraining_lr
    cost = classifier.layers[i].cost
    layer_update = theano.function([index], cost,
        updates={
            classifier.layers[i].W: new_W,
            classifier.layers[i].b: new_b,
            classifier.layers[i].b_prime: new_b_prime},
        givens={
            x: train_set_x[index * batch_size:(index + 1) * batch_size]})
    # go through pretraining epochs
    for epoch in xrange(pretraining_epochs):
        # go through the training set
        for batch_index in xrange(n_train_batches):
            c = layer_update(batch_index)
        print 'Pre-training layer %i, epoch %d' % (i, epoch)
The fine-tuning loop is very similar to the one in the Multilayer Perceptron tutorial; we just have a slightly more complex training function. The reason is that now we need to update all parameters of the network in one call of the training function (this includes the weights and biases of the denoising autoencoders plus those of the logistic regression layer). To create this function, we loop over the layers and create an update list containing pairs of the form (parameter before the SGD step, parameter after the SGD step). The new value of a parameter can easily be computed by calling T.grad to compute the corresponding gradient, multiplying it by the learning rate and subtracting the result from the old value of the parameter:
# Fine-tune the entire model
# the cost we minimize during training is the negative log likelihood of
# the model
cost = classifier.negative_log_likelihood(y)

# compute the gradient of the cost with respect to the parameters of each
# layer and add the corresponding updates to the `updates` list
updates = []
for i in xrange(classifier.n_layers):
    g_W = T.grad(cost, classifier.layers[i].W)
    g_b = T.grad(cost, classifier.layers[i].b)
    new_W = classifier.layers[i].W - learning_rate * g_W
    new_b = classifier.layers[i].b - learning_rate * g_b
    updates += [(classifier.layers[i].W, new_W),
                (classifier.layers[i].b, new_b)]

# add the updates of the logistic layer
g_log_W = T.grad(cost, classifier.logLayer.W)
g_log_b = T.grad(cost, classifier.logLayer.b)
new_log_W = classifier.logLayer.W - learning_rate * g_log_W
new_log_b = classifier.logLayer.b - learning_rate * g_log_b
updates += [(classifier.logLayer.W, new_log_W),
            (classifier.logLayer.b, new_log_b)]

# compiling a theano function `train_model` that returns the cost and at
# the same time updates the parameters of the model based on the rules
# defined in `updates`
train_model = theano.function([index], cost, updates=updates,
    givens={
        x: train_set_x[index * batch_size:(index + 1) * batch_size],
        y: train_set_y[index * batch_size:(index + 1) * batch_size]})
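The validation and test functions can be generated exactly as in the previous tutorials; a sketch is shown below (valid_set_x, valid_set_y, test_set_x and test_set_y are assumed to be shared variables produced by the same data-loading code):

validate_model = theano.function([index], classifier.errors(y),
    givens={
        x: valid_set_x[index * batch_size:(index + 1) * batch_size],
        y: valid_set_y[index * batch_size:(index + 1) * batch_size]})

test_model = theano.function([index], classifier.errors(y),
    givens={
        x: test_set_x[index * batch_size:(index + 1) * batch_size],
        y: test_set_y[index * batch_size:(index + 1) * batch_size]})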
Now we pass this train_model (together with the validate_model and test_model generated as in the other tutorials) to the early stopping loop and we are done.
TODO
[Bengio07] Bengio Y., Lamblin P., Popovici D. and Larochelle H., Greedy Layer-Wise Training of Deep Networks, Advances in Neural Information Processing Systems 19 (NIPS'06), pages 153-160, MIT Press, 2007.
[Vincent08] Vincent P., Larochelle H., Bengio Y. and Manzagol P.A., Extracting and Composing Robust Features with Denoising Autoencoders, Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML'08), pages 1096-1103, ACM, 2008.
[Bengio09] Bengio Y., Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, 1(2), pages 1-127, 2009.