Probabilistic models for deep architectures

Of particular interest are Boltzmann machine models, certain variants of which are used in deep architectures such as Deep Belief Networks and Deep Boltzmann Machines. See section 5 of Learning Deep Architectures for AI.

The Boltzmann distribution is generally defined on binary variables x_i \in \{0,1\}, with

P(x) = \frac{e^{x' W x + b'x}}{\sum_{\tilde x} e^{\tilde{x}' W \tilde{x} + b'\tilde{x}}}

where the denominator is simply a normalizing constant ensuring that \sum_x P(x)=1, W_{ij} indicates the nature of the interaction between the pair of variables x_i and x_j (e.g. a positive value indicates that they prefer to take the same value), and b_i indicates the inclination of a given x_i to take the value 1.
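For intuition, here is a minimal sketch (in Python/NumPy, with a small made-up W and b) that computes this distribution by brute force, enumerating all binary configurations to obtain the normalizing constant; this is only feasible for a handful of variables, which is precisely why the denominator is problematic in practice:

    import itertools
    import numpy as np

    # Hypothetical interaction matrix W and bias vector b for 3 binary variables.
    W = np.array([[ 0.0, 1.0, -0.5],
                  [ 1.0, 0.0,  0.3],
                  [-0.5, 0.3,  0.0]])
    b = np.array([0.1, -0.2, 0.0])

    def unnormalized(x):
        # e^{x' W x + b' x}
        return np.exp(x @ W @ x + b @ x)

    # The normalizing constant: a sum over all 2^3 binary configurations.
    configs = [np.array(c) for c in itertools.product([0, 1], repeat=3)]
    Z = sum(unnormalized(x) for x in configs)

    for x in configs:
        print(x, unnormalized(x) / Z)   # the probabilities P(x) sum to 1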

Readings on probabilistic graphical models

See

Graphical models: probabilistic inference. M. I. Jordan and Y. Weiss. In M. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, 2nd edition. Cambridge, MA: MIT Press, 2002.

Certain distributions can be written P(x) for a vector of variables x=(x_1,x_2,\ldots) in the form

P(x) = \frac{1}{Z} \prod_c \psi_c(x_c)

where Z is the normalizing constant (called the partition function), and the product is over cliques (subsets x_c of elements of the vector x), and the \psi_c(.) are functions (one per clique) that indicate how the variables in each clique interact.

A particular case where Z may be simplified a bit (factorized over cliques) is the case of directed models where variables are structured as a directed acyclic graph, with a topological ordering that associates a group of parent variables parents(x_i) with each variable x_i:

P(x) = \prod_i P_i(x_i | parents(x_i))

where it can be seen that there is one clique for a variable and its parents, i.e., P_i(x_i | parents(x_i)) = \psi_i(x_i, parents(x_i))/Z_i.

In the general case (represented with an undirected graph), the potential functions \psi_c are directly parameterized, often in the space of logarithms of \psi_c, leading to a formulation known as a Markov random field:

P(x) = \frac{1}{Z} e^{-\sum_c E_c(x_c)}

where E(x) = \sum_c E_c(x_c) is called the energy function. The energy function of a Boltzmann machine is a second degree polynomial in x. The most common parameterization of Markov random fields has the following form, which is log-linear:

P(x) = \frac{1}{Z} e^{-\sum_c \theta_c f_c(x_c)}

where the only free parameters are the \theta_c, and where the complete log likelihood (when x is completely observed in each training example) is log-linear in the parameters \theta. One can easily show that the negative log likelihood is convex in \theta.

Inference

One of the most important obstacles in the practical application of the majority of probabilistic models is the difficulty of inference: given certain variables (a subset of x), predict the marginal distribution (separately for each) or joint distribution of certain other variables. Let x=(v,h) with h (hidden) being the variables we would like to predict, and v (visible) being the observed subset. One would like to calculate, or at least sample from,

P(h | v).

Inference is obviously useful if certain variables are missing, or if, while using the model, we wish to predict a certain variable (for example the class of an image) given some other variables (for example, the image itself). Note that if the model has hidden variables (variables that are never observed in the data) we do not try to predict the values directly, but we will still implicitly marginalize over these variables (sum over all configurations of these variables).

Inference is also an essential component of learning, in order to calculate gradients (as seen below in the case of Boltzmann machines) or in the use of the Expectation-Maximization (EM) algorithm which requires a marginalization over all hidden variables.

In general, exact inference has a computational cost exponential in the size of the cliques of a graph (in fact, the unobserved part of the graph) because we must consider all possible combinations of values of the variables in each clique. See section 3.4 of Graphical models: probabilistic inference for a survey of exact inference methods.

A simplified form of inference consists of calculating not the entire distribution, but only the mode (the most likely configuration of values) of the distribution:

h^* = {\rm argmax}_{h} P(h | v)

This is known as MAP (Maximum A Posteriori) inference.

Approximate inference

The two principal families of methods for approximate inference in probabilistic models are Markov chain Monte Carlo (MCMC) methods and variational inference.

The principle behind variational inference is the following. We will define a simpler model than the target model (the one that interests us), in which inference is easy, with a similar set of variables (though generally with more simple dependencies between variables than those contained in the target model). We then optimize the parameters of the simpler model so as to approximate the target model as closely as possible. Finally, we do inference using the simpler model. See section 4.2 of Graphical models: probabilistic inference for more details and a survey.

Inference with MCMC

In general P(h | v) can be exponentially expensive to represent (in terms of the number of hidden variables, because we must consider all possible configurations of h). The principle behind Monte Carlo inference is that we can approximate the distribution P(h | v) using samples from this distribution. Indeed, in practice we only need an expectation (for example, the expectation of the gradient) under this conditional distribution. We can thus approximate the desired expectation with an average of these samples.

See the page site du zéro sur Monte-Carlo (in French) for a gentle introduction.

Unfortunately, for most probabilistic models, even sampling from P(h | v) exactly is not feasible (it takes time exponential in the dimension of h). The most general approach is therefore based on an approximate sampling scheme called Markov chain Monte Carlo (MCMC).

A (first order) Markov chain is a sequence of random variables Z_1,Z_2,\ldots, where Z_k is independent of Z_{k-2}, Z_{k-3}, \ldots given Z_{k-1}:

P(Z_k | Z_{k-1}, Z_{k-2}, Z_{k-3}, \ldots) = P(Z_k | Z_{k-1})

P(Z_1 \ldots Z_n) = P(Z_1) \prod_{k=2}^n P(Z_k|Z_{k-1})

The goal of MCMC is to construct a Markov chain whose asymptotic marginal distribution, i.e. the distribution of Z_n as n \rightarrow \infty, converges towards a given target distribution, such as P(h | v) or P(x).

Gibbs sampling

Numerous MCMC-based sampling methods exist. The one most commonly used for deep architectures is Gibbs sampling. It is simple and has a certain plausible analogy with the functioning of the brain, where each neuron decides to send signals with a certain probability as a function of the signals it receives from other neurons.

Let us suppose that we wish to sample from the distribution P(x) where x is a set of variables x_i (we could optionally have a set of variables upon which we have conditioned, but this would not change the procedure, so we ignore them in the following description).

Let x_{-i}=(x_1,x_2,\ldots,x_{i-1},x_{i+1},\ldots,x_n), i.e. all variables in x excluding x_i. Gibbs sampling is performed using the following algorithm:

  • Choose an initial value of x in an arbitrary manner (random or not)
  • For each step of the Markov chain:
    • Iterate over each x_k in x
      • Draw x_k from the conditional distribution P(x_k | x_{-k})

In some cases one can group variables in x into blocks or groups of variables such that drawing samples for an entire group, given the others, is easy. In this case it is advantageous to interpret the algorithm above with x_i as the i^{\mathrm{th}} group rather than the i^{\mathrm{th}} variable. This is known as block Gibbs sampling.
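As a sketch (not part of the original text), the procedure above can be written as follows, assuming a hypothetical routine sample_conditional(k, x, rng) that draws x_k from P(x_k | x_{-k}):

    import numpy as np

    def gibbs_sampling(x0, sample_conditional, n_steps, rng=None):
        # x0                 : arbitrary initial configuration (chosen randomly or not)
        # sample_conditional : hypothetical routine (k, x, rng) -> a draw of x_k from P(x_k | x_{-k})
        # n_steps            : number of steps of the Markov chain
        rng = rng or np.random.default_rng(0)
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):           # each step of the Markov chain
            for k in range(len(x)):        # iterate over each x_k
                x[k] = sample_conditional(k, x, rng)
        return x

For block Gibbs sampling, the inner loop runs over groups of variables instead of individual ones, with sample_conditional drawing an entire group given the others.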

The gradient in a log-linear Markov random field

See Learning Deep Architectures for AI for detailed derivations.

Log-linear Markov random fields are undirected probabilistic models where the energy function is linear in the parameters \theta of the model:

P(x) \propto e^{-\sum_i \theta_i f_i(x)}

where f_i(.) are known as sufficient statistics of the model, because the expectations E[f_i(x)] are sufficient for characterizing the distribution and estimating parameters.

Note that e^{\theta_i f_i(x)} = \psi_i(x) is associated with each clique in the model (in general, only a sub-vector of x influences each f_i(x)).

Getting back to sufficient statistics, one can show that the gradient of the log likelihood is as follows:

\frac{- \partial \log P(x)}{\partial \theta_i} = f_i(x) - \sum_{\tilde x} P(\tilde x) f_i(\tilde x)

and the average gradient over training examples x_t is thus

\frac{1}{T} \sum_t \frac{-\partial \log P(x_t)}{\partial \theta_i} =
          \frac{1}{T}\sum_t f_i(x_t) - \sum_{\tilde x} P(\tilde x) f_i(\tilde x)

Thus, it is clear that the gradient vanishes when the average of the sufficient statistics under the training distribution equals their expectation under the model distribution P.

Unfortunately, calculating this gradient is difficult. We do not want to sum over all possible x, but fortunately one can obtain a Monte-Carlo approximation by one or more samples from P(x), which gives us a noisy estimate of the gradient. In general, however, even to obtain an unbiased sample from P(x) is exponentially costly, and thus one must use an MCMC method.

We refer to the terms of the gradient due to the numerator of the probability density (-f_i(x)) as the ‘positive phase’ gradient, and the terms of the gradient corresponding to the partition function (denominator of the probability density) as the ‘negative phase’ gradient.
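To make the two phases concrete, here is a minimal sketch (with my own, hypothetical function names) of the Monte-Carlo estimate of this gradient, given training examples and approximate samples from P(x):

    import numpy as np

    def nll_gradient_estimate(f, data_samples, model_samples):
        # f             : returns the vector of sufficient statistics (f_1(x), ..., f_K(x)) for a configuration x
        # data_samples  : training examples x_t                      ('positive phase' statistics)
        # model_samples : approximate MCMC samples from P(x)         ('negative phase' statistics)
        positive = np.mean([f(x) for x in data_samples], axis=0)
        negative = np.mean([f(x) for x in model_samples], axis=0)
        return positive - negative   # estimate of (1/T) sum_t -d log P(x_t) / d theta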

Marginalization over hidden variables

When a model contains hidden variables, the gradient becomes a bit more complicated since one must marginalize over the hidden variables. Let x=(v,h), with v being the visible part and h being the hidden part, with statistics from functions of the two, f_i(v,h). The average gradient of the negative log likelihood of the observed data is thus

\frac{1}{T} \sum_t \frac{-\partial \log P(v_t)}{\partial \theta_i} =
          \frac{1}{T}\sum_t \sum_h P(h|v_t) f_i(v_t,h) - \sum_{h,v} P(v,h) f_i(v,h).

In the general case, it will be necessary to resort to MCMC not only for the negative phase gradient but also for the positive phase gradient, i.e., to sample from P(h|v_t).

The Boltzmann Machine

A Boltzmann machine is an undirected probabilistic model, a particular form of log-linear Markov random field, containing both visible and hidden variables, where the energy function is a second degree polynomial of the variables x:

E(x) = -d'x - x'Ax

The classic Boltzmann machine has binary variables and inference is conducted via Gibbs sampling, which requires samples from P(x_i | x_{-i}). It can be easily shown that

P(x_i=1 | x_{-i}) = {\rm sigmoid}(d_i + \omega_i x_{-i})

where \omega_i is the i^{th} row of A excluding the i^{th} element (the diagonal of A is 0 in this model). Thus, we see the link with neural networks.
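Plugging this conditional into the Gibbs procedure described earlier gives the following sketch (assuming a zero diagonal for A and a parameterization for which the conditional above is exact):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def boltzmann_gibbs_sweep(x, d, A, rng):
        # One full Gibbs sweep in a binary Boltzmann machine, using
        # P(x_i = 1 | x_{-i}) = sigmoid(d_i + omega_i x_{-i}) as above.
        # Since the diagonal of A is assumed to be zero, A[i] @ x involves only x_{-i}.
        for i in range(len(x)):
            p = sigmoid(d[i] + A[i] @ x)
            x[i] = 1.0 if rng.random() < p else 0.0
        return x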

Restricted Boltzmann Machines

A Restricted Boltzmann Machine, or RBM, is a Boltzmann machine without lateral connections between the visible units v_i or between the hidden units h_i. The energy function thus becomes

E(v,h) = -b'h - c'v - v'W h.

where the matrix A is entirely 0 except in the submatrix W. The advantage of this connectivity restriction is that inferring P(h|v) (and also P(v|h)) becomes very easy, can be performed analytically, and the distribution factorizes:

P(h|v) = \prod_i P(h_i|v)

and

P(v|h) = \prod_i P(v_i|h)

In the case where the variables (“units”) are binary, we once again obtain a sigmoid activation probability:

P(h_j=1 | v) = {\rm sigmoid}(b_j + \sum_i W_{ij} v_i)

P(v_i=1 | h) = {\rm sigmoid}(c_i + \sum_j W_{ij} h_j)
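A minimal sketch of these two conditionals (assuming W of shape n_visible x n_hidden, b the hidden biases and c the visible biases, as in the energy function above):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sample_h_given_v(v, W, b, rng):
        # P(h_j = 1 | v) = sigmoid(b_j + sum_i W_ij v_i)
        p_h = sigmoid(b + v @ W)                    # shape (n_hidden,)
        return p_h, (rng.random(p_h.shape) < p_h).astype(float)

    def sample_v_given_h(h, W, c, rng):
        # P(v_i = 1 | h) = sigmoid(c_i + sum_j W_ij h_j)
        p_v = sigmoid(c + W @ h)                    # shape (n_visible,)
        return p_v, (rng.random(p_v.shape) < p_v).astype(float)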

Another advantage of the RBM is that the distribution P(v) can be calculated analytically up to a constant (the unknown constant being the partition function). This permits us to define a generalization of the notion of an energy function in the case when we wish to marginalize over the hidden variables: the free energy (inspired by notions from physics)

P(v) = \frac{e^{-FE(v)}}{Z} = \sum_h P(v,h) = \frac{\sum_h e^{-E(v,h)}}{Z}

FE(v) = -\log \sum_h e^{-E(v,h)}

and in the case of RBMs, we have

FE(v) = -c'v - \sum_j \log \sum_{h_j} e^{h_j (b_j + v' W_{.j})}

where the sum over h_j is a sum over the values that hidden unit j can take, which in the case of binary units yields

FE(v) = -c'v - \sum_j \log (1 + e^{b_j + v' W_{.j}})

FE(v) = -c'v - \sum_j {\rm softplus}(b_j + v' W_{.j})
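For binary hidden units this free energy is cheap to compute; a sketch, with the same shape conventions as above:

    import numpy as np

    def free_energy(v, W, b, c):
        # FE(v) = -c'v - sum_j softplus(b_j + v' W_{.j});
        # np.logaddexp(0, z) computes log(1 + e^z) in a numerically stable way.
        return -(c @ v) - np.sum(np.logaddexp(0.0, b + v @ W))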

Gibbs sampling in RBMs

Although sampling from P(h|v) is easy and immediate in an RBM, drawing samples from P(v) or from P(v,h) cannot be done exactly and is thus generally accomplished with MCMC, most commonly with block Gibbs sampling, where we take advantage of the fact that sampling from P(h|v) and P(v|h) is easy:

v^{(1)} \sim {\rm training\;\; example}

h^{(1)} \sim P(h | v^{(1)})

v^{(2)} \sim P(v | h^{(1)})

h^{(2)} \sim P(h | v^{(2)})

v^{(3)} \sim P(v | h^{(2)})

\ldots

In order to visualize the generated data at step k, it is better to use expectations (i.e. E[v^{(k)}_i|h^{(k-1)}]=P(v^{(k)}_i=1|h^{(k-1)})) which are less noisy than the samples v^{(k)} themselves.
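A sketch of this alternating (block Gibbs) chain, reusing the sample_h_given_v and sample_v_given_h routines sketched earlier:

    import numpy as np

    def rbm_gibbs_chain(v0, W, b, c, n_steps, rng=None):
        # Alternate h ~ P(h|v) and v ~ P(v|h), starting from a training example v0.
        # Returns the last sample v^{(k)} and the probabilities P(v^{(k)}_i = 1 | h^{(k-1)}),
        # which are less noisy than the samples themselves for visualization.
        rng = rng or np.random.default_rng(0)
        v, p_v = v0, v0
        for _ in range(n_steps):
            _, h = sample_h_given_v(v, W, b, rng)
            p_v, v = sample_v_given_h(h, W, c, rng)
        return v, p_v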

Training RBMs

The exact gradient of the parameters of an RBM (for an example v) is

\frac{\partial \log P(v)}{\partial W} = v \, E[h | v]' - E[v h']

\frac{\partial \log P(v)}{\partial b} = E[h | v] - E[h]

\frac{\partial \log P(v)}{\partial c} = v - E[v]

where the expectations are under the distribution of the RBM. The conditional expectations can be calculated analytically (since E[h_i | v]=P(h_i=1|v)= the output of a hidden unit, for binary h_i) but the unconditional expectations must be approximated using MCMC.

Contrastive Divergence

The first and simplest approximation of E[v h'], i.e., for obtaining ‘negative examples’ (for the ‘negative phase’ gradient), consists of running a short Gibbs chain (of k steps) beginning at a training example. This algorithm is known as CD-k (Contrastive Divergence with k steps). See algorithm 1 in Learning Deep Architectures for AI:

W \leftarrow W + \epsilon( v^{(1)} \hat{h}^{(1)'} - v^{(2)} \hat{h}^{(2)'} )

b \leftarrow b + \epsilon( \hat{h}^{(1)} - \hat{h}^{(2)} )

c \leftarrow c + \epsilon( v^{(1)} - v^{(2)} )

where \epsilon is the gradient step size; following the notation for Gibbs sampling in RBMs above, \hat{h}^{(1)} denotes the vector of probabilities \hat{h}^{(1)}_i = P(h^{(1)}_i=1|v^{(1)}) and similarly \hat{h}^{(2)}_i=P(h^{(2)}_i=1|v^{(2)}).
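Putting the pieces together, here is a sketch of one CD-1 update for a single training example, again reusing the hypothetical samplers above (epsilon is the learning rate; the parameters are updated in place):

    import numpy as np

    def cd1_update(v1, W, b, c, epsilon, rng):
        # One CD-1 step: positive statistics from the data, negative statistics after one Gibbs step.
        h1_prob, h1 = sample_h_given_v(v1, W, b, rng)   # \hat{h}^{(1)} and a sample h^{(1)}
        _, v2 = sample_v_given_h(h1, W, c, rng)         # v^{(2)} ~ P(v | h^{(1)})
        h2_prob, _ = sample_h_given_v(v2, W, b, rng)    # \hat{h}^{(2)}
        W += epsilon * (np.outer(v1, h1_prob) - np.outer(v2, h2_prob))
        b += epsilon * (h1_prob - h2_prob)
        c += epsilon * (v1 - v2)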

What is surprising is that even with k=1 we obtain RBMs that work well, in the sense that they extract good features from the data (which we can verify visually by looking at the filters or at the stochastic reconstructions after one step of Gibbs sampling, or quantitatively by initializing each layer of a deep network with the W and b obtained by pretraining an RBM at each layer).

It can be shown that CD-1 is very close to the training procedure of an autoencoder by minimizing reconstruction error, and one can see that the reconstruction error diminishes in a mostly monotonic fashion during CD-1 training.

It can also be shown that CD-k tends to the true gradient (in expected value) when k becomes large, but at the same time increases computation time by a factor of k.

Persistent Contrastive Divergence

In order to obtain a less biased estimator of the true gradient without significantly increasing the necessary computation time, we can use the Persistent Contrastive Divergence (PCD) algorithm. Rather than restarting a Gibbs chain after each presentation of a training example v, PCD keeps a chain running in order to obtain negative examples. This chain is a bit peculiar because its transition probabilities change (slowly) as we update the parameters of the RBM. Let {v^-, h^-} be the state of our negative phase chain. The learning algorithm is then

\hat{h}_i = P(h_i=1 | v)

\forall i, \hat{v}^-_i = P(v_i=1 | h^-)

v^- \sim \hat{v}^-

\forall i, \widehat{h_i}^- = P(h_i=1 | v^-)

h^- \sim \hat{h}^-

W \leftarrow W + \epsilon( v \hat{h}' - v^- \hat{h}^{-'} )

b \leftarrow b + \epsilon( \hat{h} - \hat{h}^- )

c \leftarrow c + \epsilon( v - \hat{v}^- )
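A sketch of the same update with a persistent negative-phase chain, in the same hypothetical notation as the CD-1 sketch above:

    import numpy as np

    def pcd_update(v, chain_state, W, b, c, epsilon, rng):
        # One PCD step; chain_state = (v_neg, h_neg) is the persistent negative-phase chain.
        v_neg, h_neg = chain_state
        h_prob, _ = sample_h_given_v(v, W, b, rng)                  # \hat{h} from the data
        v_neg_prob, v_neg = sample_v_given_h(h_neg, W, c, rng)      # advance the negative chain
        h_neg_prob, h_neg = sample_h_given_v(v_neg, W, b, rng)
        W += epsilon * (np.outer(v, h_prob) - np.outer(v_neg, h_neg_prob))
        b += epsilon * (h_prob - h_neg_prob)
        c += epsilon * (v - v_neg_prob)
        return v_neg, h_neg                                         # new chain state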

Experimentally we find that PCD is better in terms of generating examples that resemble the training data (and in terms of the likelihood \log P(v)) than CD-k, and is less sensitive to the initialization of the Gibbs chain.

Stacked RBMs and DBNs

RBMs can be used, like autoencoders, to pretrain a deep neural network in an unsupervised manner, with training finished in the usual supervised fashion. One stacks RBMs: the hidden-layer representation of one RBM (given its input), i.e., P(h|v) or a sample h \sim P(h|v), becomes the training data for the next RBM.

The pseudocode for greedy layer-by-layer training of a stack of RBMs is presented in section 6.1 (algorithm 2) of Learning Deep Architectures for AI. To train the k^{\mathrm{th}} RBM, we propagate forward samples h \sim P(h|v) or the posteriors P(h|v) through the k-1 previously trained RBMs and use them as data for training the k^{\mathrm{th}} RBM. They are trained one at a time: once we stop training the k^{\mathrm{th}}, we move on to the {k+1}^{\mathrm{th}}.
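A minimal sketch of this greedy procedure, assuming a hypothetical train_rbm(data, n_hidden) routine (e.g. CD-1 or PCD as above) that trains a single RBM and returns its parameters:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_rbm_stack(data, hidden_sizes, train_rbm):
        # data         : matrix of training examples, one per row
        # hidden_sizes : number of hidden units of each RBM, bottom to top
        # train_rbm    : hypothetical routine (data, n_hidden) -> (W, b, c)
        params = []
        for n_hidden in hidden_sizes:
            W, b, c = train_rbm(data, n_hidden)   # train the k-th RBM on the current representation
            params.append((W, b, c))
            data = sigmoid(b + data @ W)          # propagate the posteriors P(h|v) to the next level
        return params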

An RBM has the same parameterization as a layer in a classic neural network (with logistic sigmoid hidden units), with the difference that we use only the weights W and the biases b of the hidden units (since we only need P(h|v) and not P(v|h)).

Deep Belief Networks

We can also consider a stacking of RBMs in a generative manner, and we call these models Deep Belief Networks:

P(x,h^1,\ldots,h^{\ell}) = \left( \prod_{k=0}^{\ell-2} P(h^k | h^{k+1}) \right) P(h^{\ell-1}, h^{\ell})

where we denote x=h^0 and h^k is the random variable (vector) associated with layer k. The last two layers have a joint distribution given by an RBM (the last RBM of the stack). The RBMs below serve only to define the conditional probabilities P(h^k | h^{k+1}) of the DBN, where h^k plays the role of the visible units and h^{k+1} the role of the hidden units of RBM k+1.

Sampling from a DBN is thus performed as follows:

  • Sample h^{\ell-1} from the top RBM (number \ell), for example by running a Gibbs chain

  • For k from \ell-1 down to 1
    • sample the visible units (h^{k-1}) given the hidden units (h^k) in RBM k
  • Return h^0 = x, the last sample obtained, which is the result of generating from the DBN
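A sketch of this generative procedure, assuming a hypothetical sample_top_rbm routine that runs a Gibbs chain in the top RBM, and the per-layer parameters (W, b, c) produced by the stacking procedure above:

    import numpy as np

    def sample_from_dbn(params, sample_top_rbm, rng=None):
        # params         : list of (W, b, c) per RBM, bottom to top, as produced by the stack above
        # sample_top_rbm : hypothetical routine returning a sample h^{l-1} from the top RBM's Gibbs chain
        rng = rng or np.random.default_rng(0)
        h = sample_top_rbm()
        for W, b, c in reversed(params[:-1]):         # RBMs l-1 down to 1
            p_v = 1.0 / (1.0 + np.exp(-(c + W @ h)))  # P(h^{k-1} | h^k) for binary units
            h = (rng.random(p_v.shape) < p_v).astype(float)
        return h                                      # h^0 = x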

Unfolding an RBM and RBM - DBN equivalence

It can be shown (see section 8.1 of Learning Deep Architectures for AI) that an RBM corresponds to a DBN with a particular architecture, where the weights are shared between all the layers: level 1 of the DBN uses the weights W of the RBM, level 2 uses the weights W', level 3 uses W, etc., alternating between W and W'. The last pair of layers of the DBN is an RBM with weights W or W', depending on whether the number of layers is odd or even. Note that in this equivalence, the DBN has layer sizes that alternate (number of visible units of the RBM, number of hidden units, number of visible units, etc.)

In fact, we can continue unfolding an RBM indefinitely and equivalently obtain an infinite directed network with tied weights. See figure 13 in the same section, 8.1.

It can be seen that this infinite network corresponds exactly to an infinite Gibbs chain that leads to (finishes on) the visible layer of the original RBM, i.e. that generates the same examples. The even layers correspond to sampling P(v|h) (of the original RBM) and the odd layers to sampling P(h|v).

Finally, it can be shown that if we take an RBM and unfold it one time (mirrored), the continued training of the new RBM on top (initialized with W') maximizes a lower bound on the log likelihood of the corresponding DBN. In passing from an RBM to a DBN, we replace the marginal distribution P(h) of the RBM (which is encoded implicitly in the parameters of the RBM) with the distribution generated by the part of the DBN above this RBM (the DBN consists of all layers above h), since this h corresponds to visible units of this DBN. The proof is simple and instructive, and uses the letter Q for the probabilities according to the RBM (at the bottom) and the letter P for the probabilities according to the DBN obtained by modeling h differently (i.e. by replacing Q(h) by P(h)). We also remark that P(x|h) = Q(x|h), but this is not true for P(h|x) and Q(h|x).

\log P(x) = \left(\sum_{h} Q(h|x)\right) \log P(x) = \sum_{h} Q(h|x) \log \frac{P(x,h)}{P(h|x)}

\log P(x) = \sum_{h} Q(h|x) \log \frac{P(x,h)}{P(h|x)} \frac{Q(h|x)}{Q(h|x)}

\log P(x) = H_{Q(h|x)} + \sum_{h} Q(h|x) \log P(x, h) + \sum_{h} Q(h|x) \log \frac{Q(h|x)}{P(h|x)}

\log P(x) = KL(Q(h|x)||P(h|x)) + H_{Q(h|x)} + \sum_{h} Q(h|x) \left(\log P(h) + \log P(x|h) \right)

\log P(x) \geq \sum_{h} Q(h|x) \left(\log P(h) + \log P(x|h) \right)

This shows that one can actually increase the lower bound (last line) by doing maximum likelihood training of P(h) using as training data the h drawn from Q(h|x), where x is drawn from the training distribution of the bottom RBM. Since we have decoupled the weights below from those above, we don’t touch the bottom RBM (P(x|h) and Q(h|x)), and only modify P(h).

Approximate inference in DBNs

Unlike in the RBM, inference in DBNs (inferring the states of the hidden units given the visible units) is very difficult. Given that we initialize DBNs as a stack of RBMs, in practice the following approximation is used: sample the h^k given the h^{k-1} using the weights of level k. This would be exact inference if level k were still an isolated RBM, but it is no longer exact for the DBN.

We saw in the previous section that this is an approximation, because the marginal P(h) (of the DBN) differs from the marginal Q(h) (of the bottom RBM) once the upper weights have been modified so that they are no longer the transpose of the bottom weights, and thus P(h|x) differs from Q(h|x).

Deep Boltzmann Machines

Finally, we can also use a stack of RBMs to initialize a deep Boltzmann machine (Salakhutdinov and Hinton, AISTATS 2009). This is a Boltzmann machine organized in layers, where each layer is connected to the layer below and to the layer above, and there are no within-layer connections.

Note that the weights are in a sense twice too large when we initialize in the manner described above, since each unit now receives input from both the layer above and the layer below, whereas in the original RBMs it was one or the other. Salakhutdinov thus proposes dividing the weights by two when making the transition from a stack of RBMs to a deep Boltzmann machine.

It is also interesting to note that according to Salakhutdinov, it is crucial to initialize deep Boltzmann machines as a stack of RBMs, rather than with random weights. This suggests that the difficulty of training deep deterministic MLP networks is not unique to MLPs, and that a similar difficulty is found in deep Boltzmann machines. In both cases, the initialization of each layer according to a local training procedure seems to help a great deal. Salakhutdinov obtains better results with his deep Boltzmann machine than with an equivalent-sized DBN, although training the deep Boltzmann machine takes longer.