
Contrastive divergence and multi-layer RBMs

Boltzmann machines

The random variable $X$ may be observed, while $H$ is always hidden. Their joint distribution is given by the Boltzmann distribution associated with an energy function $energy(x,h)$:

\[  P(X=x,H=h) = \frac{exp(-energy(x,h))}{Z} \]

where $Z$ is the appropriate normalization constant (the partition function):

\[  Z = \sum_{x,h} exp(-energy(x,h)) \]

In ordinary Boltzmann machines, the energy function is a quadratic polynomial. Let $z=(x,h)$; then

\[ energy(z) = -\sum_i b_i z_i    -    \sum_{ij} w_{ij} z_i z_j \]

where $z_i \in \{0,1\}$. There is no need for a constant term since it would cancel out in $Z$.
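
As a concrete illustration, here is a minimal sketch (Python with numpy; sizes and names are illustrative) that evaluates this energy and the exact $Z$ by brute-force enumeration, which is only feasible for a handful of units:

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6                                        # total number of units, z = (x, h)
b = rng.normal(size=n)                       # biases b_i
w = np.triu(rng.normal(size=(n, n)), k=1)    # count each pair (i, j), i < j, once

def energy(z):
    # energy(z) = -sum_i b_i z_i - sum_{i<j} w_ij z_i z_j
    return -b @ z - z @ w @ z

# Z sums over all 2^n binary configurations: tractable only for tiny n
Z = sum(np.exp(-energy(np.array(z)))
        for z in itertools.product([0, 1], repeat=n))

z = np.array([1, 0, 1, 1, 0, 0])
print("P(z) =", np.exp(-energy(z)) / Z)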

If $X=x$ is observed while $H$ remains hidden, the likelihood involves a sum over all configurations of $H$:

\[  P(X=x) = \sum_h P(X=x,H=h) = \sum_h \frac{exp(-energy(x,h))}{Z} \]

In unrestricted Boltzmann machines, summing over $h$ (in the numerator) and over $x$ (in the denominator $Z$) are both intractable. We will see below that the sum over $h$ becomes tractable when interactions between hidden units are removed (this is the Restricted Boltzmann Machine).

Hence, the exact gradient of $\log P(x)$ with respect to $b$ or $w$ is also intractable in a general Boltzmann machine. The gradient can be written as the difference of two terms:

\[ \frac{\partial (-\log P(x))}{\partial \theta} = \sum_h P(H=h|X=x) \frac{\partial energy(x,h)}{\partial \theta} \mbox{ [this is the POSITIVE phase contribution] } \]
\[ - \sum_{h,x}  P(H=h,X=x) \frac{\partial energy(x,h)}{\partial \theta} \mbox{ [this is the NEGATIVE phase contribution] } \]

The derivation of this result is easy (we do a similar derivation for the restricted Boltzmann machine below).

The standard way to estimate the gradient while avoiding these sums is to run an MCMC scheme to obtain one or more samples from $P(h|x)$ (with $x$ drawn from the training set) and from $P(x,h)$.

Restricted Boltzmann machines

If we set the weights between $h_i$ and $h_j$ and between $x_i$ and $x_j$ to $0$, we obtain a Restricted Boltzmann Machine (RBM). The advantage of an RBM is that all the $H_i$'s become independent when conditioning on $X$, and (symmetrically) all the $X_i$'s become independent when conditioning on $H$.

Energy functions for Restricted Boltzmann Machines

(Note that $w_{ij} = w_{ji}$.)

  • energy term for binomial unit $i$ with value $v_i$ and inputs $u_j$ , parameters ($b_i$, $w_{i\cdot}$ ):

\[    -b_i v_i - \sum_j w_{ij} v_i u_j \]
\[       ==> P(v_i=1 | u) = \frac{exp(b_i + \sum_j w_{ij} u_j) }{ 1 + exp(b_i + \sum_j w_{ij} u_j) } = sigmoid\left(b_i + \sum_j w_{ij} u_j\right) \]

  • energy term for fixed-variance Gaussian unit $i$ with value $v_i$ and inputs $u_j$, parameters $(a_i,b_i, w_{i\cdot})$ :
\[ a_i^2 v_i^2 - b_i v_i  -  \sum_j w_{ij} v_i u_j \]
\[       ==> P(v_i | u) = \frac{1}{Z} exp\left(-a_i^2 v_i^2 + b_i v_i + \sum_j w_{ij} v_i u_j\right) = \frac{1}{Z} exp\left(-\frac{(v_i - \mu)^2}{2\sigma^2}\right) \]
\[       ==> P(v_i | u) = N(v_i; \mu, \sigma^2) \mbox{ with } \sigma^2 = \frac{1}{2 a_i^2}, \mu = \frac{ b_i + \sum_j w_{ij} u_j }{ 2 a_i^2 } \]

Note how the mean and variance blow up when $a_i$ is too small. We may want to use $(a_i + \epsilon)^2$ instead, with $\epsilon$ fixed.

  • energy term for softmax units $i$ with value $v_i$ and inputs $u_j$ , parameters $(b_i, w_{i\cdot})$ :
\[    -b_i v_i - \sum_j w_{ij} v_i u_j \]
\[       ==> P(v_i=1 | u) = \frac{exp(b_i + \sum_j w_{ij} u_j) }{ \sum_{i'} exp(b_{i'} + \sum_j w_{i'j} u_j) } = softmax\left(b_i + \sum_j w_{ij} u_j\right) \]
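
For concreteness, here is a minimal sketch (Python with numpy; function names are illustrative) of the three conditional distributions above, given an input vector u, a weight row w_i and a bias b_i:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def p_binomial(b_i, w_i, u):
    # P(v_i = 1 | u) for a binomial unit
    return sigmoid(b_i + w_i @ u)

def gaussian_params(a_i, b_i, w_i, u, eps=1e-4):
    # mean and variance of P(v_i | u) for a fixed-variance Gaussian unit;
    # (a_i + eps)**2 guards against the blow-up noted above for small a_i
    denom = 2.0 * (a_i + eps) ** 2
    return (b_i + w_i @ u) / denom, 1.0 / denom   # mu, sigma^2

def p_softmax(b, w, u):
    # P(v_i = 1 | u) over a group of softmax units; b is a vector and
    # w a matrix with one row per unit of the group
    a = b + w @ u
    a -= a.max()                                  # for numerical stability
    e = np.exp(a)
    return e / e.sum()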

Update rule

Likelihood gradient for an RBM with observed inputs $x$ and hidden outputs $y$:

use $Z = \sum_{x,y} exp(-energy(x,y))$

\[    P(x,y) = \frac{exp(-energy(x,y))}{Z} \]
\[    P(y|x) = \frac{exp(-energy(x,y)) }{ \sum_{\hat y} exp(-energy(x,\hat y)) } \]

For ANY energy-based (Boltzmann) distribution:

\[  \frac{\partial}{\partial \theta}(-\log P(x) ) = \frac{\partial}{\partial \theta} \left(- \log \sum_y P(x,y)\right) \]
\[                        = \frac{\partial}{\partial \theta}\left(- \log \sum_y \frac{exp(-energy(x,y))}{Z}\right) \]
\[                        = - \frac{Z}{\sum_y exp(-energy(x,y))} \left( \sum_y \frac{1}{Z} \frac{\partial exp(-energy(x,y))}{\partial \theta} - \sum_y \frac{exp(-energy(x,y))}{Z^2} \frac{\partial Z}{\partial \theta}\right) \]
\[                        = \sum_y \left(\frac{exp(-energy(x,y))}{\sum_{\hat y} exp(-energy(x,\hat y))} \frac{\partial energy(x,y)}{\partial \theta}\right) + \frac{1}{Z} \frac{\partial Z}{\partial \theta} \]
\[                        = \sum_y P(y|x) \frac{\partial energy(x,y)}{\partial \theta} - \frac{1}{Z} \sum_{x,y} exp(-energy(x,y)) \frac{\partial energy(x,y)}{\partial \theta} \]
\[                        = \sum_y P(y|x) \frac{\partial energy(x,y)}{\partial \theta} - \sum_{x,y} P(x,y) \frac{\partial energy(x,y)}{\partial \theta} \]
\[                        = E\left[\left. \frac{\partial energy(x,y)}{\partial \theta} \right| x \right] - E\left[ \frac{\partial energy(x,y)}{\partial \theta} \right] \]
where the first expectation is over $P(y|x)$ and the second is over the model's joint distribution $P(x,y)$
\[                        = \mbox{ ``positive phase contribution'' } - \mbox{ ``negative phase contribution'' } \]

The positive phase tries to lower the energy of observed $x$, while the negative phase tries to increase the energy of all $x \sim P$.

For an RBM, $P(y|x)$ factorizes into $\prod_i P(y_i|x)$, the energy is a sum of terms $energy_i(x,y_i)$, and $\theta=\{\theta_i\}$, so that

\[  \sum_y P(y|x) \frac{\partial energy(x,y)}{\partial \theta_i} = \sum_{y_i} P(y_i|x) \frac{\partial energy_i(x,y_i)}{\partial \theta_i} \]

with $\theta_i$ the parameters associated with $y_i$ .

With Contrastive Divergence we replace the expectation over $(x,y)$ by a sample taken after 1 (or more) Gibbs sampling steps

\[   \mbox{ observed } x = x^0  \stackrel{P(y|x^0)}{\longrightarrow} y^0 \stackrel{P(x|y^0)}{\longrightarrow} x^1 \stackrel{P(y|x^1)}{\longrightarrow} y^1 \]

and the pair $(x^1,y^1)$ serves as that sample in the case of 1 step (= ``CD1''). A sketch of the resulting update follows the list below.

  • output binomial unit $i$ <-> input binomial unit $j$
    • weight $w_{ij}$ :
      • positive phase contribution: $ P(y^0_i=1|x^0) 1 \times x^0_j + (1 - P(y^0_i=1|x^0)) 0 \times x^0_j = P(y^0_i=1|x^0) x^0_j $
      • negative phase contribution: $ P(y^1_i=1|x^1) 1 \times x^1_j + (1 - P(y^1_i=1|x^1)) 0 \times x^1_j = P(y^1_i=1|x^1) x^1_j $
    • bias $b_i$ :
      • positive phase contribution: $ P(y^0_i=1|x^0) $
      • negative phase contribution: $ P(y^1_i=1|x^1) $

  • output binomial unit $i$ <-> input Gaussian unit $j$
    • bias $b_i$ and weight $w_{ij}$ as above
    • parameter $a_j$ :
      • positive phase contribution: $2 a_j (x^0_j)^2$
      • negative phase contribution: $2 a_j (x^1_j)^2$

  • output softmax unit $i$ <-> input binomial unit $j$
    same formulas as for binomial units, except that $P(y_i=1|x)$ is computed differently (with softmax instead of sigmoid)
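
Putting these contributions together, here is a minimal sketch (Python with numpy; shapes, names and learning rate are illustrative, and the visible-bias update is the symmetric counterpart of the one for $b_i$) of one CD1 update for a binomial-binomial RBM:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_step(x0, W, b, c, lr=0.1):
    # x0: observed batch (n_batch, n_vis); W: (n_hid, n_vis) weights;
    # b: hidden biases (n_hid,); c: visible biases (n_vis,)
    p_y0 = sigmoid(x0 @ W.T + b)                        # P(y^0 = 1 | x^0)
    y0 = (rng.random(p_y0.shape) < p_y0).astype(float)  # sample y^0
    p_x1 = sigmoid(y0 @ W + c)                          # P(x = 1 | y^0)
    x1 = (rng.random(p_x1.shape) < p_x1).astype(float)  # sample x^1
    p_y1 = sigmoid(x1 @ W.T + b)                        # P(y^1 = 1 | x^1)
    # positive minus negative phase contributions, averaged over the batch
    n = x0.shape[0]
    W += lr * (p_y0.T @ x0 - p_y1.T @ x1) / n
    b += lr * (p_y0 - p_y1).mean(axis=0)
    c += lr * (x0 - x1).mean(axis=0)
    return W, b, c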

Using and training the last layers

Using

If we train the network in a supervised fashion, we introduce a layer containing the outputs (or targets), Y. Let's call the last layer L and the previous layer P, and define:

         [______ L ______]
            /         \
[_______ P _______] [___ Y ___]

R.V. (=sample) of the last layer = $L_i$

R.V. (=sample) of the previous (next-to-last) layer = $P_j$

R.V. of the supervised layer = $Y_k$

energy parameters between $P_j$ and $L_i$: $V, C$; energies $-V_{ij} L_i P_j - C_i L_i$

energy parameters between $L_i$ and $Y_k$: $W,B$; energies $-W_{ki} Y_k L_i - B_k Y_k$

"output" (expectation) of next-to-last layer = $p(P)$ (given the inputs of the network)

  • The activation of $Y$ is computed from $p(P)$, not $p(L)$:

\[actY_k = B_k + \sum_i softplus\left(W_{ki} + C_i + \sum_j V_{ij} p(P_j)\right)\]

with $ softplus: x \mapsto \log(1 + exp(x)) $

(see below for explanation)

  • The expectations of $Y$ are computed from the activations ($Y$ is a set of multinomial units):

\[ P(Y|inputs) = softmax(actY) \]

  • fprop = expectations(activations)
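
A minimal sketch (Python with numpy; names are illustrative) of this forward propagation:

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def fprop_Y(pP, V, C, W, B):
    # pP: expectations p(P_j), shape (n_P,); V: (n_L, n_P); C: (n_L,);
    # W: (n_Y, n_L); B: (n_Y,); returns P(Y | inputs)
    actL = C + V @ pP                          # C_i + sum_j V_ij p(P_j)
    actY = B + softplus(W + actL).sum(axis=1)  # B_k + sum_i softplus(W_ki + ...)
    e = np.exp(actY - actY.max())              # stable softmax
    return e / e.sum()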

Training

There are two ways of learning the parameters $V$, $C$, $W$ and $B$:

  • By simple gradient descent:
    • We compute the $ cost = - \log P(Y=onehot(k)|inputs)$, where $k$ is the observed target, and we backprop all the way.

  • By using contrastive divergence:
    • we consider the concatenation $[Y, P] = X$ as one big layer below $L$
    • we put $[onehot(k), p(P)] = x$ in $X$, as the input of $L$
    • $(X,L)$ forms an RBM, which we train using contrastive divergence

Why?

The output probabilities are computed as follows:

\[ P(Y=onehot(k)|inputs) = \frac{exp\left(B_k + \sum_i softplus\left(W_{ki} + C_i + \sum_j V_{ij} p(P_j)\right)\right)}{Z} \]

This formula can be derived by considering that P, L, and Y are binary random variables following the Boltzmann distribution with energy:

\[ E(L,Y,P) = -B'Y - Y'WL - C'L - P'V'L \]

During training, both P and Y are observed, so that E is linear in L (a sum of terms each involving a single $L_i$), i.e. P(L|P,Y) is a product of $P(L_i|P,Y)$: the $L_i$ are conditionally independent given P and Y.

This corresponds to an undirected graphical model with full connectivity between each $L_i$ and each $Y_k$ (and similarly between $L_i$ and each $P_j$), but no connection among the $L_i$ or among the $Y_k$'s. Because of this factorization we obtain that

\[ P(Y|P) = \sum_L \frac{exp(-E(L,Y,P))}{Z} \]

and

\[ \sum_L exp(-E(L,Y,P)) = exp(B'Y) \prod_i(exp(-E_i(1,Y,P)) + exp(-E_i(0,Y,P))) \]

where

\[ E_i(l,Y,P) = \mbox{ the term in ``$L_i=l$'' in the energy } \]
\[ = -l (\sum_k Y_k W_{ki} + C_i + \sum_j V_{ij} P_j) \]

Since $E_i(0,Y,P) = 0$, we obtain that

\[ \sum_L exp(-E(L,Y,P)) = exp(B'Y) exp\left(\sum_i \log(1 + exp(-E_i(1,Y,P)))\right) \]
\[ = exp\left(B'Y + \sum_i softplus\left(\sum_k Y_k W_{ki} + C_i + \sum_j V_{ij} P_j\right)\right) \]

which gives the above formula for P(Y=onehot(k)|inputs), if we replace P by its expectation p(P).
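
As a sanity check on this derivation, here is a minimal sketch (Python with numpy; sizes are illustrative) comparing the brute-force sum over L with the softplus closed form:

import itertools
import numpy as np

rng = np.random.default_rng(0)
nL, nY, nP = 4, 3, 5
B, C = rng.normal(size=nY), rng.normal(size=nL)
W, V = rng.normal(size=(nY, nL)), rng.normal(size=(nL, nP))
Y = np.eye(nY)[1]                                # Y = onehot(1)
P = rng.integers(0, 2, size=nP).astype(float)

def E(L):
    # E(L,Y,P) = -B'Y - Y'WL - C'L - P'V'L
    return -(B @ Y + Y @ W @ L + C @ L + P @ V.T @ L)

brute = sum(np.exp(-E(np.array(L)))
            for L in itertools.product([0, 1], repeat=nL))
closed = np.exp(B @ Y + np.log1p(np.exp(Y @ W + C + V @ P)).sum())
print(np.allclose(brute, closed))                # -> True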

-- PascalLamblin - 19 Jun 2007
