
Deep Belief Networks

Deep Belief Networks (here referred to as DBNs) are deep neural networks trained in a greedy, layer-wise fashion. They were introduced by Hinton et al. (2006). See this review paper for a detailed introduction to such models.

Each layer of the network models the distribution of its input, using unsupervised training of a Restricted Boltzmann Machine (RBM):

  • the hidden units $H$ act as latent causes of the input $X$,
  • the posterior $P(h|x)$ serves as the learned representation of $x$.

The unsupervised, greedy layer-wise training serves as initialization, replacing the traditional random initialization of multi-layer networks; the whole network can then be fine-tuned on a supervised task.
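
As an illustration of the overall loop, here is a minimal Python sketch, not the PLearn implementation; the train_rbm trainer is passed in as an assumption (for instance, the CD-1 update sketched further below):

    import numpy as np

    def pretrain_dbn(data, hidden_sizes, train_rbm):
        # Greedy layer-wise pre-training.  train_rbm(x, n_hidden) is any
        # RBM trainer returning (W, b, c); it is a hypothetical helper here.
        params = []
        layer_input = data                          # (n_examples, n_visible)
        for n_hidden in hidden_sizes:
            W, b, c = train_rbm(layer_input, n_hidden)   # unsupervised phase
            params.append((W, b, c))
            # the representation P(h|x) becomes the input of the next layer
            layer_input = 1.0 / (1.0 + np.exp(-(layer_input @ W.T + c)))
        return params   # stacked RBM parameters initialize the deep network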

[Figure DBNs.png: Layer-wise training of a DBN]

Restricted Boltzmann Machines

[Figure RBM.png: A Restricted Boltzmann Machine]

The probability of each joint state $(x,h)$ that a Boltzmann Machine can take is defined by its energy function ${\cal E}(x,h)$:

\[ P(X=x,H=h) \propto  e^{-{\cal E}(x,h)} = e^{x'b + h'c + h'W x} \]
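
Concretely, the joint probability (up to the intractable normalization constant) follows directly from the energy; a minimal numpy sketch, assuming $W$ has shape (hidden, visible):

    import numpy as np

    def energy(x, h, W, b, c):
        # E(x, h) = -(x'b + h'c + h'Wx), for binary vectors x and h
        return -(x @ b + h @ c + h @ W @ x)

    def unnormalized_prob(x, h, W, b, c):
        # P(X=x, H=h) up to the partition function Z
        return np.exp(-energy(x, h, W, b, c))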

Inference is easy: $P(H|X)$ factorizes, because the hidden units are conditionally independent given the input ($H_i \perp H_j \mid X$ for $i \neq j$):

\[ P(X_j=1|H=h) = \mathrm{sigmoid}(b_j + \sum_i h_i W_{ij}) \]
\[ Q(H_i=1|X=x) = \mathrm{sigmoid}(c_i + \sum_j  W_{ij} x_j) .\]
In particular, there is no explaining away between the hidden units.
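
Both conditionals translate directly into sampling routines; continuing the numpy sketch above (the Bernoulli sampling is the standard choice for binary units):

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sample_h_given_x(x, W, c, rng):
        # Q(H_i = 1 | x) = sigmoid(c_i + sum_j W_ij x_j)
        q = sigmoid(c + W @ x)
        return q, (rng.random(q.shape) < q).astype(float)

    def sample_x_given_h(h, W, b, rng):
        # P(X_j = 1 | h) = sigmoid(b_j + sum_i h_i W_ij)
        p = sigmoid(b + W.T @ h)
        return p, (rng.random(p.shape) < p).astype(float)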

Training is easy: Contrastive Divergence (Hinton 2002)

Let $x_0$ be the observed input. Consider the following Gibbs sampling Markov chain:

\[ x_0 \stackrel{Q(h_0|x_0)}{\longrightarrow} h_0 \stackrel{P(x_1|h_0)}{\longrightarrow} x_1  \stackrel{Q(h_1|x_1)}{\longrightarrow} h_1 \ldots \]

Then, for $k\rightarrow\infty$:

\[ \frac{\partial \log P(x_0)}{\partial \theta} =     - E_{h_0}\left[\left.\frac{\partial {\cal E}(x_0, h_0)}{\partial \theta} \right| x_0 \right]     + E_{x_k, h_k} \left[ \frac{\partial {\cal E}(x_k, h_k)}{\partial \theta}\right] \]

where the expectation over $h_0$ (given $x_0$) can be computed exactly, and sampling $x_k$ and $h_k$ remains cheap when $k$ is small.

Truncating the chain after $k$ steps gives the contrastive divergence (CD-$k$) update, and even $k=1$ works well in practice (Carreira-Perpiñán and Hinton, 2005).
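
Put together, a single CD-1 step looks as follows; this is a sketch built on the helpers above (the learning rate and in-place updates are illustrative choices, not the PLearn code):

    def cd1_update(x0, W, b, c, lr, rng):
        # one CD-1 step on a single example x0, updating W, b, c in place
        q0, h0 = sample_h_given_x(x0, W, c, rng)   # positive phase: Q(h|x0)
        _,  x1 = sample_x_given_h(h0, W, b, rng)   # one Gibbs step: reconstruction
        q1, _  = sample_h_given_x(x1, W, c, rng)   # Q(h|x1) on the reconstruction
        # positive minus negative statistics approximate the gradient above
        W += lr * (np.outer(q0, x0) - np.outer(q1, x1))
        b += lr * (x0 - x1)
        c += lr * (q0 - q1)

With rng = np.random.default_rng(), repeated calls to cd1_update over the training set implement the unsupervised phase for one layer. Using the expected activations q0 and q1 rather than binary samples in the statistics reduces the variance of the update.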

For more detailed equations, and extensions to Gaussian and truncated-exponential units, please see DBNEquations.

Implementation

The complete procedure for training a whole DBN is detailed on DBNPseudoCode.

It was implemented using the PLearn machine learning library. The class for a DBN is DeepBeliefNetwork; it relies on several other classes deriving from RBMLayer (RBMBinomialLayer, RBMGaussianLayer, RBMTruncExpLayer, and RBMMixedLayer) and from RBMConnection (RBMMatrixConnection, RBMMixedConnection).

Examples

Here are two examples of PLearn scripts implementing Deep Belief Nets, applied to the MNIST digit recognition task.

You will need to change the path to your data and the hyper-parameters (and possibly the network architecture), either by editing the args class at the beginning of the script, or by passing the appropriate command-line arguments (see the scripts for more details).
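
For illustration only, such an args class might look like the sketch below; the actual attribute names and default values are the ones defined in each script, so check there before editing:

    # hypothetical sketch of an 'args' class like the ones at the top of the
    # example scripts; the real attribute names are defined in the scripts
    class args:
        data_path = "/path/to/mnist"        # path to the MNIST data files
        learning_rate = 0.01                # learning rate for the RBM updates
        hidden_sizes = [500, 500, 2000]     # network architecture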

-- PascalLamblin - 22 Jun 2007

Topic attachments

  • example_mnist.pyplearn: example of a .pyplearn script implementing a Deep Belief Net, applied to the MNIST digit recognition dataset
  • example_mnist_earlystopping.pyplearn: same, with early stopping on a validation set during the supervised phases
  • DBNs.png: layer-wise training of a DBN
  • RBM.png: a Restricted Boltzmann Machine