This is the RBM update procedure for binomial units. It also works for exponential and truncated exponential units, and for the linear parameters of a Gaussian unit (using the appropriate sampling procedure for Q and P). It can be readily adapted for the variance parameter of Gaussian units.
v[0]
is a sample from the training distribution for the RBM
epsilon
is a learning rate for the stochastic gradient descent in Contrastive Divergence
W
is the RBM weight matrix, of dimension (number of hidden units, number of input units)
b
is the RBM biases vector for hidden units
c
is the RBM biases vector for input units
RBMupdate(v[0], epsilon, W, b, c):
    for all hidden units i:
        compute Q(h[0][i] = 1 | v[0])
            # for binomial units, sigmoid(b[i] + sum_j(W[i][j] * v[0][j]))
        sample h[0][i] from Q(h[0][i] = 1 | v[0])
    for all visible units j:
        compute P(v[1][j] = 1 | h[0])
            # for binomial units, sigmoid(c[j] + sum_i(W[i][j] * h[0][i]))
        sample v[1][j] from P(v[1][j] = 1 | h[0])
    for all hidden units i:
        compute Q(h[1][i] = 1 | v[1])
            # for binomial units, sigmoid(b[i] + sum_j(W[i][j] * v[1][j]))
    W += epsilon * (h[0] * v[0]' - Q(h[1][.] = 1 | v[1]) * v[1]')
    b += epsilon * (h[0] - Q(h[1][.] = 1 | v[1]))
    c += epsilon * (v[0] - v[1])
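As a concrete illustration, here is a minimal NumPy sketch of this CD-1 update for binomial units. The names rbm_update and sigmoid and the module-level generator rng are our additions; the original pseudocode does not prescribe an implementation.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_update(v0, epsilon, W, b, c):
    # One contrastive divergence (CD-1) step for binomial units.
    # v0: (n_visible,) training sample; W: (n_hidden, n_visible);
    # b: (n_hidden,) hidden biases; c: (n_visible,) visible biases.
    # W, b and c are updated in place.
    q0 = sigmoid(b + W @ v0)                        # Q(h[0] = 1 | v[0])
    h0 = (rng.random(q0.shape) < q0).astype(v0.dtype)
    p1 = sigmoid(c + W.T @ h0)                      # P(v[1] = 1 | h[0])
    v1 = (rng.random(p1.shape) < p1).astype(v0.dtype)
    q1 = sigmoid(b + W @ v1)                        # Q(h[1] = 1 | v[1])
    W += epsilon * (np.outer(h0, v0) - np.outer(q1, v1))
    b += epsilon * (h0 - q1)
    c += epsilon * (v0 - v1)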
Train a DBN in a purely unsupervised way, with the greedy layer-wise procedure in which each added layer is trained as an RBM by contrastive divergence.
X
is the input training distribution for the network
epsilon
is a learning rate for the stochastic gradient descent in Contrastive Divergence
L
is the number of layers to train
n=(n[1], ..., n[L])
is the number of hidden units in each layer
W[i]
is the weight matrix for level i, for i from 1 to L
b[i]
is the bias vector for level i, for i from 0 to L
TrainUnsupervisedDBN(X, epsilon, L, n, W, b):
    initialize b[0] = 0
    for l = 1 to L:
        initialize W[l] = 0, b[l] = 0
        while not stopping criterion:
            sample g[0] = x from X
            for i = 1 to l-1:
                sample g[i] from Q(g[i] | g[i-1])
            RBMupdate(g[l-1], epsilon, W[l], b[l], b[l-1])
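A NumPy sketch of this greedy layer-wise loop, reusing the rbm_update, sigmoid and rng helpers from the sketch above. Two simplifying choices are ours: n is assumed to include the input size as n[0], and the stopping criterion is replaced by a fixed number of epochs over an array X of training samples.

def train_unsupervised_dbn(X, epsilon, L, n, n_epochs=10):
    # Greedy layer-wise pre-training.  X: (n_samples, n[0]) array;
    # n = [n[0], n[1], ..., n[L]] includes the input layer size.
    W = [None] + [np.zeros((n[l], n[l - 1])) for l in range(1, L + 1)]
    b = [np.zeros(n[l]) for l in range(L + 1)]
    for l in range(1, L + 1):
        for _ in range(n_epochs):                   # stand-in stopping criterion
            for x in X:
                g = x
                # Sample g[1] ... g[l-1] upward through the already-trained layers
                for i in range(1, l):
                    q = sigmoid(b[i] + W[i] @ g)
                    g = (rng.random(q.shape) < q).astype(x.dtype)
                rbm_update(g, epsilon, W[l], b[l], b[l - 1])
    return W, b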
After a DBN has been initialized by pre-training, this procedure will optimize all the parameters with respect to the supervised criterion C, using stochastic gradient descent.
Z
is the supervised training distribution for the DBN, with (input,target) samples (x,y)
C
is a training criterion: a function that takes a network output f(x) and a target y, and returns a scalar differentiable in f(x)
epsilon_C
is a learning rate for the stochastic gradient descent on supervised cost C
L
is the number of layers
n=(n[1], ..., n[L])
is the number of hidden units in each layer
W[i]
is the weight matrix for level i, for i from 1 to L
b[i]
is the bias vector for level i, for i from 0 to L
V
is a weight matrix for the supervised output layer of the network
DBNSupervisedFineTuning(Z, C, epsilon_C, L, n, W, b, V):
    Recursively define the mean-field propagation
        mu[i](x) = Expectation(g[i] | g[i-1] = mu[i-1](x))
    where mu[0](x) = x, and Expectation(g[i] | g[i-1] = mu[i-1](x)) is the
    expected value of g[i] under the RBM conditional distribution
    Q(g[i] | g[i-1]), when the values of g[i-1] are replaced by the
    mean-field values mu[i-1](x).
        # In the case where g[i] has binomial units,
        # Expectation(g[i][j] | g[i-1] = mu[i-1](x))
        #     = sigmoid(b[i][j] + sum_k(W[i][j][k] * mu[i-1][k](x)))
    Define the network output function f(x) = V * (mu[L](x)', 1)'
    Iteratively minimize the expected value of C(f(x), y) for pairs (x, y)
    sampled from Z, by tuning the parameters W, b and V. This can be done
    by stochastic gradient descent with learning rate epsilon_C, using an
    appropriate stopping criterion such as early stopping on a validation set.
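A NumPy sketch of the fine-tuning step, again building on the helpers above. As a concrete instance of the criterion C we pick the squared error ||f(x) - y||^2 / 2; the last column of V plays the role of the output bias, matching f(x) = V * (mu[L](x)', 1)'. The function names and the fixed epoch count are our choices, not part of the pseudocode.

def mean_field(x, W, b, L):
    # mu[0](x) = x; for binomial units, mu[i](x) = sigmoid(b[i] + W[i] mu[i-1](x))
    mus = [x]
    for i in range(1, L + 1):
        mus.append(sigmoid(b[i] + W[i] @ mus[-1]))
    return mus

def dbn_supervised_fine_tuning(Z, epsilon_C, L, W, b, V, n_epochs=10):
    # Stochastic gradient descent on C(f(x), y) = ||f(x) - y||^2 / 2.
    # Z: list of (x, y) pairs; V: (n_out, n[L] + 1), bias in the last column.
    for _ in range(n_epochs):
        for x, y in Z:
            mus = mean_field(x, W, b, L)
            top = np.append(mus[L], 1.0)            # (mu[L](x)', 1)'
            f = V @ top                             # network output f(x)
            delta = f - y                           # dC/df for squared error
            back = V[:, :-1].T @ delta              # backprop through V before updating it
            V -= epsilon_C * np.outer(delta, top)
            grad = back * mus[L] * (1.0 - mus[L])   # through the top sigmoid
            for i in range(L, 0, -1):
                gW, gb = np.outer(grad, mus[i - 1]), grad
                if i > 1:                           # propagate before updating W[i]
                    grad = (W[i].T @ grad) * mus[i - 1] * (1.0 - mus[i - 1])
                W[i] -= epsilon_C * gW
                b[i] -= epsilon_C * gb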
Train a DBN for a supervised learning task, by first performing pre-training of all layers (except the output weights V), followed by supervised fine-tuning to minimize a criterion C.
Z
is the supervised training distribution for the DBN, with (input,target) samples (x,y)
C
is a training criterion: a function that takes a network output f(x) and a target y, and returns a scalar differentiable in f(x)
epsilon_CD
is a learning rate for the stochastic gradient descent with Contrastive Divergence
epsilon_C
is a learning rate for the stochastic gradient descent on supervised cost C
L
is the number of layers
n=(n[1], ..., n[L])
is the number of hidden units in each layer
W[i]
is the weight matrix for level i, for i from 1 to L
b[i]
is the bias vector for level i, for i from 0 to L
V
is a weight matrix for the supervised output layer of the network
TrainSupervisedDBN(Z, C, epsilon_CD, epsilon_C, L, n, W, b, V):
    let X be the marginal distribution over the input part of Z
    TrainUnsupervisedDBN(X, epsilon_CD, L, n, W, b)
    DBNSupervisedFineTuning(Z, C, epsilon_C, L, n, W, b, V)
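Composing the two sketches above gives the full procedure. The toy usage at the end is purely illustrative, with arbitrary layer sizes and random binary data.

def train_supervised_dbn(Z, epsilon_CD, epsilon_C, L, n, n_out):
    X = np.array([x for x, _ in Z])                 # marginal over the input part of Z
    W, b = train_unsupervised_dbn(X, epsilon_CD, L, n)
    V = np.zeros((n_out, n[L] + 1))                 # output weights, bias in last column
    dbn_supervised_fine_tuning(Z, epsilon_C, L, W, b, V)
    return W, b, V

# Toy usage: a 2-layer DBN on random 8-bit inputs with a scalar target
Z = [(rng.integers(0, 2, 8).astype(float), np.array([1.0])) for _ in range(100)]
W, b, V = train_supervised_dbn(Z, 0.1, 0.1, L=2, n=[8, 6, 4], n_out=1)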
-- PascalLamblin - 21 Jun 2007