Pseudo-code for training Deep Belief Networks

Training of Restricted Boltzmann Machines

This is the RBM update procedure for binomial units. It also works for exponential and truncated exponential units, and for the linear parameters of a Gaussian unit (using the appropriate sampling procedure for Q and P). It can be readily adapted for the variance parameter of Gaussian units.

  • v[0] is a sample from the training distribution for the RBM
  • epsilon is a learning rate for the stochastic gradient descent in Contrastive Divergence
  • W is the RBM weight matrix, of dimension (number of hidden units, number of inputs)
  • b is the RBM biases vector for hidden units
  • c is the RBM biases vector for input units

RBMupdate(v[0], epsilon, W, b, c):

    for all hidden units i:
        compute Q(h[0][i] = 1 | v[0]) # for binomial units, sigmoid(b[i] + sum_j(W[i][j] * v[0][j]))
        sample h[0][i] from Q(h[0][i] = 1 | v[0])

    for all visible units j:
        compute P(v[1][j] = 1 | h[0]) # for binomial units, sigmoid(c[j] + sum_i(W[i][j] * h[0][i]))
        sample v[1][j] from P(v[1][j] = 1 | h[0])

    for all hidden units i:
        compute Q(h[1][i] = 1 | v[1]) # for binomial units, sigmoid(b[i] + sum_j(W[i][j] * v[1][j]))

    W += epsilon * (h[0] * v[0]' - Q(h[1][.] = 1 | v[1]) * v[1]')
    b += epsilon * (h[0] - Q(h[1][.] = 1 | v[1]))
    c += epsilon * (v[0] - v[1])
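
The following is a minimal NumPy sketch of RBMupdate for binomial units with one step of Contrastive Divergence (CD-1). The function and variable names, and the use of NumPy, are illustrative choices for this sketch, not part of the original pseudo-code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_update(v0, epsilon, W, b, c, rng=np.random):
    # Positive phase: compute Q(h[0] = 1 | v[0]) and sample h[0]
    q_h0 = sigmoid(b + W @ v0)                              # shape: (n_hidden,)
    h0 = (rng.uniform(size=q_h0.shape) < q_h0).astype(v0.dtype)

    # Negative phase: sample v[1] from P(v[1] | h[0]), then compute Q(h[1] = 1 | v[1])
    p_v1 = sigmoid(c + W.T @ h0)                            # shape: (n_visible,)
    v1 = (rng.uniform(size=p_v1.shape) < p_v1).astype(v0.dtype)
    q_h1 = sigmoid(b + W @ v1)

    # Contrastive Divergence parameter updates, applied in place
    W += epsilon * (np.outer(h0, v0) - np.outer(q_h1, v1))
    b += epsilon * (h0 - q_h1)
    c += epsilon * (v0 - v1)

Because the updates are made in place, the caller's W, b and c arrays are modified directly, which matches how RBMupdate is used by the procedures below.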

Pre-training of Deep Belief Networks (Unsupervised)

Train a DBN in a purely unsupervised way, with the greedy layer-wise procedure in which each added layer is trained as an RBM by contrastive divergence.

  • X is the input training distribution for the network
  • epsilon is a learning rate for the stochastic gradient descent in Contrastive Divergence
  • L is the number of layers to train
  • n=(n[1], ...,n[L]) is the number of hidden units in each layer
  • W[i] is the weight matrix for level i, for i from 1 to L
  • b[i] is the bias vector for level i, for i from 0 to L

PreTrainUnsupervisedDBN(X, epsilon, L, n, W, b):
    initialize b[0]=0
    for l=1 to L:
        initialize W[l]=0, b[l]=0
        while not stopping criterion:
            sample g[0]=x from X
            for i=1 to l-1:
                sample g[i] from Q(g[i]|g[i-1])
            RBMupdate(g[l-1], epsilon, W[l], b[l], b[l-1])
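
As an illustration, here is a NumPy sketch of the greedy layer-wise loop, reusing rbm_update from the sketch above. The sample_x callable (returning one training input vector), the fixed number of updates standing in for the stopping criterion, and the up-front zero initialization of every W[l] and b[l] are assumptions made for this sketch.

import numpy as np

def sample_binomial(q, rng=np.random):
    # draw independent binomial (Bernoulli) samples with means q
    return (rng.uniform(size=q.shape) < q).astype(np.float64)

def pretrain_unsupervised_dbn(sample_x, epsilon, L, n, n_updates=10000, rng=np.random):
    n_input = sample_x().shape[0]
    sizes = [n_input] + list(n)                     # sizes[0] = inputs, sizes[l] = n[l]
    W = [None] + [np.zeros((sizes[l], sizes[l - 1])) for l in range(1, L + 1)]
    b = [np.zeros(sizes[l]) for l in range(L + 1)]  # b[0] stays at 0
    for l in range(1, L + 1):
        for _ in range(n_updates):                  # stand-in for the stopping criterion
            g = sample_x()                          # g[0] = x
            for i in range(1, l):                   # sample g[i] from Q(g[i] | g[i-1])
                q = 1.0 / (1.0 + np.exp(-(b[i] + W[i] @ g)))
                g = sample_binomial(q, rng)
            rbm_update(g, epsilon, W[l], b[l], b[l - 1], rng)
    return W, b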

Supervised Training of Deep Belief Network

After a DBN has been initialized by pre-training, this procedure will optimize all the parameters with respect to the supervised criterion C, using stochastic gradient descent.

  • Z is the supervised training distribution for the DBN, with (input,target) samples (x,y)
  • C is a training criterion, a function that takes a network output f(x) and a target y and returns a scalar differentiable in f(x)
  • epsilon_C is a learning rate for the stochastic gradient descent on supervised cost C
  • L is the number of layers
  • n=(n[1], ..., n[L]) is the number of hidden units in each layer
  • W[i] is the weight matrix for level i, for i from 1 to L
  • b[i] is the bias vector for level i, for i from 0 to L
  • V is a weight matrix for the supervised output layer of the network
  • c is the bias vector for the supervised output layer

DBNSupervisedFineTuning(Z, C, epsilon_C, L, n, W, b, V, c):
    Recursively define mean-field propagation mu[i](x)=Expectation(g[i]|g[i-1]=mu[i-1](x))
        where mu[0](x)=x, and Expectation(g[i]|g[i-1]=mu[i-1]) is the expected value of g[i] under the RBM conditional distribution Q(g[i]|g[i-1]),
            when the values of g[i-1] are replaced by the mean-field values mu[i-1](x)
    # In the case where g[i] has binomial units:
    # Expectation(g[i][j]|g[i-1]=mu[i-1]) = sigmoid(b[i][j] + sum_k(W[i][j][k] * mu[i-1][k](x)))
    Define the network output function f(x) = V * mu[L](x)' + c
    Iteratively minimize the expected value of C(f(x),y)
        for pairs (x,y) sampled from Z by tuning parameters W, b, V, c.
        This can be done by stochastic gradient descent with learning rate epsilon_C,
        using an appropriate stopping criterion such as early stopping on a validation set.
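
Below is a NumPy sketch of the mean-field propagation and of a single stochastic gradient step of fine-tuning, continuing the conventions of the sketches above. The squared-error criterion is only one example of a differentiable C (it determines the dC_df line and nothing else); it and the helper names are assumptions, not part of the original pseudo-code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_mean_field(x, L, W, b):
    # mu[0](x) = x, then mu[i](x) = sigmoid(b[i] + W[i] mu[i-1](x)) for binomial units
    mus = [x]
    for i in range(1, L + 1):
        mus.append(sigmoid(b[i] + W[i] @ mus[-1]))
    return mus

def fine_tune_step(x, y, epsilon_C, L, W, b, V, c):
    mus = dbn_mean_field(x, L, W, b)
    f = V @ mus[L] + c                              # network output f(x)
    # Example criterion (an assumption): C(f, y) = 0.5 * ||f - y||^2, so dC/df = f - y
    dC_df = f - y
    # delta = dC/d(pre-activation of layer L), computed with V before its update
    delta = (V.T @ dC_df) * mus[L] * (1.0 - mus[L])
    V -= epsilon_C * np.outer(dC_df, mus[L])
    c -= epsilon_C * dC_df
    # back-propagate through the sigmoid layers, updating W[i] and b[i]
    for i in range(L, 0, -1):
        if i > 1:
            # delta for the layer below, using W[i] before its update
            next_delta = (W[i].T @ delta) * mus[i - 1] * (1.0 - mus[i - 1])
        W[i] -= epsilon_C * np.outer(delta, mus[i - 1])
        b[i] -= epsilon_C * delta
        if i > 1:
            delta = next_delta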

Global Training Procedure

Train a DBN for a supervised learning task by first pre-training all layers (except the output weights V), then performing supervised fine-tuning to minimize a criterion C.

  • Z is the supervised training distribution for the DBN, with (input,target) samples (x,y)
  • C is a training criterion, a function that takes a network output f(x) and a target y and returns a scalar differentiable in f(x)
  • epsilon_CD is a learning rate for the stochastic gradient descent with Contrastive Divergence
  • epsilon_C is a learning rate for the stochastic gradient descent on supervised cost C
  • L is the number of layers
  • n=(n[1], ..., n[L]) is the number of hidden units in each layer
  • W[i] is the weight matrix for level i, for i from 1 to L
  • b[i] is the bias vector for level i, for i from 0 to L
  • V is a weight matrix for the supervised output layer of the network
  • c is the bias vector for the supervised output layer

TrainSupervisedDBN(Z, C, epsilon_CD, epsilon_C, L, n, W, b, V, c):
    let X be the marginal over the input part of Z
    PreTrainUnsupervisedDBN(X, epsilon_CD, L, n, W, b)
    DBNSupervisedFineTuning(Z, C, epsilon_C, L, n, W, b, V, c)
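
A sketch of this global driver, tying together the two sketches above. The sample_xy callable (returning one (x, y) pair from Z), the explicit output dimension n_out, the zero initialization of V and c, and the fixed number of fine-tuning updates are assumptions; the criterion C is the squared-error example baked into fine_tune_step.

import numpy as np

def train_supervised_dbn(sample_xy, epsilon_CD, epsilon_C, L, n, n_out, n_updates=10000):
    # X, the marginal over the input part of Z
    sample_x = lambda: sample_xy()[0]
    W, b = pretrain_unsupervised_dbn(sample_x, epsilon_CD, L, n, n_updates)
    V = np.zeros((n_out, n[-1]))                    # supervised output layer
    c = np.zeros(n_out)
    for _ in range(n_updates):                      # stand-in for the stopping criterion
        x, y = sample_xy()
        fine_tune_step(x, y, epsilon_C, L, W, b, V, c)
    return W, b, V, c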

Alternative (Supervised) Training of Deep Belief Networks' Last Layer

When the units of the last layer are binomial and the target is a class label, instead of training the last layer of a DBN by unsupervised Contrastive Divergence, we can train a joint RBM with (g[L-1], y) as input. In TrainSupervisedDBN, we then replace PreTrainUnsupervisedDBN with PreTrainSupervisedDBN.

The network configuration is the following:

         [______ L ______]
            /         \
[______ L-1 ______] [___ Y ___]

Train the first layers of a DBN in a purely unsupervised way, with the greedy layer-wise procedure in which each added layer is trained as an RBM by contrastive divergence. Then, train the last layer as an RBM modelling the joint distribution of the previous layer and the target.

  • Z is the supervised training distribution for the DBN, with (input,target) samples (x,y)
  • epsilon is a learning rate for the stochastic gradient descent in Contrastive Divergence
  • L is the total number of layers to train
  • n=(n[1], ...,n[L]) is the number of hidden units in each layer
  • n_t is the number of different class labels
  • W[i] is the weight matrix for level i, for i from 1 to L
  • b[i] is the bias vector for level i, for i from 0 to L
  • V is the weight matrix between the last layer and target layer
  • c is the bias vector for target layer

PreTrainSupervisedDBN(Z, epsilon, L, n, W, b, V, c):
    let X be the marginal over the input part of Z
    initialize b[0]=0
    for l=1 to L-1:
        initialize W[l]=0, b[l]=0
        while not stopping criterion:
            sample g[0]=x from X
            for i=1 to l-1:
                sample g[i] from Q(g[i]|g[i-1])
            RBMupdate(g[l-1], epsilon, W[l], b[l], b[l-1])
    initialize W[L]=0, b[L]=0, V=0, c=0
    while not stopping criterion:
        sample (g[0]=x, y) from Z
        for i=1 to L-1:
            sample g[i] from Q(g[i]|g[i-1])
        RBMupdate((g[L-1],y), epsilon, (W[L],V'), b[L], (b[L-1],c))
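
The first, purely unsupervised loop is the same as in PreTrainUnsupervisedDBN; the new part is the joint RBM over (g[L-1], y). Here is a NumPy sketch of that last step only, reusing rbm_update from the first sketch. The one-hot encoding of the class label and the explicit copy-back of the concatenated blocks are assumptions about how the joint visible vector is represented.

import numpy as np

def joint_rbm_update(g_prev, y, epsilon, W_L, V, b_L, b_prev, c, n_t, rng=np.random):
    # one-hot encode the class label so it can act as binomial visible units
    y_onehot = np.zeros(n_t)
    y_onehot[y] = 1.0
    v = np.concatenate([g_prev, y_onehot])          # joint visible vector (g[L-1], y)
    W_joint = np.concatenate([W_L, V.T], axis=1)    # (W[L], V'), shape (n[L], n[L-1] + n_t)
    c_joint = np.concatenate([b_prev, c])           # (b[L-1], c)
    rbm_update(v, epsilon, W_joint, b_L, c_joint, rng)
    # np.concatenate copies, so write the updated blocks back into W[L], V, b[L-1], c
    k = g_prev.shape[0]
    W_L[:] = W_joint[:, :k]
    V[:] = W_joint[:, k:].T
    b_prev[:] = c_joint[:k]
    c[:] = c_joint[k:]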

-- PascalLamblin - 21 Jun 2007
