
Stacked Autoassociators

Description

The Stacked Autoassociators model is a neural network with sigmoidal activation nodes, which is trained in the following way:

  1. Greedy learning: for each layer (starting with the input layer)
    1. add a hidden layer
    2. train this layer so as to minimize a reconstruction error on the previous layer
  2. Fine-tuning: once all layers but the output layer have been added
    1. add output layer (with softmax activation function)
    2. train all layers using a supervised signal (NLL of target class)

Here is an illustration of the training process:

[Figure: saa_model.png — greedy layer-wise training followed by supervised fine-tuning]
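
Before the PLearn details, here is a minimal self-contained numpy sketch of the two training phases (the helper names, layer sizes and learning rate are mine, not PLearn's; fine-tuning is only outlined):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_layer_greedy(X, n_hidden, lr=0.1, n_epochs=10, rng=None):
    """Train one sigmoidal autoassociator with tied weights on X and
    return its parameters plus the hidden representation of X."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.01, size=(n_hidden, n_in))
    b_hid = np.zeros(n_hidden)
    b_rec = np.zeros(n_in)
    for _ in range(n_epochs):
        for x in X:
            h = sigmoid(W @ x + b_hid)       # encode
            r = sigmoid(W.T @ h + b_rec)     # decode with tied weights
            # For the cross-entropy reconstruction error, the gradient
            # w.r.t. the pre-sigmoid reconstruction is simply (r - x).
            d_rec = r - x
            d_hid = (W @ d_rec) * h * (1.0 - h)
            W -= lr * (np.outer(d_hid, x) + np.outer(h, d_rec))
            b_hid -= lr * d_hid
            b_rec -= lr * d_rec
    return W, b_hid, sigmoid(X @ W.T + b_hid)

# 1. Greedy learning: each layer is trained on the previous layer's output.
X = np.random.default_rng(1).random((100, 20))   # toy data in [0, 1]
representation, stack = X, []
for n_hidden in (16, 12, 8):
    W, b_hid, representation = train_layer_greedy(representation, n_hidden)
    stack.append((W, b_hid))
# 2. Fine-tuning (not shown): add a softmax output layer on top of the
#    stack and backpropagate the NLL of the target class through all layers.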

A reconstruction error for the autoassociator can easily be derived for any type of layer unit, based on a choice of distribution for these units. For example, by assuming Bernoulli distributions over the layer units and setting the parameters of these distributions to the reconstruction given by the autoassociator for that layer, the corresponding reconstruction error is simply the sum of the cross entropies between the layer units and their reconstructions. This reconstruction error can also be used for units taking values in the [0,1] interval; in that case, the reconstruction cost corresponds to a weighted likelihood.
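
With x denoting the layer units and r their reconstruction (both with values in [0,1]), this cost is -sum_i [ x_i log(r_i) + (1 - x_i) log(1 - r_i) ]. A minimal numpy sketch (not the PLearn code):

import numpy as np

def cross_entropy_reconstruction_error(x, r, eps=1e-12):
    """Sum of cross entropies between the layer units x and their
    reconstruction r, both arrays of values in [0, 1]."""
    r = np.clip(r, eps, 1.0 - eps)   # avoid log(0)
    return -np.sum(x * np.log(r) + (1.0 - x) * np.log(1.0 - r))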

Finally, we usually tie the encoding weights of an autoassociator (i.e. the weights going from the input to the hidden layer) to the transpose of the decoding weights (between the hidden layer and the reconstruction layer), just like in a Restricted Boltzmann Machine.
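
In code, the tying can be sketched as follows (a minimal numpy illustration of the convention used by the reconstruction_connections in the script of the Implementation section below; not the PLearn implementation):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x, W, b_hidden):
    return sigmoid(W @ x + b_hidden)             # encoding uses W

def decode(h, W, b_reconstruction):
    return sigmoid(W.T @ h + b_reconstruction)   # decoding uses W transposed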

Implementation

The Stacked Autoassociators model is implemented and available in the PLearn library, a C++ machine learning library. The code corresponds to the StackedAutoassociatorsNet class and is highly inspired by the DeepBeliefNet class. It supports the use of the RBMMatrixConnection and RBMLayer classes.

Here is an example of a PLearn script that instantiates a Stacked Autoassociators model with three hidden layers using this class:

StackedAutoassociatorsNet(
    seed = ${seed}
    training_schedule = [ 0 0 0 ]
    nstages = 0
    greedy_learning_rate = ${slr}
    greedy_decrease_ct = ${dc}
    fine_tuning_learning_rate = ${slr_sup}
    fine_tuning_decrease_ct = ${dc_sup}
    layers = [ 
      RBMBinomialLayer( size = ${inputsize};  )
      RBMBinomialLayer( size = ${nhid1}; )
      RBMBinomialLayer( size = ${nhid2}; )
      RBMBinomialLayer( size = ${nhid3}; )
      ]
    connections = [
      *1->RBMMatrixConnection( down_size = ${inputsize}; 
                           up_size = ${nhid1}; )
      *2->RBMMatrixConnection( down_size = ${nhid1}    ; 
                           up_size = ${nhid2}; )
      *3->RBMMatrixConnection( down_size = ${nhid2}    ; 
                           up_size = ${nhid3}; )
      ]
    
    reconstruction_connections = [
      RBMMatrixTransposeConnection( rbm_matrix_connection = *1 )
      RBMMatrixTransposeConnection( rbm_matrix_connection = *2 )
      RBMMatrixTransposeConnection( rbm_matrix_connection = *3 )
      ]
    
    final_module = ModuleStackModule(
      modules = [ 
        GradNNetLayerModule( 
          input_size = ${nhid3}
          output_size = ${noutputs}
          )
        SoftmaxModule( input_size = ${noutputs} )
        ]
      )
    final_cost = CombiningCostsModule(
      cost_weights = [ 1 0 ]
      sub_costs = [ 
        NLLCostModule( input_size = ${noutputs} ) 
        ClassErrorCostModule( input_size = ${noutputs} ) 
        ]
      )
    )

Here are the different parameters to define for this script:

  • slr and dc are the learning rate and decrease constant of the unsupervised greedy steps
  • slr_sup and dc_sup are the learning rate and decrease constant of the supervised fine-tuning step
  • inputsize, nhid1, nhid2, nhid3 and noutputs are the sizes of the input, hidden and output layers
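
Note that final_cost combines two costs with weights [ 1 0 ]: only the NLL is optimized, while the classification error is merely monitored. In numpy terms, the supervised head corresponds roughly to the following sketch (variable names are mine):

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())   # subtract the max for numerical stability
    return e / e.sum()

def supervised_costs(h, V, c, target):
    """h: top hidden representation; V, c: output layer parameters;
    target: index of the correct class."""
    p = softmax(V @ h + c)                        # softmax output layer
    nll = -np.log(p[target])                      # optimized (weight 1)
    class_error = float(np.argmax(p) != target)   # monitored only (weight 0)
    return nll, class_error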

In order to train the above Stacked Autoassociators network in a greedy, layer-wise, unsupervised fashion (stopping each layer's training when the training reconstruction error no longer improves sufficiently), and to apply early stopping on the validation set classification error when training all layers with the supervised signal, the following script can be used:

HyperLearner(

  tester = PTester(

    splitter =  ExplicitSplitter(
      splitsets = 1 3 [
        *1->AutoVMatrix( specification = "${train}"
                     inputsize = ${inputsize}
                     targetsize = 1
                     weightsize = 0)
        *1
        AutoVMatrix( specification = "${valid}"
                     inputsize = ${inputsize}
                     targetsize = 1
                     weightsize = 0)
        ]
   )
    statnames = [ 
      "E[test1.E[class_error]]" 
      "E[test2.E[class_error]]" 
      "E[test1.E[reconstruction_error_1]]" 
      "E[test1.E[reconstruction_error_2]]" 
      "E[test1.E[reconstruction_error_3]]"  
      ]
    save_learners = 0
    save_initial_tester = 0
    provide_learner_expdir = 1
    )
  
  option_fields = [ "training_schedule[0]" "training_schedule[1]" "training_schedule[2]" "nstages" ]

  dont_restart_upon_change = [ "training_schedule[0]" "training_schedule[1]" "training_schedule[2]" "nstages" ]

  learner = $INCLUDE{${saa_plearner_file}}
  
  strategy = [
    HyperOptimize(
      which_cost = 2
      oracle =
      EarlyStoppingOracle(
        option = "training_schedule[0]"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        relative_min_improvement = ${rmi}
        )
      )

    HyperOptimize(
      which_cost = 3
      oracle =
      EarlyStoppingOracle(
        option = "training_schedule[1]"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        relative_min_improvement = ${rmi}
        )
      )

    HyperOptimize(
      which_cost = 4
      oracle =
      EarlyStoppingOracle(
        option = "training_schedule[2]"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        relative_min_improvement = ${rmi}
        )
      )

    HyperOptimize(
      which_cost = 1
      oracle =
      EarlyStoppingOracle(
        option = "nstages"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        max_degraded_steps = ${mds}
        )
      )
    ]
  )

Here are the different parameters to define in order to use this script:

  • saa_plearner_file is the name of the text file containing the previous StackedAutoassociatorsNet script code
  • train and valid are the train and validation sets
  • stepsize is the number of stages (samples) seen in an epoch
  • max_stage is the maximum number of stages (samples) of the supervised fine-tuning step
  • rmi is the minimum relative improvement in training reconstruction error required for a greedy unsupervised training step to continue
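
The stopping criteria set up by the EarlyStoppingOracle calls amount to the following logic (a sketch of one plausible reading of the relative_min_improvement and max_degraded_steps options, not the PLearn code):

def should_stop_greedy(errors, rmi):
    """Stop a greedy step once the training reconstruction error no
    longer improves by at least a fraction rmi between evaluations."""
    if len(errors) < 2:
        return False
    previous, current = errors[-2], errors[-1]
    return (previous - current) / abs(previous) < rmi

def should_stop_fine_tuning(errors, mds):
    """Stop fine-tuning after mds consecutive evaluations without a new
    best validation classification error."""
    best_index = errors.index(min(errors))
    return len(errors) - 1 - best_index >= mds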

The StackedAutoassociatorsNet class can be found in $PLEARNDIR/plearn_learners/online.
