
Stacked Autoassociators

Description

The Stacked Autoassociators model is a neural network with sigmoidal activation nodes, which is trained in the following way:

  1. Greedy learning: for each layer (starting with the input layer)
    1. add a hidden layer
    2. train this layer so as to minimize a reconstruction error on the previous layer
  2. Fine-tuning: once all layers but the output layer have been added
    1. add output layer (with softmax activation function)
    2. train all layers using a supervised signal (NLL of target class)

Here is an illustration of the training process:

saa_model.png

A reconstruction error for the autoassociator can easily be derived for any type of layer unit, based on a choice of distribution for these units. For example, by assuming Bernoulli distributions over the layer units and setting the parameters of these distributions to the reconstruction given by the autoassociator for that layer, the corresponding reconstruction error is simply the sum of the cross entropies between the layer units and their reconstruction. This reconstruction error can also be used for units taking values in the [0,1] interval; in that case, the reconstruction cost corresponds to a weighted likelihood.
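
For reference, writing x_i for the units of the layer being reconstructed and \hat{x}_i for their reconstruction, this reconstruction error is (in LaTeX notation):

    C(\mathbf{x}, \hat{\mathbf{x}}) = - \sum_i \left[ x_i \log \hat{x}_i + (1 - x_i) \log (1 - \hat{x}_i) \right]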

Finally, we usually tie the encoding weights of an autoassociator (i.e. the weights going from the input to the hidden layer) to the transpose of the decoding weights (between the hidden layer and the reconstruction layer), just like in a Restricted Boltzmann Machine.
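
As a concrete illustration, here is a minimal Python/NumPy sketch of this greedy phase: each autoassociator uses tied weights (decoding with the transpose of the encoding weights) and is trained to minimize the cross-entropy reconstruction error of the layer below. It is purely illustrative (all function and variable names are made up for this example) and is not the PLearn implementation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoassociator(data, n_hidden, lr=0.05, n_epochs=10, seed=1234):
    # Train one autoassociator layer by stochastic gradient descent on the
    # cross-entropy reconstruction error, with tied (transposed) decoding weights.
    rng = np.random.RandomState(seed)
    n_in = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_in, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(n_epochs):
        for x in data:
            h = sigmoid(x @ W + b)          # encode
            x_rec = sigmoid(h @ W.T + c)    # decode with the transposed weights
            d_rec = x_rec - x               # gradient w.r.t. the reconstruction pre-activation
            d_h = (d_rec @ W) * h * (1.0 - h)
            W -= lr * (np.outer(x, d_h) + np.outer(d_rec, h))
            b -= lr * d_h
            c -= lr * d_rec
    return W, b

def greedy_pretrain(data, layer_sizes):
    # Stack autoassociators: each new layer is trained to reconstruct the
    # representation produced by the previously trained layers.
    params, rep = [], data
    for n_hidden in layer_sizes:
        W, b = train_autoassociator(rep, n_hidden)
        params.append((W, b))
        rep = sigmoid(rep @ W + b)
    return params

# Fine-tuning (not shown) would then add a softmax output layer on top of the
# last hidden layer and backpropagate the NLL of the target class through all layers.

In the PLearn script below, the encoding weights correspond to the RBMMatrixConnection objects and the tied decoding weights to the RBMMatrixTransposeConnection objects that refer back to them.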

Implementation

The Stacked Autoassociators model was implemented and is available in the PLearn library, a C++ machine learning library. The code corresponds to the StackedAutoassociatorsNet class and is strongly inspired by the DeepBeliefNet class. It supports the usage of the RBMMatrixConnection and RBMLayer classes.

Here is an example of a PLearn script instantiating a Stacked Autoassociators model with 3 hidden layers using this class:

StackedAutoassociatorsNet(
    seed = ${seed}
    training_schedule = [ 0 0 0 ]
    nstages = 0
    greedy_learning_rate = ${slr}
    greedy_decrease_ct = ${dc}
    fine_tuning_learning_rate = ${slr_sup}
    fine_tuning_decrease_ct = ${dc_sup}
    layers = [ 
      RBMBinomialLayer( size = ${inputsize};  )
      RBMBinomialLayer( size = ${nhid1}; )
      RBMBinomialLayer( size = ${nhid2}; )
      RBMBinomialLayer( size = ${nhid3}; )
      ]
    connections = [
      *1->RBMMatrixConnection( down_size = ${inputsize}; 
                           up_size = ${nhid1}; )
      *2->RBMMatrixConnection( down_size = ${nhid1}    ; 
                           up_size = ${nhid2}; )
      *3->RBMMatrixConnection( down_size = ${nhid2}    ; 
                           up_size = ${nhid3}; )
      ]
    
    reconstruction_connections = [
      RBMMatrixTransposeConnection( rbm_matrix_connection = *1 )
      RBMMatrixTransposeConnection( rbm_matrix_connection = *2 )
      RBMMatrixTransposeConnection( rbm_matrix_connection = *3 )
      ]
    
    final_module = ModuleStackModule(
      modules = [ 
        GradNNetLayerModule( 
          input_size = ${nhid3}
          output_size = ${noutputs}
          )
        SoftmaxModule( input_size = ${noutputs} )
        ]
      )
    final_cost = CombiningCostsModule(
      cost_weights = [ 1 0 ]
      sub_costs = [ 
        NLLCostModule( input_size = ${noutputs} ) 
        ClassErrorCostModule( input_size = ${noutputs} ) 
        ]
      )
    )

Here are the different parameters to define for this script:

  • slr and dc are the learning rate and decrease constant of the unsupervised greedy steps
  • slr_sup and dc_sup are the learning rate and decrease constant of the supervised fine-tuning step
  • inputsize, nhid1, nhid2, nhid3 and noutputs are the sizes of the input, hidden and output layers

In order to train the above Stacked Autoassociators network in a greedy, layer-wise and unsupervised fashion (stopping the training of each layer when its training reconstruction error no longer improves sufficiently), and to use early stopping based on the validation set classification error when training all layers with the supervised signal, the following script can be used:

HyperLearner(

  tester = PTester(

    splitter =  ExplicitSplitter(
      splitsets = 1 3 [
        *1->AutoVMatrix( specification = "${train}"
                     inputsize = ${inputsize}
                     targetsize = 1
                     weightsize = 0)
        *1
        AutoVMatrix( specification = "${valid}"
                     inputsize = ${inputsize}
                     targetsize = 1
                     weightsize = 0)
        ]
   )
    statnames = [ 
      "E[test1.E[class_error]]" 
      "E[test2.E[class_error]]" 
      "E[test1.E[reconstruction_error_1]]" 
      "E[test1.E[reconstruction_error_2]]" 
      "E[test1.E[reconstruction_error_3]]"  
      ]
    save_learners = 0
    save_initial_tester = 0
    provide_learner_expdir = 1
    )
  
  option_fields = [ "training_schedule[0]" "training_schedule[1]" "training_schedule[2]" "nstages" ]

  dont_restart_upon_change = [ "training_schedule[0]" "training_schedule[1]" "training_schedule[2]" "nstages" ]

  learner = $INCLUDE{${saa_plearner_file}}
  
  strategy = [
    HyperOptimize(
      which_cost = 2
      oracle =
      EarlyStoppingOracle(
        option = "training_schedule[0]"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        relative_min_improvement = ${rmi}
        )
      )

    HyperOptimize(
      which_cost = 3
      oracle =
      EarlyStoppingOracle(
        option = "training_schedule[1]"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        relative_min_improvement = ${rmi}
        )
      )

    HyperOptimize(
      which_cost = 4
      oracle =
      EarlyStoppingOracle(
        option = "training_schedule[2]"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        relative_min_improvement = ${rmi}
        )
      )

    HyperOptimize(
      which_cost = 1
      oracle =
      EarlyStoppingOracle(
        option = "nstages"
        range = [ ${stepsize} ${max_stage} ${stepsize} ]
        max_degraded_steps = ${mds}
        )
      )
    ]
  )

Here are the different parameters to define in order to use this script:

  • saa_plearner_file is the name of the text file containing the StackedAutoassociatorsNet script shown above
  • train and valid are the train and validation sets
  • stepsize is the number of stages (samples) seen in an epoch
  • max_stage is the maximum number of stages (samples) of the supervised fine-tuning step
  • rmi is the minimum relative improvement in training reconstruction error required for a greedy unsupervised training step to continue
  • mds is the maximum number of consecutive steps without improvement of the validation classification error tolerated before stopping the supervised fine-tuning
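
For illustration, here is a rough Python sketch of the two stopping criteria used above (relative_min_improvement for the greedy unsupervised phases and max_degraded_steps for the supervised fine-tuning); this only conveys the idea, and the exact semantics of PLearn's EarlyStoppingOracle options may differ in their details:

def stop_on_relative_improvement(costs, rmi):
    # Greedy phases: stop when the monitored cost (here, the training
    # reconstruction error) no longer improves by at least a fraction rmi
    # of its previous value.
    if len(costs) < 2:
        return False
    previous, current = costs[-2], costs[-1]
    return (previous - current) < rmi * abs(previous)

def stop_on_degraded_steps(costs, mds):
    # Fine-tuning: stop once mds consecutive evaluations have passed without
    # improving on the best validation classification error seen so far.
    best_index = min(range(len(costs)), key=costs.__getitem__)
    return (len(costs) - 1 - best_index) >= mds

# Reconstruction error improved by less than 1% of its previous value -> stop
print(stop_on_relative_improvement([0.250, 0.200, 0.199], rmi=0.01))       # True
# Validation error has not improved for 3 consecutive evaluations -> stop
print(stop_on_degraded_steps([0.020, 0.018, 0.019, 0.019, 0.021], mds=3))  # True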

The StackedAutoassociatorsNet class can be found in $PLEARNDIR/plearn_learners/online.

Conducted Experiments

Here are some of the experiments that we conducted in order to gain some insight into the behavior of the Stacked Autoassociators model. We used the following data set:

1. MNIST-50K/10K/10K: MNIST data set, with 50000 training examples, 10000 validation examples and 10000 test examples.

We also used early stopping based on the classification error on a validation set in all the experiments. We give the results for several choices of the hyper-parameters, in order to provide information about their influence on the generalization error.

Standard Neural Network

Here are some experiments on MNIST-50K/10K/10K using a standard Neural Network with tanh activation hidden units, in order to make comparisons:

| weight decay | penalty type | learning rate | decrease constant | nhid1 | nhid2 | seed | train error | valid error | test error |
1e-05 L2_square 0.005 0 250 0 654321 2e-05 0.0179 0.0187
0 L2_square 0.005 0 500 0 333777 0 0.0185 0.0181
0 L1 0.005 0 500 0 333777 0 0.0185 0.0181
1e-07 L2_square 0.005 0 500 0 333777 0 0.0185 0.0181
1e-07 L1 0.005 0 500 0 333777 0 0.0185 0.018
1e-05 L2_square 0.005 0 500 0 333777 2e-05 0.0185 0.0176
1e-05 L2_square 0.005 0 250 0 333777 0 0.0185 0.0207
1e-07 L1 0.005 0 250 0 654321 0 0.0186 0.0185
1e-05 L2_square 0.005 0 500 0 654321 0 0.0187 0.0177
0 L2_square 0.005 0 250 0 654321 2e-05 0.0187 0.0188
0 L1 0.005 0 250 0 654321 2e-05 0.0187 0.0188
1e-07 L2_square 0.005 0 250 0 654321 2e-05 0.0187 0.0189
0 L2_square 0.005 0 500 0 654321 0 0.0189 0.0183
0 L1 0.005 0 500 0 654321 0 0.0189 0.0183
1e-07 L2_square 0.005 0 500 0 654321 0 0.0189 0.0183
1e-07 L1 0.005 0 500 0 654321 0 0.019 0.0185
1e-05 L1 0.005 0 250 0 333777 0.00048 0.019 0.0188
0 L2_square 0.005 0 800 0 1234567 0 0.0191 0.0193
0 L2_square 0.005 1e-08 800 0 1234567 0 0.0191 0.0193
1e-06 L2_square 0.005 0 800 0 1234567 0 0.0192 0.0193
1e-06 L2_square 0.005 1e-08 800 0 1234567 0 0.0192 0.0194
0 L2_square 0.005 1e-08 800 0 654321 0 0.0196 0.0186
0 L2_square 0.005 0 800 0 654321 0 0.0196 0.0187
0 L2_square 0.005 0 500 300 333777 0 0.0196 0.0233
0 L2_square 0.0005 0 800 0 1234567 0.0004 0.0196 0.0205
0 L2_square 0.0005 1e-08 800 0 1234567 0.00044 0.0196 0.0205
1e-06 L2_square 0.005 0 800 0 654321 0 0.0196 0.0185
1e-06 L2_square 0.005 1e-08 800 0 654321 0 0.0196 0.0185
1e-06 L2_square 0.0005 1e-08 800 0 1234567 0.00046 0.0196 0.0205
1e-06 L2_square 0.005 0 800 0 333777 0 0.0197 0.0197
1e-06 L2_square 0.005 1e-08 800 0 333777 0 0.0197 0.0197
0 L2_square 0.005 0 250 0 333777 8e-05 0.0197 0.0214
0 L1 0.005 0 250 0 333777 8e-05 0.0197 0.0214
1e-07 L2_square 0.005 0 250 0 333777 8e-05 0.0197 0.0214

Deep Neural Network, one-shot training

Here are some experiments on MNIST-50K/10K/10K with a neural network having the same architecture as a Stacked Autoassociators model with 3 layers, but where the greedy layer-wise unsupervised procedure is not used (i.e. all layers are initialized randomly):

| nhid1 | nhid2 | nhid3 | weight decay | penalty type | learning rate | decrease constant | seed | train error | valid error | test error |
700 600 500 1e-07 L2_square 0.05 0 333777 4e-05 0.0207 0.024
1000 1000 1000 0 L2_square 0.05 0 333777 2e-05 0.0208 0.0239
700 600 500 1e-05 L2_square 0.05 0 654321 0.00688 0.022 0.0241
1000 1000 1000 1e-07 L2_square 0.05 0 654321 2e-05 0.022 0.0235
1000 1000 1000 1e-07 L1 0.05 0 333777 0.00084 0.0221 0.0258
350 300 250 1e-07 L1 0.05 0 654321 2e-05 0.0222 0.0204
350 300 250 1e-07 L1 0.05 0 333777 2e-05 0.0222 0.0234
700 600 500 1e-07 L1 0.05 0 654321 4e-05 0.0222 0.0221
700 600 500 1e-05 L2_square 0.05 0 333777 0.00536 0.0223 0.0231
1000 1000 1000 1e-07 L2_square 0.05 0 333777 0 0.0223 0.0233
350 300 250 1e-05 L2_square 0.05 0 654321 0.0059 0.0224 0.0238
1000 1000 1000 1e-07 L1 0.05 0 654321 0.00014 0.0226 0.0241
350 300 250 1e-05 L2_square 0.05 0 333777 0.00784 0.0227 0.0258
350 300 250 0 L2_square 0.05 0 654321 0.00046 0.0228 0.0247
350 300 250 0 L1 0.05 0 654321 0.00046 0.0228 0.0247
1000 1000 1000 0 L1 0.05 0 654321 4e-05 0.023 0.0252
350 300 250 1e-07 L2_square 0.05 0 654321 0.00012 0.0234 0.0237
700 600 500 0 L2_square 0.05 0 654321 0.00016 0.0234 0.023
700 600 500 0 L1 0.05 0 654321 0.00016 0.0234 0.023
700 600 500 1e-07 L1 0.05 0 333777 0.0025 0.0236 0.0262
1000 1000 1000 0 L1 0.05 0 333777 0.00172 0.0239 0.0249
1000 1000 1000 1e-05 L2_square 0.05 0 654321 0.00962 0.0246 0.0293
1000 1000 1000 0 L2_square 0.05 0 654321 0.00352 0.0248 0.0278
700 600 500 0 L1 0.05 0 333777 0.0037 0.0249 0.0265
700 600 500 0 L2_square 0.05 0 333777 0.0037 0.0249 0.0265
350 300 250 1e-07 L2_square 0.05 0 333777 0.0019 0.0255 0.0251
1000 1000 1000 1e-05 L2_square 0.05 0 333777 0.0115 0.0266 0.0268
700 600 500 1e-07 L2_square 0.05 0 654321 0.00656 0.0267 0.0277
350 300 250 0 L2_square 0.05 0 333777 0.0038 0.0271 0.0276
350 300 250 0 L1 0.05 0 333777 0.0038 0.0271 0.0276
350 300 250 1e-05 L1 0.05 0 333777 0.89798 0.897 0.899
350 300 250 1e-05 L1 0.05 0 654321 0.89798 0.897 0.899
700 600 500 1e-05 L1 0.05 0 333777 0.89798 0.897 0.899
700 600 500 1e-05 L1 0.05 0 654321 0.89798 0.897 0.899
1000 1000 1000 1e-05 L1 0.05 0 654321 0.89798 0.897 0.899
1000 1000 1000 1e-05 L1 0.05 0 333777 0.89798 0.897 0.899

Deep Neural Network, with greedy supervised stages

Here are some experiments on MNIST-50K/10K/10K with a neural network having the same architecture as a Stacked Autoassociators model with 3 layers, but where the greedy stages use a supervised signal, namely the NLL of the target class under a log-linear model on top of the last added hidden layer:

| nhid1 | nhid2 | nhid3 | weight decay | penalty type | learning rate | decrease constant | seed | train error | valid error | test error |
700 600 500 0 L2_square 0.05 0 333777 0 0.0174 0.0204
700 600 500 0 L1 0.05 0 333777 0 0.018 0.0211
1000 1000 1000 0 L2_square 0.05 0 654321 0 0.0188 0.0198
1000 1000 1000 0 L1 0.05 0 654321 0 0.0188 0.0198
350 300 250 0 L1 0.05 0 333777 0.0005 0.0194 0.0224
350 300 250 0 L1 0.05 0 654321 0 0.0194 0.0188
350 300 250 0 L2_square 0.05 0 654321 0 0.0202 0.0182
700 600 500 0 L1 0.05 0 654321 0.00054 0.0205 0.0213
700 600 500 0 L2_square 0.05 0 654321 0.00272 0.0206 0.0245
350 300 250 0 L2_square 0.05 0 333777 0.0034 0.0221 0.0258
1000 1000 1000 0 L2_square 0.05 0 333777 0.0056 0.0222 0.0246
1000 1000 1000 0 L1 0.05 0 333777 0.00156 0.0225 0.0232

Deep Neural Network, with greedy unsupervised stages

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 3 layers, where we only validated 3 architectures (nhids = [700,600,500], [1000,1000,1000] and [500,500,1000]), sometimes with some weight decay:

| nhid1 | nhid2 | nhid3 | weight decay | penalty type | learning rate | decrease constant | seed | tied reconstruction weights | train error | valid error | test error |
700 600 500 1e-07 L2_square 0.05 0 333777 1 0 0.0137 0.0138
700 600 500 1e-07 L2_square 0.05 0 654321 1 0 0.0137 0.0148
700 600 500 0 L1 0.05 0 333777 1 0 0.0139 0.0144
1000 1000 1000 0 L2_square 0.05 0 333777 0 2e-05 0.0141 0.0164
1000 1000 1000 0 L1 0.05 0 333777 0 2e-05 0.0141 0.0164
500 500 1000 1e-07 L2_square 0.05 0 654321 1 0 0.0141 0.0154
1000 1000 1000 0 L1 0.05 0 333777 1 0 0.0143 0.015
1000 1000 1000 0 L2_square 0.05 0 333777 1 0 0.0143 0.015
700 600 500 0 L2_square 0.05 0 654321 1 0 0.0144 0.0148
700 600 500 0 L2_square 0.05 0 333777 1 0 0.0144 0.0143
700 600 500 0 L1 0.05 0 654321 1 0 0.0145 0.0147
500 500 1000 1e-07 L1 0.05 0 654321 0 0 0.0145 0.0161
700 600 500 0 L2_square 0.05 0 333777 1 0 0.0145 0.0144
700 600 500 0 L1 0.05 0 333777 0 0 0.0146 0.015
700 600 500 0 L2_square 0.05 0 333777 0 0 0.0146 0.015
500 500 1000 1e-07 L1 0.05 0 654321 1 0 0.0146 0.0155
1000 1000 1000 1e-07 L2_square 0.05 0 333777 1 2e-05 0.0146 0.0147
1000 1000 1000 1e-07 L1 0.05 0 333777 1 0 0.0146 0.0138
500 500 1000 0 L1 0.05 0 333777 0 0 0.0147 0.0176
500 500 1000 0 L2_square 0.05 0 333777 0 0 0.0147 0.0176
1000 1000 1000 1e-07 L1 0.05 0 654321 1 2e-05 0.0147 0.014
1000 1000 1000 1e-07 L1 0.05 0 333777 0 0 0.0149 0.0141
1000 1000 1000 0 L1 0.05 0 654321 1 2e-05 0.0151 0.0145
1000 1000 1000 0 L2_square 0.05 0 654321 1 2e-05 0.0151 0.0145
700 600 500 0 L1 0.05 0 654321 0 0 0.0152 0.0149
700 600 500 0 L2_square 0.05 0 654321 0 0 0.0152 0.0149
700 600 500 1e-07 L1 0.05 0 654321 0 0 0.0152 0.0159
700 600 500 1e-07 L1 0.05 0 333777 1 0 0.0152 0.0154
500 500 1000 1e-07 L1 0.05 0 333777 1 0 0.0152 0.0144
700 600 500 1e-07 L1 0.05 0 654321 1 0 0.0153 0.0155
700 600 500 0 L2_square 0.05 0 333777 0 0 0.0154 0.0159
700 600 500 0 L1 0.05 0 333777 0 0 0.0154 0.0159
700 600 500 0 L2_square 0.05 0 654321 0 0 0.0155 0.0149
700 600 500 0 L1 0.05 0 654321 0 0 0.0155 0.0149
700 600 500 0 L2_square 0.1 1e-05 333777 1 0 0.0156 0.0154
700 600 500 0 L2_square 0.01 1e-09 333777 1 0 0.0156 0.0174
700 600 500 0 L2_square 0.01 0 333777 1 0 0.0156 0.0173
1000 1000 1000 1e-07 L2_square 0.05 0 654321 1 0 0.0156 0.0146

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 3 layers, where we tested many different architectures but did not use weight decay:

| nhid1 | nhid2 | nhid3 | learning rate | decrease constant | seed | tied reconstruction weights | reconstruct input | train error | valid error | test error |
1000 600 750 0.05 1e-08 654321 1 0 0 0.0124 0.0146
600 1000 750 0.05 0 654321 1 0 0 0.0125 0.0132
1000 500 750 0.05 0 654321 1 0 0 0.0129 0.0151
1000 500 500 0.05 0 654321 1 0 0 0.0131 0.0145
1000 600 750 0.05 0 654321 1 0 0 0.0135 0.0149
700 600 750 0.05 1e-08 654321 1 0 0 0.0135 0.0144
1000 600 500 0.05 0 654321 1 0 0 0.0136 0.0157
600 1000 500 0.05 0 654321 1 0 0 0.0136 0.0142
1000 500 750 0.05 1e-08 654321 1 0 0 0.0137 0.0136
700 600 400 0.05 1e-08 654321 1 0 0 0.0137 0.0147
1000 1000 750 0.05 1e-08 654321 1 0 0 0.0138 0.0146
1000 1000 750 0.05 0 654321 1 0 0 0.0139 0.0143
600 500 500 0.05 0 654321 1 0 0 0.014 0.0145
700 600 500 0.05 1e-08 654321 1 0 0 0.014 0.0142
1000 500 500 0.05 0 654321 1 1 0 0.0141 0.0151
1000 600 400 0.05 0 654321 1 1 0 0.0141 0.0151
1000 600 750 0.05 0 654321 1 1 0 0.0141 0.0159
1000 1000 500 0.05 0 654321 1 0 0 0.0141 0.0142
1000 500 400 0.05 1e-08 654321 1 0 0 0.0141 0.0153
1000 600 400 0.05 0 654321 1 0 0 0.0142 0.0158
700 1000 500 0.05 0 654321 1 0 0 0.0142 0.0142
600 600 500 0.05 0 654321 1 0 0 0.0142 0.0159
600 1000 750 0.05 0 654321 1 1 0 0.0142 0.0161
1000 1000 400 0.05 1e-08 654321 1 0 0 0.0142 0.0158
1000 1000 500 0.05 1e-08 654321 1 1 0 0.0142 0.015
700 500 500 0.05 1e-08 654321 1 0 0 0.0142 0.0139
600 1000 500 0.05 1e-08 654321 1 0 0 0.0142 0.0148
1000 500 400 0.05 0 654321 1 1 0.0003 0.0143 0.017
1000 500 400 0.05 0 654321 1 0 0 0.0143 0.0158
600 500 750 0.05 0 654321 1 0 0 0.0144 0.0142
1000 500 500 0.05 1e-08 654321 1 0 2e-05 0.0144 0.0146
1000 1000 500 0.05 1e-08 654321 1 0 0 0.0144 0.0144
700 1000 750 0.05 1e-08 654321 1 0 0 0.0144 0.0141
1000 600 400 0.05 1e-08 654321 1 0 0 0.0144 0.0137
1000 1000 400 0.05 0 654321 1 0 2e-05 0.0145 0.015
700 600 500 0.05 0 654321 1 0 0 0.0145 0.0147
700 1000 500 0.05 1e-08 654321 1 0 0 0.0145 0.0152
700 500 400 0.05 1e-08 654321 1 1 0 0.0145 0.0152
600 1000 400 0.05 1e-08 654321 0 0 0 0.0146 0.0162
600 600 400 0.05 0 654321 1 0 0 0.0146 0.0161
600 500 750 0.05 0 654321 1 1 0 0.0146 0.017
600 500 400 0.05 0 654321 1 0 0 0.0146 0.0157
1000 600 500 0.05 0 654321 1 1 2e-05 0.0147 0.016
700 600 750 0.05 0 654321 1 0 0 0.0147 0.0143
1000 1000 500 0.05 0 654321 1 1 0 0.0147 0.0153
700 1000 400 0.05 0 654321 1 1 0 0.0147 0.0146
1000 500 750 0.05 1e-08 654321 1 1 0 0.0147 0.0147
700 1000 500 0.05 1e-08 654321 1 1 0 0.0147 0.0162
700 600 500 0.05 0 654321 1 1 2e-05 0.0148 0.0167

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 4 layers, where we only tested 2 architectures (nhids = [700,600,500,400] and [1000,1000,1000,1000]) with weight decays > 0:

| nhid1 | nhid2 | nhid3 | nhid4 | learning rate | decrease constant | weight decay | penalty type | seed | tied reconstruction weights | train error | valid error | test error |
1000 1000 1000 1000 0.05 0 1e-07 L2_square 333777 0 0 0.0134 0.0151
1000 1000 1000 1000 0.05 0 1e-07 L1 333777 0 0 0.014 0.0145
1000 1000 1000 1000 0.05 0 1e-07 L2_square 654321 1 0 0.014 0.015
700 600 500 400 0.05 0 1e-07 L1 333777 1 2e-05 0.0141 0.0151
700 600 500 400 0.05 0 1e-07 L2_square 654321 1 0 0.0142 0.0153
700 600 500 400 0.05 0 1e-07 L1 654321 0 0 0.0143 0.0146
700 600 500 400 0.05 0 1e-07 L1 654321 1 0 0.0146 0.0151
1000 1000 1000 1000 0.05 0 1e-07 L2_square 654321 0 0 0.0146 0.0145
700 600 500 400 0.05 0 1e-07 L2_square 333777 0 0 0.0147 0.014
700 600 500 400 0.05 0 1e-07 L1 333777 0 0 0.0147 0.0138
1000 1000 1000 1000 0.05 0 1e-07 L2_square 333777 1 0 0.0147 0.0154
1000 1000 1000 1000 0.05 0 1e-07 L1 654321 1 0 0.0148 0.0158
1000 1000 1000 1000 0.05 0 1e-07 L1 333777 1 0 0.0148 0.0147
1000 1000 1000 1000 0.05 0 1e-07 L1 654321 0 0 0.015 0.0156
700 600 500 400 0.05 0 1e-07 L2_square 654321 0 0 0.0155 0.0143
700 600 500 400 0.05 0 1e-05 L1 333777 1 0.00358 0.0168 0.0165
700 600 500 400 0.05 0 1e-05 L2_square 654321 1 0.00458 0.0176 0.0192
700 600 500 400 0.05 0 1e-05 L2_square 654321 0 0.00502 0.018 0.0186
700 600 500 400 0.05 0 1e-05 L2_square 654321 0 0.00502 0.018 0.0186
700 600 500 400 0.05 0 1e-05 L2_square 654321 0 0.00502 0.018 0.0186
700 600 500 400 0.05 0 1e-05 L2_square 333777 1 0.00456 0.018 0.0188
1000 1000 1000 1000 0.05 0 1e-05 L2_square 654321 0 0.00594 0.0184 0.02
1000 1000 1000 1000 0.05 0 1e-05 L2_square 333777 0 0.0052 0.0185 0.0185
700 600 500 400 0.05 0 1e-05 L1 333777 0 0.0086 0.0188 0.0197
1000 1000 1000 1000 0.05 0 1e-05 L1 333777 1 0.00912 0.019 0.0194
1000 1000 1000 1000 0.05 0 1e-05 L2_square 333777 1 0.00654 0.0192 0.02
700 600 500 400 0.05 0 1e-05 L1 654321 0 0.0088 0.0194 0.0193
700 600 500 400 0.05 0 1e-05 L2_square 333777 0 0.00534 0.0207 0.0199
1000 1000 1000 1000 0.05 0 1e-05 L1 654321 0 0.01152 0.0211 0.021
1000 1000 1000 1000 0.05 0 1e-05 L1 654321 1 0.0155 0.0233 0.0262
1000 1000 1000 1000 0.05 0 1e-05 L1 333777 0 0.01738 0.0249 0.0274

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 4 layers, where we tested many different architectures but did not use weight decay:

| nhid1 | nhid2 | nhid3 | nhid4 | learning rate | decrease constant | seed | tied reconstruction weights | reconstruct input | train error | valid error | test error |
1000 1000 400 750 0.05 1e-08 654321 0 0 0 0.013 0.0149
600 600 750 400 0.05 1e-08 654321 0 0 0 0.0131 0.0144
1000 600 400 750 0.05 1e-08 654321 0 0 0 0.0132 0.0139
700 500 750 400 0.05 0 654321 1 0 0 0.0132 0.016
1000 500 500 300 0.05 0 654321 0 0 0 0.0133 0.0144
1000 1000 400 400 0.05 0 654321 0 0 0 0.0134 0.0144
1000 1000 750 400 0.05 0 654321 0 0 0 0.0134 0.0138
1000 500 750 750 0.05 0 654321 0 0 0 0.0135 0.0141
700 1000 400 400 0.05 0 654321 0 0 0 0.0135 0.0149
600 600 400 400 0.05 0 654321 0 0 0 0.0135 0.0146
1000 600 500 750 0.05 1e-08 654321 0 0 0 0.0135 0.0139
700 500 400 750 0.05 1e-08 654321 0 0 0 0.0135 0.0139
1000 1000 750 300 0.05 0 654321 1 0 0 0.0136 0.0145
600 600 750 300 0.05 1e-08 654321 0 0 0 0.0136 0.0145
700 600 750 750 0.05 0 654321 0 0 0 0.0137 0.0143
1000 600 500 400 0.05 1e-08 654321 0 0 0 0.0137 0.0132
700 600 500 750 0.05 1e-08 654321 0 0 0 0.0137 0.0147
700 600 750 300 0.05 1e-08 654321 0 0 0 0.0137 0.0149
1000 500 750 750 0.05 0 654321 1 0 0 0.0137 0.0155
1000 1000 500 400 0.05 0 654321 0 0 0 0.0138 0.0147
600 600 400 750 0.05 1e-08 654321 0 0 0 0.0138 0.0135
600 500 400 400 0.05 1e-08 654321 0 0 0 0.0138 0.0144
700 600 750 400 0.05 0 654321 0 0 0 0.0139 0.0137
700 500 400 750 0.05 0 654321 0 0 0 0.0139 0.0142
600 1000 500 750 0.05 0 654321 0 0 0 0.0139 0.0139
1000 600 750 300 0.05 1e-08 654321 0 0 0 0.0139 0.0136
1000 1000 750 400 0.05 0 654321 1 0 0 0.0139 0.0137
700 1000 500 300 0.05 0 654321 1 0 0 0.0139 0.0151
700 600 400 300 0.05 0 654321 0 0 0 0.014 0.0137
1000 1000 750 750 0.05 1e-08 654321 0 0 0 0.014 0.0135
1000 600 750 750 0.05 1e-08 654321 0 0 0 0.014 0.0133
1000 1000 750 750 0.05 0 654321 1 0 0 0.014 0.0143
1000 600 500 400 0.05 0 654321 1 0 0 0.014 0.0146
600 600 400 300 0.05 1e-08 654321 0 0 0 0.014 0.0156
600 1000 400 400 0.05 1e-08 654321 0 0 0 0.014 0.0145
700 600 500 400 0.05 0 654321 1 0 4e-05 0.014 0.0138
1000 1000 750 750 0.05 0 654321 0 0 0 0.0141 0.0139
700 1000 400 400 0.05 1e-08 654321 0 0 0 0.0141 0.0138
700 600 500 400 0.05 1e-08 654321 0 0 0 0.0141 0.0138
700 500 400 300 0.05 1e-08 654321 0 0 0 0.0141 0.0153
700 500 400 400 0.05 1e-08 654321 0 0 0 0.0141 0.0145
700 500 750 300 0.05 1e-08 654321 0 0 0 0.0141 0.0147
600 1000 400 750 0.05 1e-08 654321 0 0 0 0.0141 0.0153
600 1000 750 750 0.05 1e-08 654321 0 0 0 0.0141 0.0145
700 600 750 300 0.05 0 654321 1 0 0 0.0141 0.0161

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 2 layers, where we tested many different architectures but did not use weight decay:

| nhid1 | nhid2 | learning rate | decrease constant | seed | tied reconstruction weights | train error | valid error | test error |
600 1000 0.05 0 654321 1 0 0.0149 0.0156
1000 1000 0.05 0 654321 1 2e-05 0.015 0.0166
700 1000 0.05 0 654321 0 0.0002 0.0151 0.0169
700 1000 0.05 0 654321 0 0.0002 0.0151 0.0169
1000 1000 0.05 1e-08 654321 0 4e-05 0.0151 0.0153
1000 600 0.05 0 654321 1 0 0.0152 0.0168
1000 1000 0.05 0 654321 0 0 0.0152 0.0156
600 600 0.05 0 654321 0 0 0.0152 0.0161
600 500 0.05 0 654321 1 0 0.0152 0.0169
600 500 0.05 0 654321 0 0 0.0154 0.0153
1000 500 0.05 1e-08 654321 0 0 0.0156 0.0148
1000 600 0.05 1e-08 654321 0 0 0.0156 0.0152
700 600 0.05 0 654321 1 4e-05 0.0157 0.0155
700 1000 0.05 0 654321 1 0 0.0157 0.0154
1000 500 0.05 0 654321 0 0 0.0158 0.0154
1000 500 0.05 0 654321 1 0 0.0158 0.0166
1000 500 0.05 0 654321 0 0 0.0158 0.0154
700 600 0.05 0 654321 0 0 0.0158 0.0154
1000 500 0.05 0 654321 1 0 0.0158 0.0166
700 500 0.05 0 654321 1 0 0.0158 0.016
700 500 0.05 0 654321 0 0 0.0159 0.0159
600 600 0.05 0 654321 1 0 0.0162 0.016
700 600 0.05 1e-08 654321 0 0 0.0163 0.0173
600 1000 0.05 0 654321 0 0 0.0164 0.0149
1000 600 0.05 0 654321 0 2e-05 0.0165 0.0162
1000 600 0.05 0 654321 0 2e-05 0.0165 0.0162
700 500 0.05 1e-08 654321 0 2e-05 0.0165 0.0173
700 1000 0.05 1e-08 654321 0 0 0.0171 0.0164

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 5 layers, where we tested many different architectures but did not use weight decay:

| nhid1 | nhid2 | nhid3 | nhid4 | nhid5 | learning rate | decrease constant | seed | tied reconstruction weights | reconstruct input | train error | valid error | test error |
1000 1000 500 400 750 0.05 0 654321 1 0 0 0.0122 0.0138
1000 600 400 400 300 0.05 0 654321 0 0 0 0.0127 0.0146
1000 1000 400 400 300 0.05 0 654321 1 0 0 0.0129 0.0141
1000 500 750 300 750 0.05 0 654321 1 0 0 0.0129 0.0136
700 1000 400 400 200 0.05 0 654321 0 0 0 0.013 0.0132
1000 1000 750 300 300 0.05 0 654321 1 0 0 0.0132 0.0155
1000 500 500 750 200 0.05 0 654321 1 0 0 0.0132 0.0146
1000 1000 500 300 200 0.05 0 654321 0 0 0 0.0133 0.0147
1000 600 750 400 300 0.05 0 654321 1 0 0 0.0133 0.014
1000 1000 400 300 200 0.05 0 654321 1 0 0 0.0134 0.0143
1000 500 500 750 300 0.05 0 654321 1 0 0 0.0134 0.0154
1000 1000 400 750 200 0.05 0 654321 1 0 0 0.0135 0.015
1000 1000 400 750 750 0.05 0 654321 0 0 0 0.0136 0.0142
1000 600 500 400 750 0.05 0 654321 1 0 8e-05 0.0136 0.0157
1000 1000 500 400 200 0.05 0 654321 1 0 0 0.0136 0.0139
1000 500 500 400 200 0.05 0 654321 1 0 0 0.0136 0.0151
1000 500 400 400 300 0.05 0 654321 0 0 0 0.0137 0.0137
1000 500 500 400 750 0.05 0 654321 1 0 0 0.0137 0.0147
1000 1000 500 300 750 0.05 0 654321 0 0 0 0.0138 0.0131
1000 600 750 300 200 0.05 0 654321 1 0 0 0.0138 0.015
1000 500 750 400 750 0.05 0 654321 1 0 0 0.0138 0.0145
1000 500 400 750 300 0.05 0 654321 0 0 0 0.0138 0.0144
1000 500 500 750 750 0.05 0 654321 1 0 0 0.0138 0.0135
700 1000 750 750 200 0.05 0 654321 0 0 0 0.0138 0.0126
1000 1000 500 400 300 0.05 0 654321 0 0 0 0.0139 0.0145
1000 1000 400 300 300 0.05 0 654321 1 0 0 0.0139 0.0123
1000 1000 500 750 300 0.05 0 654321 1 0 0 0.0139 0.0143
1000 600 500 750 200 0.05 0 654321 0 0 0 0.0139 0.0154
1000 600 500 300 750 0.05 0 654321 1 0 0.00016 0.0139 0.0163
1000 1000 500 300 200 0.05 0 654321 1 0 0 0.014 0.0144
1000 1000 500 750 200 0.05 0 654321 1 0 0 0.014 0.0135
1000 1000 500 400 200 0.05 0 654321 0 0 0 0.014 0.0133
1000 600 750 400 200 0.05 0 654321 1 0 0 0.014 0.0154
1000 1000 750 750 300 0.05 0 654321 1 0 0 0.014 0.0142
1000 1000 750 750 750 0.05 0 654321 1 0 0 0.014 0.0144
1000 1000 500 750 750 0.05 0 654321 1 0 0 0.014 0.0138
1000 600 750 400 750 0.05 0 654321 1 0 0 0.014 0.0158
1000 600 500 750 200 0.05 0 654321 1 0 0 0.014 0.0138
1000 600 750 400 200 0.05 0 654321 0 0 0 0.014 0.0146
1000 600 500 400 300 0.05 0 654321 0 0 0 0.014 0.0144
1000 600 500 750 750 0.05 0 654321 1 0 0 0.0141 0.0141
1000 600 750 300 300 0.05 0 654321 0 0 0 0.0141 0.0139
700 1000 750 300 750 0.05 0 654321 0 0 0 0.0141 0.0138
700 1000 750 300 200 0.05 0 654321 0 0 0 0.0141 0.0141
1000 1000 750 300 750 0.05 0 654321 1 0 0 0.0142 0.0156
1000 1000 400 400 750 0.05 0 654321 0 0 2e-05 0.0142 0.015
1000 600 750 750 300 0.05 0 654321 1 0 0 0.0142 0.0134
1000 1000 500 300 750 0.05 0 654321 1 0 0 0.0142 0.0146
1000 600 500 300 300 0.05 0 654321 1 0 0 0.0142 0.0136
1000 600 750 750 750 0.05 0 654321 0 0 0 0.0142 0.0144
1000 500 400 750 200 0.05 0 654321 0 0 0 0.0142 0.0146
1000 500 500 300 300 0.05 0 654321 0 0 0 0.0142 0.0141
1000 500 750 400 300 0.05 0 654321 1 0 0 0.0142 0.0145
1000 500 500 300 200 0.05 0 654321 0 0 0 0.0142 0.015

Here are some experiments on MNIST-50K/10K/10K with Stacked Autoassociators models with 3 layers, where we tested different learning rates for the greedy and fine-tuning phases:

| nhid1 | nhid2 | nhid3 | learning rate | decrease constant | supervised learning rate | supervised decrease constant | seed | tied reconstruction weights | train error | valid error | test error |
700 600 500 0.05 0 0.05 0 333777 1 0 0.0139 0.0144
700 600 500 0.05 0 0.05 1e-08 654321 1 0 0.0144 0.0148
700 600 500 0.05 0 0.05 1e-08 333777 1 0 0.0144 0.0143
700 600 500 0.05 0 0.05 0 654321 1 0 0.0145 0.0147
700 600 500 0.05 0 0.05 1e-08 333777 0 0 0.0147 0.0148
500 500 1000 0.05 0 0.05 0 654321 1 0 0.0149 0.0139
500 500 1000 0.05 0 0.05 0 654321 0 0 0.0153 0.0167
700 600 500 0.05 0 0.05 1e-08 654321 0 0.0001 0.0153 0.0161
500 500 1000 0.05 0 0.05 1e-08 333777 0 0 0.0154 0.0179
700 600 500 0.05 0 0.05 0 333777 0 0 0.0154 0.0159
700 600 500 0.05 0 0.05 0 654321 0 0 0.0155 0.0149
700 600 500 0.05 0 0.005 0 333777 1 0 0.0156 0.0168
700 600 500 0.05 0 0.005 1e-08 333777 1 0 0.0156 0.0167
700 600 500 0.05 0 0.01 0 654321 1 0 0.0158 0.0178
500 500 1000 0.05 0 0.05 1e-08 333777 1 0 0.0158 0.0139
500 500 1000 0.05 0 0.05 0 333777 1 0 0.0159 0.0157
700 600 500 0.05 0 0.01 1e-08 654321 1 0 0.016 0.0175
500 500 1000 0.05 0 0.05 1e-08 654321 1 8e-05 0.0162 0.0156
700 600 500 0.05 0 0.005 0 654321 1 2e-05 0.0163 0.0187
700 600 500 0.05 0 0.005 1e-08 654321 1 2e-05 0.0163 0.0187
700 600 500 0.05 0 0.01 0 333777 1 0 0.0163 0.0168
500 500 1000 0.05 0 0.05 1e-08 654321 0 0 0.0164 0.0158
500 500 1000 0.05 0 0.05 0 333777 0 0 0.0167 0.0152
500 500 1000 0.05 0 0.01 0 333777 1 0 0.0168 0.0171
700 600 500 0.05 0 0.01 0 654321 0 0 0.017 0.0191
700 600 500 0.05 0 0.01 1e-08 654321 0 0 0.0171 0.0183
500 500 1000 0.05 0 0.01 0 654321 1 0 0.0173 0.0163
500 500 1000 0.05 0 0.01 1e-08 333777 1 0 0.0173 0.0157
700 600 500 0.05 0 0.005 1e-08 654321 0 0 0.0175 0.0197
500 500 1000 0.05 0 0.01 1e-08 654321 0 0 0.0175 0.0176
700 600 500 0.05 0 0.005 0 654321 0 0 0.0175 0.0197
700 600 500 0.05 0 0.01 1e-08 333777 1 0.00036 0.0175 0.0174
500 500 1000 0.05 0 0.005 0 333777 1 0 0.0177 0.0164
500 500 1000 0.05 0 0.005 1e-08 333777 1 0 0.0177 0.0164
500 500 1000 0.05 0 0.005 0 654321 0 0 0.0178 0.0201
500 500 1000 0.05 0 0.005 1e-08 654321 0 0 0.0178 0.0201
700 600 500 0.05 0 0.01 0 333777 0 0 0.018 0.0194
500 500 1000 0.05 0 0.005 1e-08 654321 1 0 0.0181 0.0183
500 500 1000 0.05 0 0.005 0 654321 1 0 0.0181 0.0183
700 600 500 0.05 0 0.005 0 333777 0 0 0.0184 0.0189
700 600 500 0.05 0 0.005 1e-08 333777 0 0 0.0184 0.0189
500 500 1000 0.05 0 0.01 1e-08 333777 0 0 0.0185 0.0197
500 500 1000 0.05 0 0.01 1e-08 654321 1 0 0.0187 0.0179
700 600 500 0.05 0 0.01 1e-08 333777 0 0 0.0187 0.0182
500 500 1000 0.05 0 0.01 0 654321 0 0 0.0192 0.019
500 500 1000 0.05 0 0.005 0 333777 0 0.0026 0.0196 0.0223
500 500 1000 0.05 0 0.005 1e-08 333777 0 0.00262 0.0196 0.0223
500 500 1000 0.05 0 0.01 0 333777 0 0 0.0205 0.0192

Same experiments, but where only the output layer is trained with the supervised signal (the other weights are kept at the values obtained from the unsupervised phase):

| nhid1 | nhid2 | nhid3 | learning rate | decrease constant | supervised learning rate | supervised decrease constant | seed | tied reconstruction weights | train error | valid error | test error |
500 500 1000 0.05 0 0.01 0 654321 1 0.0174 0.0279 0.0301
500 500 1000 0.05 0 0.01 0 333777 1 0.0187 0.028 0.0302
500 500 1000 0.05 0 0.01 1e-08 333777 1 0.01682 0.0284 0.0293
700 600 500 0.05 0 0.005 1e-08 333777 1 0.02282 0.0303 0.0334
700 600 500 0.05 0 0.005 0 333777 1 0.0228 0.0304 0.0333
700 600 500 0.05 0 0.01 1e-08 333777 1 0.01984 0.0306 0.0346
500 500 1000 0.05 0 0.01 1e-08 654321 1 0.02108 0.0307 0.0289
500 500 1000 0.05 0 0.005 0 654321 1 0.02248 0.0308 0.0302
500 500 1000 0.05 0 0.005 1e-08 654321 1 0.02246 0.0308 0.0302
700 600 500 0.05 0 0.01 0 333777 1 0.02162 0.0312 0.0346
500 500 1000 0.05 0 0.005 0 333777 1 0.02594 0.0314 0.0313
500 500 1000 0.05 0 0.005 1e-08 333777 1 0.02554 0.0314 0.0313
700 600 500 0.05 0 0.005 1e-08 654321 1 0.02346 0.0316 0.0322
700 600 500 0.05 0 0.005 0 654321 1 0.02338 0.0317 0.032
700 600 500 0.05 0 0.01 0 654321 0 0.02706 0.0321 0.0359
700 600 500 0.05 0 0.005 0 654321 0 0.02644 0.0321 0.0358
700 600 500 0.05 0 0.01 1e-08 654321 1 0.02486 0.0321 0.0338
700 600 500 0.05 0 0.005 1e-08 654321 0 0.02682 0.0322 0.036
700 600 500 0.05 0 0.01 0 654321 1 0.02508 0.0325 0.0339
700 600 500 0.05 0 0.01 1e-08 654321 0 0.02702 0.0328 0.0353
500 500 1000 0.05 0 0.05 0 333777 1 0.0203 0.0339 0.0376
500 500 1000 0.05 0 0.005 0 333777 0 0.02986 0.034 0.0393
500 500 1000 0.05 0 0.005 1e-08 333777 0 0.02976 0.034 0.0394
500 500 1000 0.05 0 0.01 1e-08 333777 0 0.03006 0.0342 0.0398
700 600 500 0.05 0 0.01 1e-08 333777 0 0.02586 0.0342 0.036
700 600 500 0.05 0 0.005 0 333777 0 0.02912 0.0347 0.0378
700 600 500 0.05 0 0.01 0 333777 0 0.03134 0.0349 0.0392
700 600 500 0.05 0 0.005 1e-08 333777 0 0.03 0.0349 0.0379
500 500 1000 0.05 0 0.01 1e-08 654321 0 0.02568 0.0356 0.0382
500 500 1000 0.05 0 0.05 1e-08 333777 1 0.02626 0.0362 0.0355
500 500 1000 0.05 0 0.01 0 333777 0 0.03144 0.0365 0.0378
500 500 1000 0.05 0 0.005 0 654321 0 0.02954 0.0367 0.0384
500 500 1000 0.05 0 0.005 1e-08 654321 0 0.02954 0.0367 0.0383
700 600 500 0.05 0 0.05 0 654321 0 0.03438 0.0369 0.0417
700 600 500 0.05 0 0.05 1e-08 333777 1 0.0329 0.0369 0.0431
500 500 1000 0.05 0 0.05 0 654321 1 0.02768 0.037 0.0383
700 600 500 0.05 0 0.05 1e-08 333777 0 0.02834 0.0378 0.042
500 500 1000 0.05 0 0.05 1e-08 654321 1 0.03192 0.0382 0.039

-- HugoLarochelle - 29 May 2006
