PLearn 0.1
Does the same thing as Hinton's deep belief nets. More...
#include <HintonDeepBeliefNet.h>
Public Member Functions

    HintonDeepBeliefNet ()
        Default constructor.
    virtual real density (const Vec &y) const
        Return probability density p(y | x).
    virtual real log_density (const Vec &y) const
        Return log of probability density log(p(y | x)).
    virtual real survival_fn (const Vec &y) const
        Return survival function: P(Y>y | x).
    virtual real cdf (const Vec &y) const
        Return cdf: P(Y<y | x).
    virtual void expectation (Vec &mu) const
        Return E[Y | x].
    virtual void variance (Mat &cov) const
        Return Var[Y | x].
    virtual void generate (Vec &y) const
        Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).
    virtual bool setPredictorPredictedSizes (int the_predictor_size, int the_predicted_size, bool call_parent=true)
        Set the 'predictor' and 'predicted' sizes for this distribution.
    virtual void setPredictor (const Vec &predictor, bool call_parent=true) const
        Set the value for the predictor part of a conditional probability.
    virtual void forget ()
        (Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
    virtual void train ()
        The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.
    virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
        Compute a cost that depends on the type of the first output: NLL if it is the density or the log-density; NLL and class error if it is the expectation.
    virtual TVec< string > getTestCostNames () const
        Return [ "NLL" ] (the only cost computed by a PDistribution).
    virtual TVec< string > getTrainCostNames () const
        Return [ ].
    virtual string classname () const
    virtual OptionList & getOptionList () const
    virtual OptionMap & getOptionMap () const
    virtual RemoteMethodMap & getRemoteMethodMap () const
    virtual HintonDeepBeliefNet * deepCopy (CopiesMap &copies) const
    virtual void build ()
        Simply calls inherited::build() then build_().
    virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
        Transforms a shallow copy into a deep copy.
Static Public Member Functions

    static string _classname_ ()
        REDEFINE test FOR PARALLELIZATION OF THE TEST.
    static OptionList & _getOptionList_ ()
    static RemoteMethodMap & _getRemoteMethodMap_ ()
    static Object * _new_instance_for_typemap_ ()
    static bool _isa_ (const Object *o)
    static void _static_initialize_ ()
    static const PPath & declaringFile ()
Public Attributes

    real learning_rate
        The learning rate used during greedy learning.
    real fine_tuning_learning_rate
        The learning rate used during the gradient descent.
    real fine_tuning_decrease_ct
    real weight_decay
        The weight decay.
    string initialization_method
        The method used to initialize the weights: "uniform_linear", "uniform_sqrt" or "zero".
    int n_layers
        Number of layers, including input layer and last layer, but not target layer.
    TVec< PP< RBMLayer > > layers
        Layers that learn representations of the input; layers[0] is the input layer, layers[n_layers-1] is the last layer.
    PP< RBMLayer > last_layer
        Last layer, learning joint representations of input and target.
    PP< RBMMultinomialLayer > target_layer
        Target (or label) layer.
    PP< RBMMixedLayer > joint_layer
        Concatenation of target_layer and layers[n_layers-2].
    TVec< PP< RBMLLParameters > > params
        RBMParameters linking the unsupervised layers.
    PP< RBMLLParameters > target_params
        Parameters linking target_layer and last_layer.
    PP< RBMJointLLParameters > joint_params
        Parameters linking joint_layer and last_layer.
    bool sum_parallel_contributions
        Only used when USING_MPI for parallelization: sum or average the delta-w contributions from different processes?
    TVec< int > training_schedule
        Number of examples to use during each of the different greedy steps of the training phase.
    TVec< int > use_sample_or_expectation
        Vector providing information on which information to use during the contrastive divergence step (0 = expectation only, 1 = sample for propagation but expectation in the CD update, 2 = sample only).
    PP< PTimer > ptimer
Static Public Attributes

    static StaticInitializer _static_initializer_
Protected Member Functions

    virtual void contrastiveDivergenceStep (const PP< RBMLayer > &down_layer, const PP< RBMParameters > &parameters, const PP< RBMLayer > &up_layer)
    virtual void greedyStep (const Vec &predictor, int params_index)
    virtual void jointGreedyStep (const Vec &input)
    virtual void fineTuneByGradientDescent (const Vec &input, const Vec &train_costs)

Static Protected Member Functions

    static void declareOptions (OptionList &ol)
        Declares the class options.
Protected Attributes

    TVec< Vec > activation_gradients
        Gradients of cost wrt the activations (outputs of params).
    TVec< Vec > expectation_gradients
        Gradients of cost wrt the expectations (outputs of layers).
    Vec output_gradient
        Gradient wrt output activations.
    Vec pos_down_values
    Vec pos_up_values
Private Types

    typedef PDistribution inherited

Private Member Functions

    void build_ ()
        This does the actual building.
    void build_layers ()
        Build the layers.
    void build_params ()
        Build the parameters if needed.
Does the same thing as Hinton's deep belief nets.
Definition at line 62 of file HintonDeepBeliefNet.h.
typedef PDistribution PLearn::HintonDeepBeliefNet::inherited [private]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 64 of file HintonDeepBeliefNet.h.
PLearn::HintonDeepBeliefNet::HintonDeepBeliefNet ( )
Default constructor.
Definition at line 66 of file HintonDeepBeliefNet.cc.
References ptimer, PLearn::PLearner::random_gen, and use_sample_or_expectation.
    : learning_rate(0.),
      fine_tuning_learning_rate(-1.),
      fine_tuning_decrease_ct(0.),
      weight_decay(0.),
      sum_parallel_contributions(0),
      use_sample_or_expectation(4)
{
    use_sample_or_expectation[0] = 0;
    use_sample_or_expectation[1] = 1;
    use_sample_or_expectation[2] = 2;
    use_sample_or_expectation[3] = 0;

    random_gen = new PRandom();
    ptimer = new PTimer();
    ptimer->newTimer("training_time");
    ptimer->newTimer("test_time");
}
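For orientation, here is a minimal usage sketch. It is hypothetical: only HintonDeepBeliefNet and the options documented on this page come from the class itself; the layer classes (RBMBinomialLayer), the sizes, the option values and the commented-out PLearner calls are assumptions made for illustration.

    // Hypothetical configuration sketch -- helper classes and values are assumptions.
    PP<HintonDeepBeliefNet> dbn = new HintonDeepBeliefNet();

    dbn->learning_rate             = 0.01;           // greedy CD phase
    dbn->fine_tuning_learning_rate = 0.001;          // supervised fine-tuning phase
    dbn->initialization_method     = "uniform_sqrt"; // see declareOptions() below

    // Three unsupervised layers plus a 10-class target layer (sizes are illustrative).
    dbn->layers.resize(3);
    dbn->layers[0] = new RBMBinomialLayer();  dbn->layers[0]->size = 784;
    dbn->layers[1] = new RBMBinomialLayer();  dbn->layers[1]->size = 500;
    dbn->layers[2] = new RBMBinomialLayer();  dbn->layers[2]->size = 500;
    dbn->target_layer = new RBMMultinomialLayer();
    dbn->target_layer->size = 10;

    // One entry per greedy step (length n_layers-1); see train() below.
    dbn->training_schedule = TVec<int>(2);
    dbn->training_schedule[0] = 50000;
    dbn->training_schedule[1] = 100000;

    dbn->build();                        // inherited::build() then build_()
    // dbn->setTrainingSet(train_vmat);  // inherited from PLearner (assumed here)
    // dbn->nstages = 200000;
    // dbn->train();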
string PLearn::HintonDeepBeliefNet::_classname_ ( ) [static]
REDEFINE test FOR PARALLELIZATION OF THE TEST.
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
OptionList & PLearn::HintonDeepBeliefNet::_getOptionList_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
RemoteMethodMap & PLearn::HintonDeepBeliefNet::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
bool PLearn::HintonDeepBeliefNet::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
Object * PLearn::HintonDeepBeliefNet::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
void PLearn::HintonDeepBeliefNet::_static_initialize_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
void PLearn::HintonDeepBeliefNet::build ( ) [virtual]
Simply calls inherited::build() then build_().
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 197 of file HintonDeepBeliefNet.cc.
References PLearn::PDistribution::build(), and build_().
Referenced by PLearn::UnfrozenDeepBeliefNet::build().
{
    // ### Nothing to add here, simply calls build_().
    inherited::build();
    build_();
}
void PLearn::HintonDeepBeliefNet::build_ ( ) [private]
This does the actual building.
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 207 of file HintonDeepBeliefNet.cc.
References build_layers(), build_params(), PLearn::endl(), fine_tuning_learning_rate, initialization_method, layers, learning_rate, PLearn::TVec< T >::length(), PLearn::lowerstring(), n_layers, PLERROR, and training_schedule.
Referenced by build().
{ MODULE_LOG << "build_() called" << endl; n_layers = layers.length(); if( n_layers <= 1 ) return; if( fine_tuning_learning_rate < 0. ) fine_tuning_learning_rate = learning_rate; // check value of initialization_method string im = lowerstring( initialization_method ); if( im == "" || im == "uniform_sqrt" ) initialization_method = "uniform_sqrt"; else if( im == "uniform_linear" ) initialization_method = im; else if( im == "zero" ) initialization_method = im; else PLERROR( "RBMParameters::build_ - initialization_method\n" "\"%s\" unknown.\n", initialization_method.c_str() ); MODULE_LOG << " initialization_method = \"" << initialization_method << "\"" << endl; //TODO: build structure to store gradients during gradient descent if( training_schedule.length() != n_layers-1 ) training_schedule = TVec<int>( n_layers-1 ); MODULE_LOG << " training_schedule = " << training_schedule << endl; MODULE_LOG << endl; build_layers(); build_params(); }
void PLearn::HintonDeepBeliefNet::build_layers ( ) [private]
Build the layers.
Definition at line 242 of file HintonDeepBeliefNet.cc.
References PLearn::endl(), i, PLearn::PLearner::inputsize(), PLearn::PLearner::inputsize_, joint_layer, last_layer, layers, n_layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, PLASSERT, PLearn::PLearner::random_gen, setPredictorPredictedSizes(), and target_layer.
Referenced by build_().
{ MODULE_LOG << "build_layers() called" << endl; if( inputsize_ >= 0 ) { PLASSERT( layers[0]->size + target_layer->size == inputsize() ); setPredictorPredictedSizes( layers[0]->size, target_layer->size, false ); MODULE_LOG << " n_predictor = " << n_predictor << endl; MODULE_LOG << " n_predicted = " << n_predicted << endl; } for( int i=0 ; i<n_layers ; i++ ) layers[i]->random_gen = random_gen; target_layer->random_gen = random_gen; last_layer = layers[n_layers-1]; // concatenate target_layer and layers[n_layers-2] into joint_layer, // if it is not already done if( !joint_layer || joint_layer->sub_layers.size() !=2 || joint_layer->sub_layers[0] != target_layer || joint_layer->sub_layers[1] != layers[n_layers-2] ) { TVec< PP<RBMLayer> > the_sub_layers( 2 ); the_sub_layers[0] = target_layer; the_sub_layers[1] = layers[n_layers-2]; joint_layer = new RBMMixedLayer( the_sub_layers ); } joint_layer->random_gen = random_gen; }
void PLearn::HintonDeepBeliefNet::build_params ( ) [private]
Build the parameters if needed.
Definition at line 275 of file HintonDeepBeliefNet.cc.
References activation_gradients, PLearn::endl(), expectation_gradients, i, initialization_method, joint_params, last_layer, layers, PLearn::TVec< T >::length(), n_layers, PLearn::PDistribution::n_predicted, output_gradient, params, PLERROR, PLearn::PLearner::random_gen, PLearn::TVec< T >::resize(), target_layer, and target_params.
Referenced by build_().
{ MODULE_LOG << "build_params() called" << endl; if( params.length() == 0 ) { params.resize( n_layers-1 ); for( int i=0 ; i<n_layers-1 ; i++ ) params[i] = new RBMLLParameters(); } else if( params.length() != n_layers-1 ) PLERROR( "HintonDeepBeliefNet::build_params - params.length() should\n" "be equal to layers.length()-1 (%d != %d).\n", params.length(), n_layers-1 ); activation_gradients.resize( n_layers-1 ); expectation_gradients.resize( n_layers-1 ); output_gradient.resize( n_predicted ); for( int i=0 ; i<n_layers-1 ; i++ ) { //TODO: call changeOptions instead params[i]->down_units_types = layers[i]->units_types; params[i]->up_units_types = layers[i+1]->units_types; params[i]->initialization_method = initialization_method; params[i]->random_gen = random_gen; params[i]->build(); activation_gradients[i].resize( params[i]->down_layer_size ); expectation_gradients[i].resize( params[i]->down_layer_size ); } if( target_layer && !target_params ) target_params = new RBMLLParameters(); //TODO: call changeOptions instead target_params->down_units_types = target_layer->units_types; target_params->up_units_types = last_layer->units_types; target_params->initialization_method = initialization_method; target_params->random_gen = random_gen; target_params->build(); // build joint_params from params[n_layers-1] and target_params // if it is not already done if( !joint_params || joint_params->target_params != target_params || joint_params->cond_params != params[n_layers-2] ) { joint_params = new RBMJointLLParameters( target_params, params[n_layers-2] ); } joint_params->random_gen = random_gen; // share the biases for( int i=0 ; i<n_layers-2 ; i++ ) params[i]->up_units_bias = params[i+1]->down_units_bias; }
real PLearn::HintonDeepBeliefNet::cdf ( const Vec & y ) const [virtual]

Return cdf: P(Y<y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 372 of file HintonDeepBeliefNet.cc.
References PLERROR.
{
    PLERROR("cdf not implemented for HintonDeepBeliefNet");
    return 0;
}
string PLearn::HintonDeepBeliefNet::classname ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
Referenced by train().
void PLearn::HintonDeepBeliefNet::computeCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]
Compute a cost that depends on the type of the first output: NLL if it is the density or the log-density; NLL and class error if it is the expectation.
Reimplemented from PLearn::PDistribution.
Definition at line 919 of file HintonDeepBeliefNet.cc.
References PLearn::argmax(), c, PLearn::PDistribution::computeCostsFromOutputs(), i, PLearn::is_equal(), PLearn::PDistribution::n_predicted, PLearn::PDistribution::outputs_def, pl_log, PLASSERT, PLearn::PDistribution::predicted_part, PLearn::TVec< T >::resize(), PLearn::PDistribution::splitCond(), and PLearn::square().
{
    char c = outputs_def[0];
    if( c == 'l' || c == 'd' )
        inherited::computeCostsFromOutputs(input, output, target, costs);
    else if( c == 'e' )
    {
        costs.resize( 3 );
        splitCond(input);

        // actual_index is the actual 'target'
        int actual_index = argmax(predicted_part);
#ifdef BOUNDCHECK
        for( int i=0 ; i<n_predicted ; i++ )
            PLASSERT( is_equal( predicted_part[i], 0. ) ||
                      i == actual_index && is_equal( predicted_part[i], 1. ) );
#endif
        costs[0] = -pl_log( output[actual_index] );

        // predicted_index is the most probable predicted class
        int predicted_index = argmax(output);
        if( predicted_index == actual_index )
            costs[1] = 0;
        else
            costs[1] = 1;

        real expected_output = .0 ;
        real expected_teacher = .0 ;
        for(int i=0 ; i<n_predicted ; ++i)
        {
            expected_output += output[i] * i;
            expected_teacher += predicted_part[i] * i ;
        }
        costs[2] = square(expected_output - expected_teacher) ;
    }
}
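Spelled out, the third cost (reported as "WMSE" by getTestCostNames() below) is the squared difference between the expected class index under the predicted distribution and under the one-hot target; with p_i = output[i] and t_i = predicted_part[i]:

    costs[2] = ( sum_i i * p_i  -  sum_i i * t_i )^2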
void PLearn::HintonDeepBeliefNet::contrastiveDivergenceStep ( const PP< RBMLayer > & down_layer, const PP< RBMParameters > & parameters, const PP< RBMLayer > & up_layer ) [protected, virtual]
Definition at line 765 of file HintonDeepBeliefNet.cc.
References pos_down_values, pos_up_values, PLearn::TVec< T >::resize(), and use_sample_or_expectation.
Referenced by greedyStep(), and jointGreedyStep().
{
    // positive phase
    if( use_sample_or_expectation[0] == 0 )
        parameters->setAsDownInput( down_layer->expectation );
    else
    {
        down_layer->generateSample();
        parameters->setAsDownInput( down_layer->sample );
    }
    up_layer->getAllActivations( parameters );
    up_layer->computeExpectation();
    up_layer->generateSample();

    // accumulate stats using the right vector (sample or expectation)
    // we store a copy of positive phase values
    pos_down_values.resize( down_layer->size );
    pos_up_values.resize( up_layer->size );

    if( use_sample_or_expectation[0] == 2 )
        pos_down_values << down_layer->sample;
    else
        pos_down_values << down_layer->expectation;

    if( use_sample_or_expectation[1] == 2 )
        pos_up_values << up_layer->sample;
    else
        pos_up_values << up_layer->expectation;

    // down propagation
    if( use_sample_or_expectation[1] == 0 )
        parameters->setAsUpInput( up_layer->expectation );
    else
        parameters->setAsUpInput( up_layer->sample );

    down_layer->getAllActivations( parameters );
    down_layer->computeExpectation();
    down_layer->generateSample();

    // negative phase
    if( use_sample_or_expectation[2] == 0 )
        parameters->setAsDownInput( down_layer->expectation );
    else
        parameters->setAsDownInput( down_layer->sample );
    up_layer->getAllActivations( parameters );
    up_layer->computeExpectation();

    // accumulate stats using the right vector (sample or expectation)
    // no need to copy because the values won't change before update
    Vec neg_down_values;
    Vec neg_up_values;
    if( use_sample_or_expectation[2] == 2 )
        neg_down_values = down_layer->sample;
    else
        neg_down_values = down_layer->expectation;

    if( use_sample_or_expectation[3] == 2 )
        neg_up_values = up_layer->sample;
    else
        neg_up_values = up_layer->expectation;

    // update
    parameters->update(pos_down_values, pos_up_values,
                       neg_down_values, neg_up_values);
}
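For reference, this routine performs one step of CD-1 (contrastive divergence with a single Gibbs step). Assuming RBMParameters::update() applies the usual rule -- an assumption, since its body is not shown on this page -- the resulting weight update between the two layers is, up to the learning rate:

    delta_W  proportional to  pos_up_values * pos_down_values^T  -  neg_up_values * neg_down_values^T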
void PLearn::HintonDeepBeliefNet::declareOptions ( OptionList & ol ) [static, protected]
Declares the class options.
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 87 of file HintonDeepBeliefNet.cc.
References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PDistribution::declareOptions(), fine_tuning_decrease_ct, fine_tuning_learning_rate, initialization_method, joint_layer, joint_params, last_layer, layers, learning_rate, PLearn::OptionBase::learntoption, n_layers, PLearn::OptionBase::nosave, params, sum_parallel_contributions, target_layer, target_params, training_schedule, use_sample_or_expectation, and weight_decay.
Referenced by PLearn::UnfrozenDeepBeliefNet::declareOptions().
{ declareOption(ol, "learning_rate", &HintonDeepBeliefNet::learning_rate, OptionBase::buildoption, "Learning rate used during greedy learning"); declareOption(ol, "fine_tuning_learning_rate", &HintonDeepBeliefNet::fine_tuning_learning_rate, OptionBase::buildoption, "Learning rate used during the gradient descent"); declareOption(ol, "fine_tuning_decrease_ct", &HintonDeepBeliefNet::fine_tuning_decrease_ct, OptionBase::buildoption, "Decrease constant used during the gradient descent\n" "(in fact, it will only be updated only once every epoch.\n"); declareOption(ol, "weight_decay", &HintonDeepBeliefNet::weight_decay, OptionBase::buildoption, "Weight decay"); declareOption(ol, "initialization_method", &HintonDeepBeliefNet::initialization_method, OptionBase::buildoption, "The method used to initialize the weights:\n" " - \"uniform_linear\" = a uniform law in [-1/d, 1/d]\n" " - \"uniform_sqrt\" = a uniform law in [-1/sqrt(d)," " 1/sqrt(d)]\n" " - \"zero\" = all weights are set to 0,\n" "where d = max( up_layer_size, down_layer_size ).\n"); declareOption(ol, "training_schedule", &HintonDeepBeliefNet::training_schedule, OptionBase::buildoption, "Total number of examples that should be seen until each" " layer\n" "have been greedily trained.\n" "We should always have training_schedule[i] <" " training_schedule[i+1].\n"); declareOption(ol, "layers", &HintonDeepBeliefNet::layers, OptionBase::buildoption, "Layers that learn representations of the input," " unsupervisedly.\n" "layers[0] is input layer.\n"); declareOption(ol, "target_layer", &HintonDeepBeliefNet::target_layer, OptionBase::buildoption, "Target (or label) layer"); declareOption(ol, "params", &HintonDeepBeliefNet::params, OptionBase::buildoption, "RBMParameters linking the unsupervised layers.\n" "params[i] links layers[i] and layers[i+1], except for" "params[n_layers-1],\n" "that links layers[n_layers-1] and last_layer.\n"); declareOption(ol, "target_params", &HintonDeepBeliefNet::target_params, OptionBase::buildoption, "Parameters linking target_layer and last_layer"); declareOption(ol, "use_sample_or_expectation", &HintonDeepBeliefNet::use_sample_or_expectation, OptionBase::buildoption, "Vector providing information on which information to use" " during the\n" "contrastive divergence step:\n" " - 0 means that we use the expectation only,\n" " - 1 means that we sample (for the next step), but we use" " the\n" " expectation in the CD update formula,\n" " - 2 means that we use the sample only.\n" "The order of the arguments matches the steps of CD:\n" " - visible unit during positive phase (you should keep it" " to 0),\n" " - hidden unit during positive phase,\n" " - visible unit during negative phase,\n" " - hidden unit during negative phase (you should keep it" " to 0).\n"); declareOption(ol, "sum_parallel_contributions", &HintonDeepBeliefNet::sum_parallel_contributions, OptionBase::buildoption, "Only used when USING_MPI for parallelization\n" "sum or average the delta-w contributions from different processes?\n"); declareOption(ol, "n_layers", &HintonDeepBeliefNet::n_layers, OptionBase::learntoption, "Number of unsupervised layers, including input layer"); declareOption(ol, "last_layer", &HintonDeepBeliefNet::last_layer, OptionBase::learntoption, "Last layer, learning joint representations of input and" " target"); declareOption(ol, "joint_layer", &HintonDeepBeliefNet::joint_layer, OptionBase::nosave, "Concatenation of target_layer and layers[n_layers-1]"); declareOption(ol, "joint_params", &HintonDeepBeliefNet::joint_params, 
OptionBase::nosave, "Parameters linking joint_layer and last_layer"); // Now call the parent class' declareOptions(). inherited::declareOptions(ol); }
static const PPath & PLearn::HintonDeepBeliefNet::declaringFile ( ) [inline, static]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 265 of file HintonDeepBeliefNet.h.
HintonDeepBeliefNet * PLearn::HintonDeepBeliefNet::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
real PLearn::HintonDeepBeliefNet::density ( const Vec & y ) const [virtual]

Return probability density p(y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 407 of file HintonDeepBeliefNet.cc.
References PLearn::argmax(), expectation(), i, PLearn::is_equal(), PLearn::PDistribution::n_predicted, PLASSERT, PLearn::TVec< T >::size(), and PLearn::PDistribution::store_expect.
Referenced by log_density().
{
    PLASSERT( y.size() == n_predicted );

    // TODO: 'y'[0] should rather be the integer "index" itself!
    int index = argmax( y );

    // If y != onehot( index ), then density is 0
    if( !is_equal( y[index], 1. ) )
        return 0;
    for( int i=0 ; i<n_predicted ; i++ )
        if( !is_equal( y[i], 0 ) && i != index )
            return 0;

    expectation( store_expect );
    return store_expect[index];
}
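A small calling sketch, equally hypothetical (the 10-class size and variable names are illustrative; setPredictor() and density() are the methods documented on this page):

    // Query p(class == 3 | x) when target_layer->size == 10.
    Vec x( dbn->layers[0]->size );   // predictor part: fill with the input features
    Vec y( 10 );                     // one-hot encoding of the class index
    y.fill(0.);
    y[3] = 1.;

    dbn->setPredictor(x);            // fix the conditioning part once
    real p = dbn->density(y);        // probability mass of class 3 given x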
void PLearn::HintonDeepBeliefNet::expectation ( Vec & mu ) const [virtual]
Return E[Y | x].
Reimplemented from PLearn::PDistribution.
Definition at line 380 of file HintonDeepBeliefNet.cc.
References i, joint_params, layers, n_layers, params, PLearn::PDistribution::predicted_size, PLearn::PDistribution::predictor_part, PLearn::TVec< T >::resize(), and target_layer.
Referenced by density(), fineTuneByGradientDescent(), greedyStep(), jointGreedyStep(), and PLearn::UnfrozenDeepBeliefNet::train().
{
    mu.resize( predicted_size );

    // Propagate input (predictor_part) until penultimate layer
    layers[0]->expectation << predictor_part;
    for( int i=0 ; i<n_layers-2 ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    // Set layers[n_layers-2]->expectation (penultimate) as conditionning input
    // of joint_params
    joint_params->setAsCondInput( layers[n_layers-2]->expectation );

    // Get all activations on target_layer from target_params
    target_layer->getAllActivations( (RBMLLParameters*) joint_params );
    target_layer->computeExpectation();

    mu << target_layer->expectation;
}
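As a compact summary of the loop above (hedged: it assumes binomial, i.e. sigmoid, units in the intermediate RBMLayers, whereas the actual unit types come from each layer's units_types), one propagation step computes

    h^(i+1) = sigmoid( W^(i) h^(i) + b^(i+1) ),    i = 0, ..., n_layers-3

and the returned mu is then the multinomial (softmax) expectation of target_layer, conditioned through joint_params on the penultimate representation.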
void PLearn::HintonDeepBeliefNet::fineTuneByGradientDescent ( const Vec & input, const Vec & train_costs ) [protected, virtual]
Definition at line 873 of file HintonDeepBeliefNet.cc.
References activation_gradients, PLearn::argmax(), expectation(), expectation_gradients, i, PLearn::is_equal(), joint_params, layers, n_layers, PLearn::PDistribution::n_predicted, output_gradient, params, pl_log, PLASSERT, PLearn::PDistribution::predicted_part, PLearn::PDistribution::splitCond(), and target_layer.
Referenced by train().
{
    // split input in predictor_part and predicted_part
    splitCond(input);

    // compute predicted_part expectation, conditioned on predictor_part
    // (forward pass)
    expectation( output_gradient );

    int actual_index = argmax(predicted_part);

    // update train_costs
#ifdef BOUNDCHECK
    for( int i=0 ; i<n_predicted ; i++ )
        PLASSERT( is_equal( predicted_part[i], 0. ) ||
                  i == actual_index && is_equal( predicted_part[i], 1. ) );
#endif
    train_costs[0] = -pl_log( target_layer->expectation[actual_index] );
    int predicted_index = argmax( target_layer->expectation );
    if( predicted_index == actual_index )
        train_costs[1] = 0;
    else
        train_costs[1] = 1;

    // output gradient
    output_gradient[actual_index] -= 1.;

    joint_params->bpropUpdate( layers[n_layers-2]->expectation,
                               target_layer->expectation,
                               expectation_gradients[n_layers-2],
                               output_gradient );

    for( int i=n_layers-2 ; i>0 ; i-- )
    {
        layers[i]->bpropUpdate( layers[i]->activations,
                                layers[i]->expectation,
                                activation_gradients[i],
                                expectation_gradients[i] );

        params[i-1]->bpropUpdate( layers[i-1]->expectation,
                                  layers[i]->activations,
                                  expectation_gradients[i-1],
                                  activation_gradients[i] );
    }
}
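The line output_gradient[actual_index] -= 1. turns the vector of predicted class probabilities into p - onehot(target), the classical gradient of the negative log-likelihood with respect to the pre-softmax activations of a multinomial output layer (assuming RBMJointLLParameters::bpropUpdate() interprets its output_gradient argument that way, which this page does not show):

    d(-log p_t) / d a_i  =  p_i - 1[i = t]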
void PLearn::HintonDeepBeliefNet::forget ( ) [virtual]
(Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
And sets 'stage' back to 0 (this is the stage of a fresh learner!).
Reimplemented from PLearn::PDistribution.
Definition at line 335 of file HintonDeepBeliefNet.cc.
References PLearn::endl(), i, layers, n_layers, params, ptimer, PLearn::PDistribution::resetGenerator(), PLearn::TVec< T >::resize(), PLearn::PLearner::seed_, PLearn::PLearner::stage, target_layer, and target_params.
{ MODULE_LOG << "forget() called" << endl; ptimer->resetAllTimers(); resetGenerator(seed_); for( int i=0 ; i<n_layers-1 ; i++ ) params[i]->forget(); for( int i=0 ; i<n_layers ; i++ ) layers[i]->reset(); #if USING_MPI global_params.resize(0); #endif target_params->forget(); target_layer->reset(); stage = 0; }
void PLearn::HintonDeepBeliefNet::generate ( Vec & y ) const [virtual]
Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 364 of file HintonDeepBeliefNet.cc.
References PLERROR.
{ PLERROR("generate not implemented for HintonDeepBeliefNet"); }
OptionList & PLearn::HintonDeepBeliefNet::getOptionList ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
OptionMap & PLearn::HintonDeepBeliefNet::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
RemoteMethodMap & PLearn::HintonDeepBeliefNet::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 61 of file HintonDeepBeliefNet.cc.
TVec< string > PLearn::HintonDeepBeliefNet::getTestCostNames ( ) const [virtual]
Return [ "NLL" ] (the only cost computed by a PDistribution).
Reimplemented from PLearn::PDistribution.
Definition at line 959 of file HintonDeepBeliefNet.cc.
References PLearn::TVec< T >::append(), c, and PLearn::PDistribution::outputs_def.
Referenced by getTrainCostNames().
{
    char c = outputs_def[0];
    TVec<string> result;
    if( c == 'l' || c == 'd' )
        result.append( "NLL" );
    else if( c == 'e' )
    {
        result.append( "NLL" );
        result.append( "class_error" );
        result.append( "WMSE" );
    }
    result.append("time");
    return result;
}
TVec< string > PLearn::HintonDeepBeliefNet::getTrainCostNames ( ) const [virtual]
Return [ ].
Reimplemented from PLearn::PDistribution.
Definition at line 975 of file HintonDeepBeliefNet.cc.
References getTestCostNames().
{ return getTestCostNames(); }
void PLearn::HintonDeepBeliefNet::greedyStep ( const Vec & predictor, int params_index ) [protected, virtual]
Definition at line 836 of file HintonDeepBeliefNet.cc.
References contrastiveDivergenceStep(), expectation(), i, layers, and params.
Referenced by train().
{
    // deterministic propagation until we reach index
    layers[0]->expectation << predictor;
    for( int i=0 ; i<index ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    // perform one step of CD
    contrastiveDivergenceStep( layers[index],
                               (RBMLLParameters*) params[index],
                               layers[index+1] );
}
void PLearn::HintonDeepBeliefNet::jointGreedyStep ( const Vec & input ) [protected, virtual]
Definition at line 853 of file HintonDeepBeliefNet.cc.
References contrastiveDivergenceStep(), expectation(), i, joint_layer, joint_params, last_layer, layers, n_layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, params, PLearn::TVec< T >::subVec(), and target_layer.
Referenced by train().
{
    // deterministic propagation until we reach n_layers-2, setting the input
    // of the "input" part of joint_layer
    layers[0]->expectation << input.subVec( 0, n_predictor );
    for( int i=0 ; i<n_layers-2 ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    // now fill the "target" part of joint_layer
    target_layer->expectation << input.subVec( n_predictor, n_predicted );

    contrastiveDivergenceStep( (RBMLayer *) joint_layer,
                               (RBMLLParameters *) joint_params,
                               last_layer );
}
real PLearn::HintonDeepBeliefNet::log_density ( const Vec & y ) const [virtual]

Return log of probability density log(p(y | x)).
Reimplemented from PLearn::PDistribution.
Definition at line 429 of file HintonDeepBeliefNet.cc.
References density(), and pl_log.
void PLearn::HintonDeepBeliefNet::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 453 of file HintonDeepBeliefNet.cc.
References PLearn::deepCopyField(), joint_layer, joint_params, last_layer, layers, PLearn::PDistribution::makeDeepCopyFromShallowCopy(), params, ptimer, target_layer, target_params, and training_schedule.
Referenced by PLearn::UnfrozenDeepBeliefNet::makeDeepCopyFromShallowCopy().
{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(ptimer, copies);
    deepCopyField(layers, copies);
    deepCopyField(last_layer, copies);
    deepCopyField(target_layer, copies);
    deepCopyField(joint_layer, copies);
    deepCopyField(params, copies);
    deepCopyField(joint_params, copies);
    deepCopyField(target_params, copies);
    deepCopyField(training_schedule, copies);
}
void PLearn::HintonDeepBeliefNet::setPredictor ( const Vec & predictor, bool call_parent = true ) const [virtual]
Set the value for the predictor part of a conditional probability.
Reimplemented from PLearn::PDistribution.
Definition at line 471 of file HintonDeepBeliefNet.cc.
References PLearn::PDistribution::setPredictor().
{
    if (call_parent)
        inherited::setPredictor(predictor, true);
    // ### Add here any specific code required by your subclass.
}
bool PLearn::HintonDeepBeliefNet::setPredictorPredictedSizes ( int the_predictor_size, int the_predicted_size, bool call_parent = true ) [virtual]
Set the 'predictor' and 'predicted' sizes for this distribution.
Reimplemented from PLearn::PDistribution.
Definition at line 482 of file HintonDeepBeliefNet.cc.
References layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, PLERROR, PLearn::PDistribution::setPredictorPredictedSizes(), PLearn::TVec< T >::size(), and target_layer.
Referenced by build_layers().
{
    bool sizes_have_changed = false;
    if (call_parent)
        sizes_have_changed = inherited::setPredictorPredictedSizes(
            the_predictor_size, the_predicted_size, true);

    // ### Add here any specific code required by your subclass.
    if( the_predictor_size >= 0 && the_predictor_size != layers[0]->size ||
        the_predicted_size >= 0 && the_predicted_size != target_layer->size )
        PLERROR( "HintonDeepBeliefNet::setPredictorPredictedSizes - \n"
                 "n_predictor should be equal to layer[0]->size (%d)\n"
                 "n_predicted should be equal to target_layer->size (%d).\n",
                 layers[0]->size, target_layer->size );

    n_predictor = layers[0]->size;
    n_predicted = target_layer->size;

    // Returned value.
    return sizes_have_changed;
}
real PLearn::HintonDeepBeliefNet::survival_fn ( const Vec & y ) const [virtual]

Return survival function: P(Y>y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 437 of file HintonDeepBeliefNet.cc.
References PLERROR.
{ PLERROR("survival_fn not implemented for HintonDeepBeliefNet"); return 0; }
void PLearn::HintonDeepBeliefNet::train ( ) [virtual]
The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.
Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 510 of file HintonDeepBeliefNet.cc.
References classname(), PLearn::endl(), PLearn::fast_exact_is_equal(), fine_tuning_decrease_ct, fine_tuning_learning_rate, fineTuneByGradientDescent(), PLearn::VMat::getExample(), greedyStep(), i, PLearn::PLearner::initTrain(), PLearn::PLearner::inputsize(), joint_params, jointGreedyStep(), learning_rate, PLearn::TVec< T >::length(), PLearn::VMat::length(), PLearn::min(), n_layers, PLearn::PDistribution::n_predictor, PLearn::PLearner::nstages, params, PLERROR, ptimer, PLearn::PLMPI::rank, PLearn::PLearner::report_progress, PLearn::TVec< T >::resize(), PLearn::sample(), PLearn::PLMPI::size, PLearn::PLearner::stage, PLearn::TVec< T >::subVec(), target_params, PLearn::PLearner::targetsize(), PLearn::tostring(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, training_schedule, and PLearn::ProgressBar::update().
{ MODULE_LOG << "train() called " << endl; // The role of the train method is to bring the learner up to // stage==nstages, updating train_stats with training costs measured // on-line in the process. /* TYPICAL CODE: static Vec input; // static so we don't reallocate memory each time... static Vec target; // (but be careful that static means shared!) input.resize(inputsize()); // the train_set's inputsize() target.resize(targetsize()); // the train_set's targetsize() real weight; // This generic PLearner method does a number of standard stuff useful for // (almost) any learner, and return 'false' if no training should take // place. See PLearner.h for more details. if (!initTrain()) return; while(stage<nstages) { // clear statistics of previous epoch train_stats->forget(); //... train for 1 stage, and update train_stats, // using train_set->getExample(input, target, weight) // and train_stats->update(train_costs) ++stage; train_stats->finalize(); // finalize statistics for this epoch } */ Vec input( inputsize() ); Vec target( targetsize() ); // unused real weight; // unused Vec train_costs(3); int nsamples = train_set->length(); ptimer->startTimer("training_time"); #if USING_MPI // initialize global parameters for allowing to easily share them across // multiple CPUs // wait until we can attach a gdb process //pout << "START WAITING..." << endl; //sleep(20); //pout << "DONE WAITING!" << endl; MPI_Barrier(MPI_COMM_WORLD); //int total_bsize=minibatch_size*PLMPI::size; int total_bsize=PLMPI::size; // forget(); // DEBUGGING TO GET REPRODUCIBLE RESULTS if (global_params.size()==0) { int n_params = joint_params->nParameters(1,1); for (int i=0;i<params.length()-1;i++) n_params += params[i]->nParameters(0,1); global_params.resize(n_params); previous_global_params.resize(n_params); Vec p=global_params; for (int i=0;i<params.length()-1;i++) p=params[i]->makeParametersPointHere(p,0,1); p=joint_params->makeParametersPointHere(p,1,1); if (p.length()!=0) PLERROR("HintonDeepBeliefNet: Inconsistencies between nParameters and makeParametersPointHere!"); } #endif MODULE_LOG << " nsamples = " << nsamples << endl; MODULE_LOG << " initial stage = " << stage << endl; MODULE_LOG << " objective: nstages = " << nstages << endl; if( !initTrain() ) { MODULE_LOG << "train() aborted" << endl; return; } ProgressBar* pb = 0; // clear stats of previous epoch train_stats->forget(); /***** initial greedy training *****/ for( int layer=0 ; layer < n_layers-2 ; layer++ ) { MODULE_LOG << "Training parameters between layers " << layer << " and " << layer+1 << endl; int end_stage = min( training_schedule[layer], nstages ); MODULE_LOG << " stage = " << stage << endl; MODULE_LOG << " end_stage = " << end_stage << endl; if( report_progress && stage < end_stage ) { pb = new ProgressBar( "Training layer "+tostring(layer) +" of "+classname(), end_stage - stage ); } params[layer]->learning_rate = learning_rate; #if USING_MPI // make a copy of the parameters as they were at the beginning of // the minibatch previous_global_params << global_params; #endif for( ; stage<end_stage ; stage++ ) { #if USING_MPI // only look at some of the examples, associated with this process // number (rank) if (stage%PLMPI::size==PLMPI::rank) { #endif // resetGenerator(1); // DEBUGGING HACK TO MAKE SURE RESULTS ARE INDEPENDENT OF PARALLELIZATION int sample = stage % nsamples; train_set->getExample(sample, input, target, weight); greedyStep( input.subVec(0, n_predictor), layer ); if( pb ) { if( layer == 0 ) pb->update( stage + 1 ); else pb->update( stage - 
training_schedule[layer-1] + 1 ); } #if USING_MPI } // time to share among processors if (stage%total_bsize==0 || stage==end_stage-1) shareParamsMPI(); #endif } if( pb ) { delete pb; pb = 0; } } /***** joint training *****/ MODULE_LOG << "Training joint parameters, between target," << " penultimate (" << n_layers-2 << ")," << endl << "and last (" << n_layers-1 << ") layers." << endl; int end_stage = min( training_schedule[n_layers-2], nstages ); MODULE_LOG << " stage = " << stage << endl; MODULE_LOG << " end_stage = " << end_stage << endl; if( report_progress && stage < end_stage ) pb = new ProgressBar( "Training joint layer (target and " +tostring(n_layers-2)+") of "+classname(), end_stage - stage ); joint_params->learning_rate = learning_rate; // target_params->learning_rate = learning_rate; int previous_stage = (n_layers < 3) ? 0 : training_schedule[n_layers-3]; int last = min(training_schedule[n_layers-2],nstages); for( ; stage<last ; stage++ ) { #if USING_MPI // only look at some of the examples, associated with this process number (rank) if (stage%PLMPI::size==PLMPI::rank) { #endif int sample = stage % nsamples; train_set->getExample(sample, input, target, weight); jointGreedyStep( input ); if( pb ) pb->update( stage - previous_stage + 1 ); #if USING_MPI } // time to share among processors if (stage%total_bsize==0 || stage==last-1) shareParamsMPI(); #endif } if( pb ) { delete pb; pb = 0; } /***** fine-tuning *****/ MODULE_LOG << "Fine-tuning all parameters, by gradient descent" << endl; int init_stage = stage; if( report_progress && stage < nstages ) pb = new ProgressBar( "Fine-tuning parameters of all layers of " +classname(), nstages - init_stage ); MODULE_LOG << " fine_tuning_learning_rate = " << fine_tuning_learning_rate << endl; for( int i=0 ; i<n_layers-1 ; i++ ) params[i]->learning_rate = fine_tuning_learning_rate; joint_params->learning_rate = fine_tuning_learning_rate; target_params->learning_rate = fine_tuning_learning_rate; int begin_sample = stage % nsamples; for( ; stage<nstages ; stage++ ) { #if USING_MPI // only look at some of the examples, associated with this process number (rank) if (stage%PLMPI::size==PLMPI::rank) { #endif int sample = stage % nsamples; if( sample == begin_sample ) train_stats->forget(); if( !fast_exact_is_equal( fine_tuning_learning_rate, 0. ) ) { real cur_learning_rate = fine_tuning_learning_rate / (1. + fine_tuning_decrease_ct*(stage-init_stage) ); for( int i=0 ; i<n_layers-1 ; i++ ) params[i]->learning_rate = cur_learning_rate; joint_params->learning_rate = cur_learning_rate; target_params->learning_rate = cur_learning_rate; } train_set->getExample(sample, input, target, weight); fineTuneByGradientDescent( input, train_costs ); train_stats->update( train_costs ); if( pb ) pb->update( stage - init_stage + 1 ); #if USING_MPI } // time to share among processors if (stage%total_bsize==0 || stage==nstages-1) shareParamsMPI(); #endif } if( pb ) delete pb; ptimer->stopTimer("training_time"); real training_time = ptimer->getTimer("training_time"); train_costs[2] = training_time; train_stats->update(train_costs); MODULE_LOG << "Training finished in " << endl << training_time << " seconds." << endl; train_stats->finalize(); // finalize statistics }
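Concretely (an illustrative numerical example, not taken from the source): with n_layers = 3, training_schedule = [2000, 5000] and nstages = 10000, stages 0-1999 greedily train params[0] by contrastive divergence, stages 2000-4999 train the joint (target + penultimate) layer through jointGreedyStep(), and stages 5000-9999 perform supervised fine-tuning by gradient descent. Each stage processes a single example (stage % nsamples), not an epoch.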
void PLearn::HintonDeepBeliefNet::variance ( Mat & cov ) const [virtual]
Reimplemented from PLearn::PDistribution.
Definition at line 445 of file HintonDeepBeliefNet.cc.
References PLERROR.
{ PLERROR("variance not implemented for HintonDeepBeliefNet"); }
StaticInitializer PLearn::HintonDeepBeliefNet::_static_initializer_ [static]

Reimplemented from PLearn::PDistribution.
Reimplemented in PLearn::UnfrozenDeepBeliefNet.
Definition at line 265 of file HintonDeepBeliefNet.h.
TVec< Vec > PLearn::HintonDeepBeliefNet::activation_gradients [mutable, protected]
gradients of cost wrt the activations (output of params)
Definition at line 281 of file HintonDeepBeliefNet.h.
Referenced by build_params(), and fineTuneByGradientDescent().
TVec< Vec > PLearn::HintonDeepBeliefNet::expectation_gradients [mutable, protected]
gradients of cost wrt the expectations (output of layers)
Definition at line 284 of file HintonDeepBeliefNet.h.
Referenced by build_params(), and fineTuneByGradientDescent().
real PLearn::HintonDeepBeliefNet::fine_tuning_decrease_ct

Definition at line 76 of file HintonDeepBeliefNet.h.
Referenced by declareOptions(), and train().
real PLearn::HintonDeepBeliefNet::fine_tuning_learning_rate

The learning rate used during the gradient descent.
Definition at line 73 of file HintonDeepBeliefNet.h.
Referenced by build_(), declareOptions(), and train().
string PLearn::HintonDeepBeliefNet::initialization_method

The method used to initialize the weights: "uniform_linear", "uniform_sqrt" or "zero".
Definition at line 86 of file HintonDeepBeliefNet.h.
Referenced by build_(), build_params(), and declareOptions().
PP< RBMMixedLayer > PLearn::HintonDeepBeliefNet::joint_layer

Concatenation of target_layer and layers[n_layers-2].
Definition at line 103 of file HintonDeepBeliefNet.h.
Referenced by build_layers(), declareOptions(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), and PLearn::UnfrozenDeepBeliefNet::train().
PP< RBMJointLLParameters > PLearn::HintonDeepBeliefNet::joint_params

Parameters linking joint_layer and last_layer.
Contains params[n_layers-2] and target_params.
Definition at line 114 of file HintonDeepBeliefNet.h.
Referenced by PLearn::UnfrozenDeepBeliefNet::build_(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), train(), and PLearn::UnfrozenDeepBeliefNet::train().
PP< RBMLayer > PLearn::HintonDeepBeliefNet::last_layer

Last layer, learning joint representations of input and target.
Definition at line 97 of file HintonDeepBeliefNet.h.
Referenced by build_layers(), build_params(), declareOptions(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), and PLearn::UnfrozenDeepBeliefNet::train().
TVec< PP< RBMLayer > > PLearn::HintonDeepBeliefNet::layers

Layers that learn representations of the input; layers[0] is the input layer, layers[n_layers-1] is the last layer.
Definition at line 94 of file HintonDeepBeliefNet.h.
Referenced by build_(), build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), greedyStep(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), setPredictorPredictedSizes(), and PLearn::UnfrozenDeepBeliefNet::train().
real PLearn::HintonDeepBeliefNet::learning_rate

The learning rate used during greedy learning.
Definition at line 70 of file HintonDeepBeliefNet.h.
Referenced by build_(), PLearn::UnfrozenDeepBeliefNet::build_(), declareOptions(), PLearn::UnfrozenDeepBeliefNet::declareOptions(), and train().
int PLearn::HintonDeepBeliefNet::n_layers

Number of layers, including input layer and last layer, but not target layer.
Definition at line 90 of file HintonDeepBeliefNet.h.
Referenced by build_(), PLearn::UnfrozenDeepBeliefNet::build_(), build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), jointGreedyStep(), train(), and PLearn::UnfrozenDeepBeliefNet::train().
Vec PLearn::HintonDeepBeliefNet::output_gradient [mutable, protected]
gradient wrt output activations
Definition at line 287 of file HintonDeepBeliefNet.h.
Referenced by build_params(), and fineTuneByGradientDescent().
TVec< PP< RBMLLParameters > > PLearn::HintonDeepBeliefNet::params

RBMParameters linking the unsupervised layers.
params[i] links layers[i] and layers[i+1]
Definition at line 107 of file HintonDeepBeliefNet.h.
Referenced by PLearn::UnfrozenDeepBeliefNet::build_(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), greedyStep(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), train(), and PLearn::UnfrozenDeepBeliefNet::train().
Vec PLearn::HintonDeepBeliefNet::pos_down_values [mutable, protected]
Definition at line 290 of file HintonDeepBeliefNet.h.
Referenced by contrastiveDivergenceStep().
Vec PLearn::HintonDeepBeliefNet::pos_up_values [mutable, protected]
Definition at line 291 of file HintonDeepBeliefNet.h.
Referenced by contrastiveDivergenceStep().
PP< PTimer > PLearn::HintonDeepBeliefNet::ptimer

Definition at line 138 of file HintonDeepBeliefNet.h.
Referenced by forget(), HintonDeepBeliefNet(), makeDeepCopyFromShallowCopy(), and train().
bool PLearn::HintonDeepBeliefNet::sum_parallel_contributions

Only used when USING_MPI for parallelization: sum or average the delta-w contributions from different processes?
Definition at line 118 of file HintonDeepBeliefNet.h.
Referenced by declareOptions().
PP< RBMMultinomialLayer > PLearn::HintonDeepBeliefNet::target_layer

Target (or label) layer.
Definition at line 100 of file HintonDeepBeliefNet.h.
Referenced by build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), setPredictorPredictedSizes(), and PLearn::UnfrozenDeepBeliefNet::train().
PP< RBMLLParameters > PLearn::HintonDeepBeliefNet::target_params

Parameters linking target_layer and last_layer.
Definition at line 110 of file HintonDeepBeliefNet.h.
Referenced by build_params(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), and train().
TVec< int > PLearn::HintonDeepBeliefNet::training_schedule

Number of examples to use during each of the different greedy steps of the training phase.
Definition at line 122 of file HintonDeepBeliefNet.h.
Referenced by build_(), declareOptions(), PLearn::UnfrozenDeepBeliefNet::declareOptions(), makeDeepCopyFromShallowCopy(), and train().
TVec< int > PLearn::HintonDeepBeliefNet::use_sample_or_expectation

Vector providing information on which information to use during the contrastive divergence step (0 = expectation only, 1 = sample for propagation but expectation in the CD update, 2 = sample only).
Definition at line 135 of file HintonDeepBeliefNet.h.
Referenced by contrastiveDivergenceStep(), declareOptions(), and HintonDeepBeliefNet().
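The constructor already sets the default [0, 1, 2, 0]; as a concrete illustration, the same configuration can be written out explicitly before build() (hypothetical snippet mirroring those defaults):

    // Entries follow the CD steps listed in declareOptions():
    // [0] visible, positive phase; [1] hidden, positive phase;
    // [2] visible, negative phase; [3] hidden, negative phase.
    dbn->use_sample_or_expectation.resize(4);
    dbn->use_sample_or_expectation[0] = 0;  // expectation only
    dbn->use_sample_or_expectation[1] = 1;  // sample for propagation, expectation in the update
    dbn->use_sample_or_expectation[2] = 2;  // sample only
    dbn->use_sample_or_expectation[3] = 0;  // expectation only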
real PLearn::HintonDeepBeliefNet::weight_decay

The weight decay.
Definition at line 79 of file HintonDeepBeliefNet.h.
Referenced by declareOptions().