PLearn 0.1
Does the same thing as Hinton's deep belief nets. More...
#include <GaussianDBNClassification.h>
Public Member Functions

GaussianDBNClassification ()
    Default constructor.
virtual real | density (const Vec &y) const
    Return probability density p(y | x).
virtual real | log_density (const Vec &y) const
    Return log of probability density log(p(y | x)).
virtual real | survival_fn (const Vec &y) const
    Return survival function: P(Y>y | x).
virtual real | cdf (const Vec &y) const
    Return cdf: P(Y<y | x).
virtual void | expectation (Vec &mu) const
    Return E[Y | x].
virtual void | variance (Mat &cov) const
    Return Var[Y | x].
virtual void | generate (Vec &y) const
    Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).
virtual bool | setPredictorPredictedSizes (int the_predictor_size, int the_predicted_size, bool call_parent=true)
    Set the 'predictor' and 'predicted' sizes for this distribution.
virtual void | setPredictor (const Vec &predictor, bool call_parent=true) const
    Set the value for the predictor part of a conditional probability.
virtual void | forget ()
    (Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
virtual void | train ()
    The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void | computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
    Compute a cost depending on the type of the first output: NLL if it is the density or the log-density; NLL and class error if it is the expectation.
virtual TVec< string > | getTestCostNames () const
    Return [ "NLL" ], plus [ "class_error" ] when the first output is the expectation.
virtual string | classname () const
virtual OptionList & | getOptionList () const
virtual OptionMap & | getOptionMap () const
virtual RemoteMethodMap & | getRemoteMethodMap () const
virtual GaussianDBNClassification * | deepCopy (CopiesMap &copies) const
virtual void | build ()
    Simply calls inherited::build() then build_().
virtual void | makeDeepCopyFromShallowCopy (CopiesMap &copies)
    Transforms a shallow copy into a deep copy.
Static Public Member Functions

static string | _classname_ ()
static OptionList & | _getOptionList_ ()
static RemoteMethodMap & | _getRemoteMethodMap_ ()
static Object * | _new_instance_for_typemap_ ()
static bool | _isa_ (const Object *o)
static void | _static_initialize_ ()
static const PPath & | declaringFile ()
Public Attributes

real | learning_rate
    The learning rate.
real | weight_decay
    The weight decay.
string | initialization_method
    The method used to initialize the weights: "uniform_linear", "uniform_sqrt" or "zero".
int | n_layers
    Number of layers, including input layer and last layer, but not target layer.
TVec< PP< RBMLayer > > | layers
    Layers that learn representations of the input; layers[0] is the input layer, layers[n_layers-1] is the last layer.
PP< RBMLayer > | last_layer
    Last layer, learning joint representations of input and target.
PP< RBMMultinomialLayer > | target_layer
    Target (or label) layer.
PP< RBMMixedLayer > | joint_layer
    Concatenation of target_layer and layers[n_layers-2].
TVec< PP< RBMLLParameters > > | params
    RBMParameters linking the unsupervised layers.
PP< RBMQLParameters > | input_params
    Parameters linking layers[0] and layers[1].
PP< RBMLLParameters > | target_params
    Parameters linking target_layer and last_layer.
PP< RBMJointLLParameters > | joint_params
    Parameters linking joint_layer and last_layer.
TVec< int > | training_schedule
    Number of examples to use during each of the different greedy steps of the training phase.
string | fine_tuning_method
    Method for fine-tuning the whole network after greedy learning.
bool | use_sample_rather_than_expectation_in_positive_phase_statistics
    In positive phase statistics, use output->sample * input rather than output->expectation * input.
Static Public Attributes

static StaticInitializer | _static_initializer_

Protected Member Functions

virtual void | greedyStep (const Vec &predictor, int params_index)
virtual void | jointGreedyStep (const Vec &input)
virtual void | fineTuneByGradientDescent (const Vec &input)

Static Protected Member Functions

static void | declareOptions (OptionList &ol)
    Declares the class options.

Protected Attributes

TVec< Vec > | activation_gradients
    Gradients of cost wrt. the activations (output of params).
TVec< Vec > | expectation_gradients
    Gradients of cost wrt. the expectations (output of layers).
Vec | output_gradient
    Gradient wrt. output activations.

Private Types

typedef PDistribution | inherited

Private Member Functions

void | build_ ()
    This does the actual building.
void | build_layers ()
    Build the layers.
void | build_params ()
    Build the parameters if needed.
Does the same thing as Hinton's deep belief nets.
Definition at line 67 of file GaussianDBNClassification.h.
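Example usage (not part of the generated documentation): a minimal sketch of driving this learner through the standard PLearner interface. The include path, the 'data' variable and the layer setup are assumptions of this write-up; only learning_rate, fine_tuning_method, n_layers, nstages, build(), setTrainingSet() and train() come from the documented API.

// Hedged sketch -- assumes PLearn is built and 'data' is a VMat whose rows
// hold the predictor part followed by a one-hot target.
#include <plearn_learners/distributions/GaussianDBNClassification.h>  // assumed path

using namespace PLearn;

void train_dbn( VMat data )
{
    PP<GaussianDBNClassification> dbn = new GaussianDBNClassification();

    dbn->learning_rate      = 0.01;
    dbn->fine_tuning_method = "EGD";

    // layers[0] is the (Gaussian) input layer, target_layer the class layer;
    // filling them with suitably sized RBMLayer objects is elided here.

    dbn->build();                  // build_() also defaults training_schedule
    dbn->setTrainingSet( data );
    dbn->nstages = dbn->n_layers;  // greedy stages plus fine tuning, see train()
    dbn->train();
}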
typedef PDistribution PLearn::GaussianDBNClassification::inherited [private]
Reimplemented from PLearn::PDistribution.
Definition at line 69 of file GaussianDBNClassification.h.
PLearn::GaussianDBNClassification::GaussianDBNClassification ( )
Default constructor.
Definition at line 63 of file GaussianDBNClassification.cc.
References PLearn::PLearner::random_gen.
    : learning_rate(0.),
      weight_decay(0.),
      use_sample_rather_than_expectation_in_positive_phase_statistics(false)
{
    random_gen = new PRandom();
}
string PLearn::GaussianDBNClassification::_classname_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
OptionList & PLearn::GaussianDBNClassification::_getOptionList_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
RemoteMethodMap & PLearn::GaussianDBNClassification::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
bool PLearn::GaussianDBNClassification::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
Object * PLearn::GaussianDBNClassification::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
void PLearn::GaussianDBNClassification::_static_initialize_ ( ) [static]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
void PLearn::GaussianDBNClassification::build ( ) [virtual]
Simply calls inherited::build() then build_().
Reimplemented from PLearn::PDistribution.
Definition at line 168 of file GaussianDBNClassification.cc.
References PLearn::PDistribution::build(), and build_().
{
    // ### Nothing to add here, simply calls build_().
    inherited::build();
    build_();
}
void PLearn::GaussianDBNClassification::build_ ( ) [private]
This does the actual building.
Reimplemented from PLearn::PDistribution.
Definition at line 178 of file GaussianDBNClassification.cc.
References build_layers(), build_params(), PLearn::endl(), fine_tuning_method, initialization_method, layers, PLearn::TVec< T >::length(), PLearn::lowerstring(), n_layers, PLERROR, and training_schedule.
Referenced by build().
{ MODULE_LOG << "build_() called" << endl; n_layers = layers.length(); if( n_layers <= 1 ) return; // check value of initialization_method string im = lowerstring( initialization_method ); if( im == "" || im == "uniform_sqrt" ) initialization_method = "uniform_sqrt"; else if( im == "uniform_linear" ) initialization_method = im; else if( im == "zero" ) initialization_method = im; else PLERROR( "RBMParameters::build_ - initialization_method\n" "\"%s\" unknown.\n", initialization_method.c_str() ); MODULE_LOG << " initialization_method = \"" << initialization_method << "\"" << endl; // check value of fine_tuning_method string ftm = lowerstring( fine_tuning_method ); if( ftm == "" | ftm == "none" ) fine_tuning_method = ""; else if( ftm == "cd" | ftm == "contrastive_divergence" ) fine_tuning_method = "CD"; else if( ftm == "egd" | ftm == "error_gradient_descent" ) fine_tuning_method = "EGD"; else if( ftm == "ws" | ftm == "wake_sleep" ) fine_tuning_method = "WS"; else PLERROR( "GaussianDBNClassification::build_ - fine_tuning_method \"%s\"\n" "is unknown.\n", fine_tuning_method.c_str() ); MODULE_LOG << " fine_tuning_method = \"" << fine_tuning_method << "\"" << endl; //TODO: build structure to store gradients during gradient descent if( training_schedule.length() != n_layers ) training_schedule = TVec<int>( n_layers, 1000000 ); MODULE_LOG << " training_schedule = " << training_schedule << endl; MODULE_LOG << endl; build_layers(); build_params(); }
void PLearn::GaussianDBNClassification::build_layers ( ) [private]
Build the layers.
Definition at line 225 of file GaussianDBNClassification.cc.
References PLearn::endl(), i, PLearn::PLearner::inputsize(), PLearn::PLearner::inputsize_, joint_layer, last_layer, layers, n_layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, PLASSERT, PLearn::PLearner::random_gen, setPredictorPredictedSizes(), and target_layer.
Referenced by build_().
{ MODULE_LOG << "build_layers() called" << endl; if( inputsize_ >= 0 ) { PLASSERT( layers[0]->size + target_layer->size == inputsize() ); setPredictorPredictedSizes( layers[0]->size, target_layer->size, false ); MODULE_LOG << " n_predictor = " << n_predictor << endl; MODULE_LOG << " n_predicted = " << n_predicted << endl; } for( int i=0 ; i<n_layers ; i++ ) layers[i]->random_gen = random_gen; target_layer->random_gen = random_gen; last_layer = layers[n_layers-1]; // concatenate target_layer and layers[n_layers-2] into joint_layer TVec< PP<RBMLayer> > the_sub_layers( 2 ); the_sub_layers[0] = target_layer; the_sub_layers[1] = layers[n_layers-2]; joint_layer = new RBMMixedLayer( the_sub_layers ); joint_layer->random_gen = random_gen; }
void PLearn::GaussianDBNClassification::build_params ( ) [private]
Build the parameters if needed.
Definition at line 251 of file GaussianDBNClassification.cc.
References activation_gradients, PLearn::endl(), expectation_gradients, i, initialization_method, input_params, joint_params, last_layer, layers, learning_rate, PLearn::TVec< T >::length(), n_layers, PLearn::PDistribution::n_predicted, output_gradient, params, PLERROR, PLearn::PLearner::random_gen, PLearn::TVec< T >::resize(), target_layer, and target_params.
Referenced by build_().
{ MODULE_LOG << "build_params() called" << endl; if( params.length() == 0 ) { input_params = new RBMQLParameters() ; params.resize( n_layers-1 ); for( int i=1 ; i<n_layers-1 ; i++ ) params[i] = new RBMLLParameters(); // params[0] is not being using, it is not being created } else if( params.length() != n_layers-1 ) PLERROR( "GaussianDBNClassification::build_params - params.length() should\n" "be equal to layers.length()-1 (%d != %d).\n", params.length(), n_layers-1 ); activation_gradients.resize( n_layers-1 ); expectation_gradients.resize( n_layers-1 ); output_gradient.resize( n_predicted ); input_params->down_units_types = layers[0]->units_types; input_params->up_units_types = layers[1]->units_types; input_params->learning_rate = learning_rate; input_params->initialization_method = initialization_method; input_params->random_gen = random_gen; input_params->build(); activation_gradients[0].resize( input_params->down_layer_size ); expectation_gradients[0].resize( input_params->down_layer_size ); for( int i=1 ; i<n_layers-1 ; i++ ) { //TODO: call changeOptions instead params[i]->down_units_types = layers[i]->units_types; params[i]->up_units_types = layers[i+1]->units_types; params[i]->learning_rate = learning_rate; params[i]->initialization_method = initialization_method; params[i]->random_gen = random_gen; params[i]->build(); activation_gradients[i].resize( params[i]->down_layer_size ); expectation_gradients[i].resize( params[i]->down_layer_size ); } if( target_layer && !target_params ) target_params = new RBMLLParameters(); //TODO: call changeOptions instead target_params->down_units_types = target_layer->units_types; target_params->up_units_types = last_layer->units_types; target_params->learning_rate = learning_rate; target_params->initialization_method = initialization_method; target_params->random_gen = random_gen; target_params->build(); // build joint_params from params[n_layers-1] and target_params joint_params = new RBMJointLLParameters( target_params, params[n_layers-2] ); joint_params->learning_rate = learning_rate; joint_params->random_gen = random_gen; }
real PLearn::GaussianDBNClassification::cdf ( const Vec & y ) const [virtual]

Return cdf: P(Y<y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 352 of file GaussianDBNClassification.cc.
References PLERROR.
{ PLERROR("cdf not implemented for GaussianDBNClassification"); return 0; }
string PLearn::GaussianDBNClassification::classname ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
Referenced by train().
void PLearn::GaussianDBNClassification::computeCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]
Compute a cost depending on the type of the first output: NLL if it is the density or the log-density; NLL and class error if it is the expectation.
Reimplemented from PLearn::PDistribution.
Definition at line 827 of file GaussianDBNClassification.cc.
References PLearn::argmax(), c, PLearn::PDistribution::computeCostsFromOutputs(), i, PLearn::is_equal(), PLearn::PDistribution::n_predicted, PLearn::PDistribution::outputs_def, pl_log, PLASSERT, PLearn::PDistribution::predicted_part, PLearn::TVec< T >::resize(), and PLearn::PDistribution::splitCond().
{
    char c = outputs_def[0];
    if( c == 'l' || c == 'd' )
        inherited::computeCostsFromOutputs(input, output, target, costs);
    else if( c == 'e' )
    {
        costs.resize( 2 );
        splitCond(input);

        // actual_index is the actual 'target'
        int actual_index = argmax(predicted_part);
#ifdef BOUNDCHECK
        for( int i=0 ; i<n_predicted ; i++ )
            PLASSERT( is_equal( predicted_part[i], 0. ) ||
                      ( i == actual_index &&
                        is_equal( predicted_part[i], 1. ) ) );
#endif
        costs[0] = -pl_log( output[actual_index] );

        // predicted_index is the most probable predicted class
        int predicted_index = argmax(output);
        if( predicted_index == actual_index )
            costs[1] = 0;
        else
            costs[1] = 1;
    }
}
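As a usage illustration (an assumption-laden sketch, not from the PLearn sources): when outputs_def[0] == 'e', the two costs can be obtained for one example as follows; 'dbn' and the vector contents are placeholders.

// Hedged sketch -- 'dbn' is a trained GaussianDBNClassification.
Vec input( dbn->inputsize() );   // predictor part followed by one-hot target
Vec target;                      // unused by this method
Vec output( dbn->outputsize() );
Vec costs;

dbn->computeOutput( input, output );  // here: E[Y | x], one entry per class
dbn->computeCostsFromOutputs( input, output, target, costs );
// costs[0] is the NLL, costs[1] the class error (0 or 1)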
void PLearn::GaussianDBNClassification::declareOptions ( OptionList & ol ) [static, protected]
Declares the class options.
Reimplemented from PLearn::PDistribution.
Definition at line 74 of file GaussianDBNClassification.cc.
References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PDistribution::declareOptions(), fine_tuning_method, initialization_method, input_params, joint_layer, joint_params, last_layer, layers, learning_rate, PLearn::OptionBase::learntoption, n_layers, params, target_layer, target_params, training_schedule, use_sample_rather_than_expectation_in_positive_phase_statistics, and weight_decay.
{
    declareOption(ol, "learning_rate",
                  &GaussianDBNClassification::learning_rate,
                  OptionBase::buildoption,
                  "Learning rate");

    declareOption(ol, "weight_decay", &GaussianDBNClassification::weight_decay,
                  OptionBase::buildoption,
                  "Weight decay");

    declareOption(ol, "initialization_method",
                  &GaussianDBNClassification::initialization_method,
                  OptionBase::buildoption,
                  "The method used to initialize the weights:\n"
                  "  - \"uniform_linear\" = a uniform law in [-1/d, 1/d]\n"
                  "  - \"uniform_sqrt\"   = a uniform law in"
                  " [-1/sqrt(d), 1/sqrt(d)]\n"
                  "  - \"zero\"           = all weights are set to 0,\n"
                  "where d = max( up_layer_size, down_layer_size ).\n");

    declareOption(ol, "training_schedule",
                  &GaussianDBNClassification::training_schedule,
                  OptionBase::buildoption,
                  "Number of examples to use during each of the different"
                  " greedy\n"
                  "steps of the training phase.\n");

    declareOption(ol, "fine_tuning_method",
                  &GaussianDBNClassification::fine_tuning_method,
                  OptionBase::buildoption,
                  "Method for fine-tuning the whole network after greedy"
                  " learning.\n"
                  "One of:\n"
                  "  - \"none\"\n"
                  "  - \"CD\" or \"contrastive_divergence\"\n"
                  "  - \"EGD\" or \"error_gradient_descent\"\n"
                  "  - \"WS\" or \"wake_sleep\".\n");

    declareOption(ol, "layers", &GaussianDBNClassification::layers,
                  OptionBase::buildoption,
                  "Layers that learn representations of the input,"
                  " unsupervisedly.\n"
                  "layers[0] is the input layer.\n");

    declareOption(ol, "target_layer", &GaussianDBNClassification::target_layer,
                  OptionBase::buildoption,
                  "Target (or label) layer");

    declareOption(ol, "params", &GaussianDBNClassification::params,
                  OptionBase::buildoption,
                  "RBMParameters linking the unsupervised layers.\n"
                  "params[i] links layers[i] and layers[i+1], except for\n"
                  "params[n_layers-2], which links layers[n_layers-2] and"
                  " last_layer.\n");

    declareOption(ol, "target_params",
                  &GaussianDBNClassification::target_params,
                  OptionBase::buildoption,
                  "Parameters linking target_layer and last_layer");

    declareOption(ol, "input_params",
                  &GaussianDBNClassification::input_params,
                  OptionBase::buildoption,
                  "Parameters linking layers[0] and layers[1]");

    declareOption(ol,
                  "use_sample_rather_than_expectation_in_positive_phase_statistics",
                  &GaussianDBNClassification::use_sample_rather_than_expectation_in_positive_phase_statistics,
                  OptionBase::buildoption,
                  "In positive phase statistics, use output->sample * input\n"
                  "rather than output->expectation * input.\n");

    declareOption(ol, "n_layers", &GaussianDBNClassification::n_layers,
                  OptionBase::learntoption,
                  "Number of unsupervised layers, including the input layer");

    declareOption(ol, "last_layer", &GaussianDBNClassification::last_layer,
                  OptionBase::learntoption,
                  "Last layer, learning joint representations of input and"
                  " target");

    declareOption(ol, "joint_layer", &GaussianDBNClassification::joint_layer,
                  OptionBase::learntoption,
                  "Concatenation of target_layer and layers[n_layers-2]");

    declareOption(ol, "joint_params", &GaussianDBNClassification::joint_params,
                  OptionBase::learntoption,
                  "Parameters linking joint_layer and last_layer");

    // Now call the parent class' declareOptions().
    inherited::declareOptions(ol);
}
static const PPath & PLearn::GaussianDBNClassification::declaringFile ( ) [inline, static]
Reimplemented from PLearn::PDistribution.
Definition at line 245 of file GaussianDBNClassification.h.
GaussianDBNClassification * PLearn::GaussianDBNClassification::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
real PLearn::GaussianDBNClassification::density ( const Vec & y ) const [virtual]

Return probability density p(y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 391 of file GaussianDBNClassification.cc.
References PLearn::argmax(), expectation(), i, PLearn::is_equal(), PLearn::PDistribution::n_predicted, PLASSERT, PLearn::TVec< T >::size(), and PLearn::PDistribution::store_expect.
Referenced by log_density().
{
    PLASSERT( y.size() == n_predicted );

    // TODO: y[0] should rather be the integer "index" itself!
    int index = argmax( y );

    // If y != onehot( index ), then density is 0
    if( !is_equal( y[index], 1. ) )
        return 0;
    for( int i=0 ; i<n_predicted ; i++ )
        if( !is_equal( y[i], 0 ) && i != index )
            return 0;

    expectation( store_expect );
    return store_expect[index];
}
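For orientation, a sketch of querying the density of a given class; the variable names and the class index k are illustrative, while setPredictor() and density() are the documented calls.

// Hedged sketch -- probability of class k under the conditional distribution.
Vec x( dbn->n_predictor );   // conditioning input (fill with a real example)
dbn->setPredictor( x );

Vec y( dbn->n_predicted );   // one-hot encoding of class k
y.clear();
y[k] = 1.;
real p = dbn->density( y );  // 0 unless y is exactly one-hot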
void PLearn::GaussianDBNClassification::expectation ( Vec & mu ) const [virtual]
Return E[Y | x].
Reimplemented from PLearn::PDistribution.
Definition at line 360 of file GaussianDBNClassification.cc.
References i, input_params, joint_params, layers, n_layers, params, PLearn::PDistribution::predicted_size, PLearn::PDistribution::predictor_part, PLearn::TVec< T >::resize(), and target_layer.
Referenced by density(), fineTuneByGradientDescent(), greedyStep(), and jointGreedyStep().
{
    mu.resize( predicted_size );

    // Propagate the input (predictor_part) up to the penultimate layer
    layers[0]->expectation << predictor_part;
    input_params->setAsDownInput( layers[0]->expectation );
    layers[1]->getAllActivations( (RBMQLParameters*) input_params );
    layers[1]->computeExpectation();

    for( int i=1 ; i<n_layers-2 ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    // Set layers[n_layers-2]->expectation (penultimate) as conditioning input
    // of joint_params
    joint_params->setAsCondInput( layers[n_layers-2]->expectation );

    // Get all activations on target_layer from target_params
    target_layer->getAllActivations( (RBMLLParameters*) joint_params );
    target_layer->computeExpectation();

    mu << target_layer->expectation;
}
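This forward pass is what backs computeOutput() when the expectation is requested; a minimal call sequence (a sketch, names illustrative):

// Hedged sketch -- conditional class probabilities for a predictor x.
Vec x( dbn->n_predictor );
dbn->setPredictor( x );      // stores x as predictor_part

Vec mu;
dbn->expectation( mu );      // resized inside; mu[i] = P(class i | x)
int best = argmax( mu );     // most probable class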
void PLearn::GaussianDBNClassification::fineTuneByGradientDescent ( const Vec & input ) [protected, virtual]
Definition at line 797 of file GaussianDBNClassification.cc.
References activation_gradients, PLearn::argmax(), expectation(), expectation_gradients, i, joint_params, layers, n_layers, output_gradient, params, PLearn::PDistribution::predicted_part, PLearn::PDistribution::splitCond(), and target_layer.
Referenced by train().
{
    // split input in predictor_part and predicted_part
    splitCond(input);

    // compute predicted_part expectation, conditioned on predictor_part
    // (forward pass)
    expectation( output_gradient );

    int actual_index = argmax(predicted_part);
    output_gradient[actual_index] -= 1.;

    joint_params->bpropUpdate( layers[n_layers-2]->expectation,
                               target_layer->expectation,
                               expectation_gradients[n_layers-2],
                               output_gradient );

    for( int i=n_layers-2 ; i>0 ; i-- )
    {
        layers[i]->bpropUpdate( layers[i]->activations,
                                layers[i]->expectation,
                                activation_gradients[i],
                                expectation_gradients[i] );
        params[i-1]->bpropUpdate( layers[i-1]->expectation,
                                  layers[i]->activations,
                                  expectation_gradients[i-1],
                                  activation_gradients[i] );
    }
}
void PLearn::GaussianDBNClassification::forget ( ) [virtual]
(Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
Also sets 'stage' back to 0 (the stage of a fresh learner).
Reimplemented from PLearn::PDistribution.
Definition at line 318 of file GaussianDBNClassification.cc.
References PLearn::endl(), i, input_params, layers, n_layers, params, PLearn::PDistribution::resetGenerator(), PLearn::PLearner::seed_, PLearn::PLearner::stage, target_layer, and target_params.
{ MODULE_LOG << "forget() called" << endl; resetGenerator(seed_); input_params->forget() ; for( int i=1 ; i<n_layers-1 ; i++ ) params[i]->forget(); for( int i=0 ; i<n_layers ; i++ ) layers[i]->reset(); target_params->forget(); target_layer->reset(); stage = 0; }
void PLearn::GaussianDBNClassification::generate ( Vec & y ) const [virtual]
Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 344 of file GaussianDBNClassification.cc.
References PLERROR.
{ PLERROR("generate not implemented for GaussianDBNClassification"); }
OptionList & PLearn::GaussianDBNClassification::getOptionList ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
OptionMap & PLearn::GaussianDBNClassification::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
RemoteMethodMap & PLearn::GaussianDBNClassification::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::PDistribution.
Definition at line 58 of file GaussianDBNClassification.cc.
TVec< string > PLearn::GaussianDBNClassification::getTestCostNames ( ) const [virtual]
Return [ "NLL" ] (the only cost computed by a PDistribution).
Reimplemented from PLearn::PDistribution.
Definition at line 858 of file GaussianDBNClassification.cc.
References PLearn::TVec< T >::append(), c, and PLearn::PDistribution::outputs_def.
{
    char c = outputs_def[0];
    TVec<string> result;
    if( c == 'l' || c == 'd' )
        result.append( "NLL" );
    else if( c == 'e' )
    {
        result.append( "NLL" );
        result.append( "class_error" );
    }
    return result;
}
void PLearn::GaussianDBNClassification::greedyStep ( const Vec & predictor, int params_index ) [protected, virtual]
Definition at line 672 of file GaussianDBNClassification.cc.
References PLearn::RBMQLParameters::accumulateNegStats(), PLearn::RBMQLParameters::accumulatePosStats(), expectation(), i, input_params, layers, params, PLearn::sample(), PLearn::RBMParameters::setAsDownInput(), PLearn::RBMParameters::setAsUpInput(), PLearn::RBMQLParameters::update(), and use_sample_rather_than_expectation_in_positive_phase_statistics.
Referenced by train().
{
    // deterministic propagation until we reach layer params_index
    layers[0]->expectation << predictor;
    input_params->setAsDownInput( layers[0]->expectation );
    layers[1]->getAllActivations( (RBMQLParameters*) input_params );
    layers[1]->computeExpectation();
    for( int i=1 ; i<params_index ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    if (params_index == 0)
    {
        // positive phase
        input_params->setAsDownInput( layers[params_index]->expectation );
        layers[params_index+1]->getAllActivations( (RBMQLParameters*) input_params );
        layers[params_index+1]->computeExpectation();
        layers[params_index+1]->generateSample();
        if (use_sample_rather_than_expectation_in_positive_phase_statistics)
            input_params->accumulatePosStats( layers[params_index]->expectation,
                                              layers[params_index+1]->sample );
        else
            input_params->accumulatePosStats( layers[params_index]->expectation,
                                              layers[params_index+1]->expectation );

        // down propagation
        input_params->setAsUpInput( layers[params_index+1]->sample );
        layers[params_index]->getAllActivations( (RBMQLParameters*) input_params );

        // negative phase
        layers[params_index]->generateSample();
        input_params->setAsDownInput( layers[params_index]->sample );
        layers[params_index+1]->getAllActivations( (RBMQLParameters*) input_params );
        layers[params_index+1]->computeExpectation();
        input_params->accumulateNegStats( layers[params_index]->sample,
                                          layers[params_index+1]->expectation );

        // update
        input_params->update();
    }
    else
    {
        // positive phase
        params[params_index]->setAsDownInput( layers[params_index]->expectation );
        layers[params_index+1]->getAllActivations( (RBMLLParameters*) params[params_index] );
        layers[params_index+1]->computeExpectation();
        layers[params_index+1]->generateSample();
        if (use_sample_rather_than_expectation_in_positive_phase_statistics)
            params[params_index]->accumulatePosStats( layers[params_index]->expectation,
                                                      layers[params_index+1]->sample );
        else
            params[params_index]->accumulatePosStats( layers[params_index]->expectation,
                                                      layers[params_index+1]->expectation );

        // down propagation
        params[params_index]->setAsUpInput( layers[params_index+1]->sample );
        layers[params_index]->getAllActivations( (RBMLLParameters*) params[params_index] );

        // negative phase
        layers[params_index]->generateSample();
        params[params_index]->setAsDownInput( layers[params_index]->sample );
        layers[params_index+1]->getAllActivations( (RBMLLParameters*) params[params_index] );
        layers[params_index+1]->computeExpectation();
        params[params_index]->accumulateNegStats( layers[params_index]->sample,
                                                  layers[params_index+1]->expectation );

        // update
        params[params_index]->update();
    }
}
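Both branches above perform one step of contrastive divergence (CD-1) on the RBM between layers params_index and params_index+1. For orientation (a standard CD-1 identity, stated here rather than taken from the PLearn sources), the statistics gathered by accumulatePosStats() and accumulateNegStats() drive a weight update of the form

\Delta W = \epsilon \left( \langle v h^\top \rangle_{\mathrm{data}} - \langle v h^\top \rangle_{\mathrm{reconstruction}} \right)

where v is the down layer, h the up layer and \epsilon the learning_rate; when use_sample_rather_than_expectation_in_positive_phase_statistics is set, the h in the data term is a sample instead of an expectation.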
void PLearn::GaussianDBNClassification::jointGreedyStep ( const Vec & input ) [protected, virtual]
Definition at line 749 of file GaussianDBNClassification.cc.
References PLearn::RBMLLParameters::accumulateNegStats(), PLearn::RBMLLParameters::accumulatePosStats(), expectation(), i, input_params, joint_layer, joint_params, last_layer, layers, n_layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, params, PLearn::RBMParameters::setAsDownInput(), PLearn::RBMParameters::setAsUpInput(), PLearn::TVec< T >::subVec(), target_layer, PLearn::RBMLLParameters::update(), and use_sample_rather_than_expectation_in_positive_phase_statistics.
Referenced by train().
{
    // deterministic propagation until we reach n_layers-2, setting the input
    // of the "input" part of joint_layer
    layers[0]->expectation << input.subVec( 0, n_predictor );
    input_params->setAsDownInput( layers[0]->expectation );
    layers[1]->getAllActivations( (RBMQLParameters*) input_params );
    layers[1]->computeExpectation();
    for( int i=1 ; i<n_layers-2 ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    // now fill the "target" part of joint_layer
    target_layer->expectation << input.subVec( n_predictor, n_predicted );

    // positive phase
    joint_params->setAsDownInput( joint_layer->expectation );
    last_layer->getAllActivations( (RBMLLParameters*) joint_params );
    last_layer->computeExpectation();
    last_layer->generateSample();
    if (use_sample_rather_than_expectation_in_positive_phase_statistics)
        joint_params->accumulatePosStats( joint_layer->expectation,
                                          last_layer->sample );
    else
        joint_params->accumulatePosStats( joint_layer->expectation,
                                          last_layer->expectation );

    // down propagation
    joint_params->setAsUpInput( last_layer->sample );
    joint_layer->getAllActivations( (RBMLLParameters*) joint_params );

    // negative phase
    joint_layer->generateSample();
    joint_params->setAsDownInput( joint_layer->sample );
    last_layer->getAllActivations( (RBMLLParameters*) joint_params );
    last_layer->computeExpectation();
    joint_params->accumulateNegStats( joint_layer->sample,
                                      last_layer->expectation );

    // update
    joint_params->update();
}
real PLearn::GaussianDBNClassification::log_density ( const Vec & y ) const [virtual]

Return log of probability density log(p(y | x)).
Reimplemented from PLearn::PDistribution.
Definition at line 413 of file GaussianDBNClassification.cc.
References density(), and pl_log.
void PLearn::GaussianDBNClassification::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::PDistribution.
Definition at line 437 of file GaussianDBNClassification.cc.
References PLearn::deepCopyField(), input_params, joint_layer, joint_params, last_layer, layers, PLearn::PDistribution::makeDeepCopyFromShallowCopy(), params, target_layer, target_params, and training_schedule.
{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(layers, copies);
    deepCopyField(last_layer, copies);
    deepCopyField(target_layer, copies);
    deepCopyField(joint_layer, copies);
    deepCopyField(params, copies);
    deepCopyField(joint_params, copies);
    deepCopyField(input_params, copies);
    deepCopyField(target_params, copies);
    deepCopyField(training_schedule, copies);
}
void PLearn::GaussianDBNClassification::setPredictor ( const Vec & predictor, bool call_parent = true ) const [virtual]
Set the value for the predictor part of a conditional probability.
Reimplemented from PLearn::PDistribution.
Definition at line 455 of file GaussianDBNClassification.cc.
References PLearn::PDistribution::setPredictor().
{
    if (call_parent)
        inherited::setPredictor(predictor, true);
    // ### Add here any specific code required by your subclass.
}
bool PLearn::GaussianDBNClassification::setPredictorPredictedSizes ( int the_predictor_size, int the_predicted_size, bool call_parent = true ) [virtual]
Set the 'predictor' and 'predicted' sizes for this distribution.
Reimplemented from PLearn::PDistribution.
Definition at line 466 of file GaussianDBNClassification.cc.
References layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, PLERROR, PLearn::PDistribution::setPredictorPredictedSizes(), PLearn::TVec< T >::size(), and target_layer.
Referenced by build_layers().
{
    bool sizes_have_changed = false;
    if (call_parent)
        sizes_have_changed = inherited::setPredictorPredictedSizes(
            the_predictor_size, the_predicted_size, true);

    // ### Add here any specific code required by your subclass.
    if( ( the_predictor_size >= 0 && the_predictor_size != layers[0]->size ) ||
        ( the_predicted_size >= 0 && the_predicted_size != target_layer->size ) )
        PLERROR( "GaussianDBNClassification::setPredictorPredictedSizes - \n"
                 "n_predictor should be equal to layers[0]->size (%d)\n"
                 "n_predicted should be equal to target_layer->size (%d).\n",
                 layers[0]->size, target_layer->size );

    n_predictor = layers[0]->size;
    n_predicted = target_layer->size;

    // Returned value.
    return sizes_have_changed;
}
real PLearn::GaussianDBNClassification::survival_fn ( const Vec & y ) const [virtual]

Return survival function: P(Y>y | x).
Reimplemented from PLearn::PDistribution.
Definition at line 421 of file GaussianDBNClassification.cc.
References PLERROR.
{ PLERROR("survival_fn not implemented for GaussianDBNClassification"); return 0; }
void PLearn::GaussianDBNClassification::train ( ) [virtual]
The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.
Reimplemented from PLearn::PDistribution.
Definition at line 494 of file GaussianDBNClassification.cc.
References classname(), PLearn::endl(), fine_tuning_method, fineTuneByGradientDescent(), PLearn::VMat::getExample(), greedyStep(), i, PLearn::PLearner::initTrain(), PLearn::PLearner::inputsize(), jointGreedyStep(), PLearn::VMat::length(), n_layers, PLearn::PDistribution::n_predictor, PLearn::PLearner::nstages, PLERROR, PLearn::PLearner::report_progress, PLearn::sample(), PLearn::PLearner::stage, PLearn::TVec< T >::subVec(), PLearn::PLearner::targetsize(), PLearn::tostring(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, training_schedule, and PLearn::ProgressBar::update().
{ MODULE_LOG << "train() called" << endl; // The role of the train method is to bring the learner up to // stage==nstages, updating train_stats with training costs measured // on-line in the process. /* TYPICAL CODE: static Vec input; // static so we don't reallocate memory each time... static Vec target; // (but be careful that static means shared!) input.resize(inputsize()); // the train_set's inputsize() target.resize(targetsize()); // the train_set's targetsize() real weight; // This generic PLearner method does a number of standard stuff useful for // (almost) any learner, and return 'false' if no training should take // place. See PLearner.h for more details. if (!initTrain()) return; while(stage<nstages) { // clear statistics of previous epoch train_stats->forget(); //... train for 1 stage, and update train_stats, // using train_set->getExample(input, target, weight) // and train_stats->update(train_costs) ++stage; train_stats->finalize(); // finalize statistics for this epoch } */ Vec input( inputsize() ); Vec target( targetsize() ); // unused real weight; // unused if( !initTrain() ) { MODULE_LOG << "train() aborted" << endl; return; } int nsamples = train_set->length(); int sample = 0; MODULE_LOG << " nsamples = " << nsamples << endl; // Let's define stage and nstages: // - 0: fresh state, nothing is done // - 1..n_layers-2: params[stage-1] is trained // - n_layers-1: joint_params is trained (including params[n_layers-2]) // - n_layers: after the fine tuning MODULE_LOG << "initial stage = " << stage << endl; MODULE_LOG << "objective: nstages = " << nstages << endl; for( ; stage < nstages ; stage++ ) { // clear stats of previous epoch train_stats->forget(); // loops over the training set, until training_schedule[stage] examples // have been seen. // TODO: modify the training set used? int layer = stage; int n_samples_to_see = training_schedule[stage]; // this progress bar shows the number of loops through the whole // training set ProgressBar* pb = 0; if( stage < n_layers-2 ) { MODULE_LOG << "Training parameters between layers " << stage << " and " << stage+1 << endl; if( report_progress ) pb = new ProgressBar( "Training " + classname() + " parameters between layers " + tostring(stage) + " and " + tostring(stage+1), n_samples_to_see ); int begin_sample = sample; int end_sample = begin_sample + n_samples_to_see; for( ; sample < end_sample ; sample++ ) { // sample is the index in the training set int i = sample % train_set->length(); train_set->getExample(i, input, target, weight); greedyStep( input.subVec(0, n_predictor), layer ); if( pb ) pb->update( sample - begin_sample + 1 ); } } else if( stage == n_layers-2 ) { MODULE_LOG << "Training joint parameters, between target," << " penultimate (" << n_layers-2 << ")," << endl << "and last (" << n_layers-1 << ") layers." 
<< endl; if( report_progress ) pb = new ProgressBar( "Training " + classname() + " parameters between target, " + tostring(stage) + " and " + tostring(stage+1) + " layers", n_samples_to_see ); int begin_sample = sample; int end_sample = begin_sample + n_samples_to_see; for( ; sample < end_sample ; sample++ ) { // sample is the index in the training set int i = sample % train_set->length(); train_set->getExample(i, input, target, weight); jointGreedyStep( input ); if( pb ) pb->update( sample - begin_sample + 1 ); } } else if( stage == n_layers-1 ) { MODULE_LOG << "Fine-tuning all parameters, using method " << fine_tuning_method << endl; if( fine_tuning_method == "" ) // do nothing sample += n_samples_to_see; else if( fine_tuning_method == "EGD" ) { if( report_progress ) pb = new ProgressBar( "Training all " + classname() + " parameters by fine tuning", n_samples_to_see ); /* pout << "==================" << endl << "Before update:" << endl << "up: " << joint_params->up_units_params << endl << "weights: " << endl << joint_params->weights << endl << "down: " << joint_params->down_units_params << endl << endl; // */ int begin_sample = sample; int end_sample = begin_sample + n_samples_to_see; for( ; sample < end_sample ; sample++ ) { // sample is the index in the training set int i = sample % train_set->length(); train_set->getExample(i, input, target, weight); fineTuneByGradientDescent( input ); if( pb ) pb->update( sample - begin_sample + 1 ); } /* pout << "-------" << endl << "After update:" << endl << "up: " << joint_params->up_units_params << endl << "weights: " << endl << joint_params->weights << endl << "down: " << joint_params->down_units_params << endl << endl; // */ } else PLERROR( "Fine-tuning methods other than \"EGD\" are not" " implemented yet." ); } train_stats->finalize(); // finalize statistics for this epoch } MODULE_LOG << endl; }
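To make the staging concrete, here is a sketch (an assumption of this write-up, not PLearn sample code) of the settings for a net with n_layers == 4; training_schedule must have one entry per layer, as enforced by build_().

// Hedged sketch -- staging for n_layers == 4:
//   stage 0              : greedyStep trains input_params (layers 0-1)
//   stage 1              : greedyStep trains params[1]    (layers 1-2)
//   stage 2 == n_layers-2: jointGreedyStep trains joint_params
//   stage 3 == n_layers-1: fine tuning ("EGD")
dbn->training_schedule = TVec<int>( 4, 50000 );  // examples seen per stage
dbn->nstages = 4;                                // run through fine tuning
dbn->train();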
void PLearn::GaussianDBNClassification::variance ( Mat & cov ) const [virtual]

Return Var[Y | x].
Reimplemented from PLearn::PDistribution.
Definition at line 429 of file GaussianDBNClassification.cc.
References PLERROR.
{ PLERROR("variance not implemented for GaussianDBNClassification"); }
StaticInitializer PLearn::GaussianDBNClassification::_static_initializer_ [static]

Reimplemented from PLearn::PDistribution.
Definition at line 245 of file GaussianDBNClassification.h.
TVec< Vec > PLearn::GaussianDBNClassification::activation_gradients [mutable, protected]

Gradients of cost wrt. the activations (output of params).
Definition at line 261 of file GaussianDBNClassification.h.
Referenced by build_params(), and fineTuneByGradientDescent().
TVec< Vec > PLearn::GaussianDBNClassification::expectation_gradients [mutable, protected]

Gradients of cost wrt. the expectations (output of layers).
Definition at line 264 of file GaussianDBNClassification.h.
Referenced by build_params(), and fineTuneByGradientDescent().
string PLearn::GaussianDBNClassification::fine_tuning_method

Method for fine-tuning the whole network after greedy learning.
One of: "none", "CD" (contrastive_divergence), "EGD" (error_gradient_descent), or "WS" (wake_sleep).
Definition at line 130 of file GaussianDBNClassification.h.
Referenced by build_(), declareOptions(), and train().
string PLearn::GaussianDBNClassification::initialization_method

The method used to initialize the weights: "uniform_linear", "uniform_sqrt" or "zero".
Definition at line 88 of file GaussianDBNClassification.h.
Referenced by build_(), build_params(), and declareOptions().
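Spelled out as a formula (reconstructed from the option help text in declareOptions()): with d = max(up_layer_size, down_layer_size), the initial weights are drawn as

w_{ij} \sim \mathcal{U}\left[ -\tfrac{1}{\sqrt{d}}, \tfrac{1}{\sqrt{d}} \right] \quad (\text{"uniform\_sqrt"}), \qquad w_{ij} \sim \mathcal{U}\left[ -\tfrac{1}{d}, \tfrac{1}{d} \right] \quad (\text{"uniform\_linear"}),

and w_{ij} = 0 for "zero".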
PP< RBMQLParameters > PLearn::GaussianDBNClassification::input_params

Parameters linking layers[0] and layers[1].
Definition at line 111 of file GaussianDBNClassification.h.
Referenced by build_params(), declareOptions(), expectation(), forget(), greedyStep(), jointGreedyStep(), and makeDeepCopyFromShallowCopy().
PP< RBMMixedLayer > PLearn::GaussianDBNClassification::joint_layer

Concatenation of target_layer and layers[n_layers-2].
Definition at line 105 of file GaussianDBNClassification.h.
Referenced by build_layers(), declareOptions(), jointGreedyStep(), and makeDeepCopyFromShallowCopy().
PP< RBMJointLLParameters > PLearn::GaussianDBNClassification::joint_params

Parameters linking joint_layer and last_layer.
Contains params[n_layers-2] and target_params.
Definition at line 118 of file GaussianDBNClassification.h.
Referenced by build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), jointGreedyStep(), and makeDeepCopyFromShallowCopy().
PP< RBMLayer > PLearn::GaussianDBNClassification::last_layer

Last layer, learning joint representations of input and target.
Definition at line 99 of file GaussianDBNClassification.h.
Referenced by build_layers(), build_params(), declareOptions(), jointGreedyStep(), and makeDeepCopyFromShallowCopy().
TVec< PP< RBMLayer > > PLearn::GaussianDBNClassification::layers

Layers that learn representations of the input; layers[0] is the input layer, layers[n_layers-1] is the last layer.
Definition at line 96 of file GaussianDBNClassification.h.
Referenced by build_(), build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), greedyStep(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), and setPredictorPredictedSizes().
real PLearn::GaussianDBNClassification::learning_rate

The learning rate.
Definition at line 78 of file GaussianDBNClassification.h.
Referenced by build_params(), and declareOptions().
int PLearn::GaussianDBNClassification::n_layers

Number of layers, including input layer and last layer, but not target layer.
Definition at line 92 of file GaussianDBNClassification.h.
Referenced by build_(), build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), jointGreedyStep(), and train().
Vec PLearn::GaussianDBNClassification::output_gradient [mutable, protected]

Gradient wrt. output activations.
Definition at line 267 of file GaussianDBNClassification.h.
Referenced by build_params(), and fineTuneByGradientDescent().
TVec< PP< RBMLLParameters > > PLearn::GaussianDBNClassification::params

RBMParameters linking the unsupervised layers.
params[i] links layers[i] and layers[i+1], for i > 0.
Definition at line 109 of file GaussianDBNClassification.h.
Referenced by build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), greedyStep(), jointGreedyStep(), and makeDeepCopyFromShallowCopy().
PP< RBMMultinomialLayer > PLearn::GaussianDBNClassification::target_layer

Target (or label) layer.
Definition at line 102 of file GaussianDBNClassification.h.
Referenced by build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), jointGreedyStep(), makeDeepCopyFromShallowCopy(), and setPredictorPredictedSizes().
PP< RBMLLParameters > PLearn::GaussianDBNClassification::target_params

Parameters linking target_layer and last_layer.
Definition at line 114 of file GaussianDBNClassification.h.
Referenced by build_params(), declareOptions(), forget(), and makeDeepCopyFromShallowCopy().
TVec< int > PLearn::GaussianDBNClassification::training_schedule

Number of examples to use during each of the different greedy steps of the training phase.
Definition at line 122 of file GaussianDBNClassification.h.
Referenced by build_(), declareOptions(), makeDeepCopyFromShallowCopy(), and train().
bool PLearn::GaussianDBNClassification::use_sample_rather_than_expectation_in_positive_phase_statistics

In positive phase statistics, use output->sample * input rather than output->expectation * input.
Definition at line 132 of file GaussianDBNClassification.h.
Referenced by declareOptions(), greedyStep(), and jointGreedyStep().
real PLearn::GaussianDBNClassification::weight_decay

The weight decay.
Definition at line 81 of file GaussianDBNClassification.h.
Referenced by declareOptions().