PLearn 0.1
PLearn::StackedFocusedAutoassociatorsNet Class Reference
Neural net, trained layer-wise in a greedy but focused fashion using autoassociators/RBMs and a supervised non-parametric gradient. More...
#include <StackedFocusedAutoassociatorsNet.h>
Public Member Functions

StackedFocusedAutoassociatorsNet ()
    Default constructor.
virtual int outputsize () const
    Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).
virtual void forget ()
    (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
virtual void train ()
    The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void computeOutput (const Vec &input, Vec &output) const
    Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
    Computes the costs from already computed output.
virtual void updateTrainSetRepresentations () const
    Precomputes the representations of the training set examples, to speed up nearest neighbors searches in that space.
virtual TVec< std::string > getTestCostNames () const
    Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
virtual TVec< std::string > getTrainCostNames () const
    Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
virtual void setTrainingSet (VMat training_set, bool call_forget=true)
    Declares the training set.
void greedyStep (const Vec &input, const Vec &target, int index, Vec train_costs, int this_stage, Vec similar_example, Vec dissimilar_example)
void fineTuningStep (const Vec &input, const Vec &target, Vec &train_costs, Vec similar_example, Vec dissimilar_example)
void computeRepresentation (const Vec &input, Vec &representation, int layer) const
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual StackedFocusedAutoassociatorsNet * deepCopy (CopiesMap &copies) const
virtual void build ()
    Finish building the object; just calls inherited::build() followed by build_().
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
    Transforms a shallow copy into a deep copy.
Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()
Public Attributes

real cd_learning_rate
    Contrastive divergence learning rate.
real cd_decrease_ct
    Contrastive divergence decrease constant.
real greedy_learning_rate
    The learning rate used during the autoassociator gradient descent training.
real greedy_decrease_ct
    The decrease constant of the learning rate used during the autoassociator gradient descent training.
real supervised_greedy_learning_rate
    Supervised, non-parametric, greedy learning rate.
real supervised_greedy_decrease_ct
    Supervised, non-parametric, greedy decrease constant.
real fine_tuning_learning_rate
    The learning rate used during the fine-tuning gradient descent.
real fine_tuning_decrease_ct
    The decrease constant of the learning rate used during the fine-tuning gradient descent.
TVec< int > training_schedule
    Number of examples to use during each phase of greedy pre-training.
TVec< PP< RBMLayer > > layers
    The layers of units in the network.
TVec< PP< RBMConnection > > connections
    The weights of the connections between the layers.
TVec< PP< RBMConnection > > reconstruction_connections
    The reconstruction weights of the autoassociators.
TVec< PP< RBMLayer > > unsupervised_layers
    Additional units for greedy unsupervised learning.
TVec< PP< RBMConnection > > unsupervised_connections
    Additional connections for greedy unsupervised learning.
int k_neighbors
    Number of good nearest neighbors to attract and bad nearest neighbors to repel.
int n_classes
    Number of classes.
real dissimilar_example_cost_precision
    Parameter that controls the importance of the dissimilar example cost.
bool do_not_use_knn_classifier
    Use a standard neural net architecture, not the nearest neighbor model.
real output_weights_l1_penalty_factor
    L1 penalty factor on the output weights.
real output_weights_l2_penalty_factor
    L2 penalty factor on the output weights.
int n_layers
    Number of layers.
Static Public Attributes

static StaticInitializer _static_initializer_
Static Protected Member Functions

static void declareOptions (OptionList &ol)
    Declares the class options.
Protected Attributes

TVec< Vec > activations
    Stores the activations of the input and hidden layers (at the input of the layers).
TVec< Vec > expectations
    Stores the expectations of the input and hidden layers (at the output of the layers).
TVec< Vec > activation_gradients
    Stores the gradient of the cost wrt the activations of the input and hidden layers (at the input of the layers).
TVec< Vec > expectation_gradients
    Stores the gradient of the cost wrt the expectations of the input and hidden layers (at the output of the layers).
Vec greedy_activation
    Stores the activation of the trained hidden layer during a greedy step.
Vec greedy_expectation
    Stores the expectation of the trained hidden layer during a greedy step.
Vec greedy_activation_gradient
    Stores the activation gradient of the trained hidden layer during a greedy step.
Vec greedy_expectation_gradient
    Stores the expectation gradient of the trained hidden layer during a greedy step.
Vec reconstruction_activations
    Reconstruction activations.
Vec reconstruction_activation_gradients
    Reconstruction activation gradients.
Vec reconstruction_expectation_gradients
    Reconstruction expectation gradients.
TVec< PP< RBMLayer > > greedy_layers
    Layers used for greedy learning.
TVec< PP< RBMConnection > > greedy_connections
    Connections used for greedy learning.
Vec similar_example_representation
    Similar example representation.
Vec dissimilar_example_representation
    Dissimilar example representation.
Vec input_representation
    Example representation.
Vec previous_input_representation
    Example representation at the previous layer, in a greedy step.
Vec dissimilar_gradient_contribution
    Dissimilar gradient contribution.
Vec pos_down_val
    Positive down statistic.
Vec pos_up_val
    Positive up statistic.
Vec neg_down_val
    Negative down statistic.
Vec neg_up_val
    Negative up statistic.
Vec final_cost_input
    Input of the cost function.
Vec final_cost_value
    Cost value.
Vec final_cost_gradient
    Cost gradient on the output layer.
TVec< PP< ClassSubsetVMatrix > > class_datasets
    Datasets for each class.
Mat other_classes_proportions
    Proportions of examples from the other classes (columns), for each class (rows).
TMat< int > nearest_neighbors_indices
    Nearest neighbors for each training example.
TVec< int > test_nearest_neighbors_indices
    Nearest neighbors for each test example.
TVec< int > test_votes
    Nearest neighbor votes for the test example.
Mat train_set_representations
    Data set mapped to the last hidden layer space.
VMat train_set_representations_vmat
TVec< int > train_set_targets
bool train_set_representations_up_to_date
    Indication that train_set_representations is up to date.
TVec< int > greedy_stages
    Stages of the different greedy phases.
int currently_trained_layer
    Currently trained layer (1 means the first hidden layer, n_layers means the output layer).
PP< OnlineLearningModule > final_module
    Output layer of the neural net.
PP< CostModule > final_cost
    Cost on the output layer of the neural net.
Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
    This does the actual building.
void build_layers_and_connections ()
void build_output_layer_and_cost ()
void setLearningRate (real the_learning_rate)
Detailed Description

Neural net, trained layer-wise in a greedy but focused fashion using autoassociators/RBMs and a supervised non-parametric gradient.
It is highly inspired by the StackedAutoassociators class, and can use the same RBMLayer and RBMConnection components.
Definition at line 67 of file StackedFocusedAutoassociatorsNet.h.
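For orientation, here is a minimal sketch of how such a learner might be assembled and trained in C++. The concrete layer/connection types, the sizes and option values, and the train_vmat variable are illustrative assumptions, not part of this class's documented API:

    // Hypothetical usage sketch (assumed types, sizes, and values).
    PP<StackedFocusedAutoassociatorsNet> net = new StackedFocusedAutoassociatorsNet();

    // 'layers' holds the input layer plus the hidden layers; the supervised
    // output layer is built internally and must not be included here.
    net->layers.resize(3);                  // input + 2 hidden layers
    net->connections.resize(2);             // one RBMConnection per adjacent pair
    net->reconstruction_connections.resize(2);
    // ... fill these with e.g. RBMBinomialLayer / RBMMatrixConnection objects
    //     whose sizes match (see build_layers_and_connections() below) ...

    net->n_classes = 10;                    // must be > 0 (checked in build_())
    net->k_neighbors = 1;
    net->greedy_learning_rate = 0.01;
    net->fine_tuning_learning_rate = 0.01;
    net->training_schedule.resize(2);       // one entry per hidden layer
    net->training_schedule.fill(10000);     // greedy examples per phase
    net->nstages = 30000;                   // number of fine-tuning steps

    net->setTrainingSet(train_vmat);        // also triggers build() and forget()
    net->train();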
typedef PLearner PLearn::StackedFocusedAutoassociatorsNet::inherited [private]
Reimplemented from PLearn::PLearner.
Definition at line 69 of file StackedFocusedAutoassociatorsNet.h.
PLearn::StackedFocusedAutoassociatorsNet::StackedFocusedAutoassociatorsNet ( )
Default constructor.
Definition at line 59 of file StackedFocusedAutoassociatorsNet.cc.
References PLearn::PLearner::nstages, and PLearn::PLearner::random_gen.
    : cd_learning_rate( 0. ),
      cd_decrease_ct( 0. ),
      greedy_learning_rate( 0. ),
      greedy_decrease_ct( 0. ),
      supervised_greedy_learning_rate( 0. ),
      supervised_greedy_decrease_ct( 0. ),
      fine_tuning_learning_rate( 0. ),
      fine_tuning_decrease_ct( 0. ),
      k_neighbors( 1 ),
      n_classes( -1 ),
      dissimilar_example_cost_precision( 2.77 ), // Value taken from original paper
      do_not_use_knn_classifier( false ),
      output_weights_l1_penalty_factor( 0 ),
      output_weights_l2_penalty_factor( 0 ),
      n_layers( 0 ),
      train_set_representations_up_to_date( false ),
      currently_trained_layer( 0 )
{
    // random_gen will be initialized in PLearner::build_()
    random_gen = new PRandom();
    nstages = 0;
}
string PLearn::StackedFocusedAutoassociatorsNet::_classname_ ( ) [static]
Reimplemented from PLearn::PLearner.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
OptionList & PLearn::StackedFocusedAutoassociatorsNet::_getOptionList_ ( ) [static]
Reimplemented from PLearn::PLearner.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
RemoteMethodMap & PLearn::StackedFocusedAutoassociatorsNet::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::PLearner.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
bool PLearn::StackedFocusedAutoassociatorsNet::_isa_ ( const Object * o ) [static]
Reimplemented from PLearn::PLearner.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
Object * PLearn::StackedFocusedAutoassociatorsNet::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
void PLearn::StackedFocusedAutoassociatorsNet::_static_initialize_ ( ) [static]
Reimplemented from PLearn::PLearner.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
void PLearn::StackedFocusedAutoassociatorsNet::build ( ) [virtual]
Finish building the object; just calls inherited::build() followed by build_().
Reimplemented from PLearn::PLearner.
Definition at line 524 of file StackedFocusedAutoassociatorsNet.cc.
References PLearn::PLearner::build(), and build_().
{
    inherited::build();
    build_();
}
void PLearn::StackedFocusedAutoassociatorsNet::build_ ( ) [private]
This does the actual building.
Reimplemented from PLearn::PLearner.
Definition at line 220 of file StackedFocusedAutoassociatorsNet.cc.
References build_layers_and_connections(), build_output_layer_and_cost(), PLearn::TVec< T >::clear(), currently_trained_layer, do_not_use_knn_classifier, PLearn::endl(), final_cost, final_module, greedy_stages, PLearn::PLearner::inputsize_, k_neighbors, layers, PLearn::TVec< T >::length(), n_classes, n_layers, PLERROR, PLearn::TVec< T >::resize(), PLearn::PLearner::stage, PLearn::PLearner::targetsize_, test_nearest_neighbors_indices, test_votes, train_set_representations_up_to_date, training_schedule, and PLearn::PLearner::weightsize_.
Referenced by build().
{
    // ### This method should do the real building of the object,
    // ### according to set 'options', in *any* situation.
    // ### Typical situations include:
    // ###  - Initial building of an object from a few user-specified options
    // ###  - Building of a "reloaded" object: i.e. from the complete set of
    // ###    all serialised options.
    // ###  - Updating or "re-building" of an object after a few "tuning"
    // ###    options have been modified.
    // ### You should assume that the parent class' build_() has already been
    // ### called.

    MODULE_LOG << "build_() called" << endl;

    if(inputsize_ > 0 && targetsize_ > 0)
    {
        // Initialize some learnt variables
        n_layers = layers.length();
        train_set_representations_up_to_date = false;

        if( n_classes <= 0 )
            PLERROR("StackedFocusedAutoassociatorsNet::build_() - \n"
                    "n_classes should be > 0.\n");
        test_votes.resize(n_classes);

        if( k_neighbors <= 0 )
            PLERROR("StackedFocusedAutoassociatorsNet::build_() - \n"
                    "k_neighbors should be > 0.\n");
        test_nearest_neighbors_indices.resize(k_neighbors);

        if( weightsize_ > 0 )
            PLERROR("StackedFocusedAutoassociatorsNet::build_() - \n"
                    "usage of weighted samples (weight size > 0) is not\n"
                    "implemented yet.\n");

        if( training_schedule.length() != n_layers-1 )
            PLERROR("StackedFocusedAutoassociatorsNet::build_() - \n"
                    "training_schedule should have %d elements.\n",
                    n_layers-1);

        if(greedy_stages.length() == 0)
        {
            greedy_stages.resize(n_layers-1);
            greedy_stages.clear();
        }

        if(stage > 0)
            currently_trained_layer = n_layers;
        else
        {
            currently_trained_layer = n_layers-1;
            while(currently_trained_layer > 1
                  && greedy_stages[currently_trained_layer-1] <= 0)
                currently_trained_layer--;
        }

        build_layers_and_connections();

        if( do_not_use_knn_classifier && (!final_module || !final_cost) )
            build_output_layer_and_cost();
    }
}
void PLearn::StackedFocusedAutoassociatorsNet::build_layers_and_connections ( ) [private]
Definition at line 333 of file StackedFocusedAutoassociatorsNet.cc.
References activation_gradients, activations, connections, PLearn::endl(), expectation_gradients, expectations, PLearn::fast_exact_is_equal(), greedy_connections, greedy_layers, greedy_learning_rate, i, PLearn::PLearner::inputsize_, layers, PLearn::TVec< T >::length(), n_layers, PLERROR, PLearn::PLearner::random_gen, reconstruction_connections, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), unsupervised_connections, and unsupervised_layers.
Referenced by build_().
{
    MODULE_LOG << "build_layers_and_connections() called" << endl;

    if( connections.length() != n_layers-1 )
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "there should be %d connections.\n",
                n_layers-1);

    if( !fast_exact_is_equal( greedy_learning_rate, 0 )
        && reconstruction_connections.length() != n_layers-1 )
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "there should be %d reconstruction connections.\n",
                n_layers-1);

    if( !( reconstruction_connections.length() == 0
           || reconstruction_connections.length() == n_layers-1 ) )
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "there should be either 0 or %d reconstruction connections.\n",
                n_layers-1);

    if(unsupervised_layers.length() != n_layers-1
       && unsupervised_layers.length() != 0)
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "there should be either 0 or %d unsupervised_layers.\n",
                n_layers-1);

    if(unsupervised_connections.length() != n_layers-1
       && unsupervised_connections.length() != 0)
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "there should be either 0 or %d unsupervised_connections.\n",
                n_layers-1);

    if(unsupervised_connections.length() != unsupervised_layers.length())
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "there should be as many unsupervised_connections as "
                "unsupervised_layers.\n");

    if(layers[0]->size != inputsize_)
        PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                "layers[0] should have a size of %d.\n",
                inputsize_);

    activations.resize( n_layers );
    expectations.resize( n_layers );
    activation_gradients.resize( n_layers );
    expectation_gradients.resize( n_layers );

    greedy_layers.resize(n_layers-1);
    greedy_connections.resize(n_layers-1);
    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        if( layers[i]->size != connections[i]->down_size )
            PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                    "connections[%i] should have a down_size of %d.\n",
                    i, layers[i]->size);

        if( connections[i]->up_size != layers[i+1]->size )
            PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                    "connections[%i] should have an up_size of %d.\n",
                    i, layers[i+1]->size);

        if(unsupervised_layers.length() != 0 &&
           unsupervised_connections.length() != 0 &&
           unsupervised_layers[i] && unsupervised_connections[i])
        {
            if( layers[i]->size != unsupervised_connections[i]->down_size )
                PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                        "connections[%i] should have a down_size of %d.\n",
                        i, unsupervised_layers[i]->size);

            if( unsupervised_connections[i]->up_size != unsupervised_layers[i]->size )
                PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                        "connections[%i] should have an up_size of %d.\n",
                        i, unsupervised_layers[i+1]->size);

            if( reconstruction_connections.length() != 0 )
            {
                if( layers[i+1]->size + unsupervised_layers[i]->size
                    != reconstruction_connections[i]->down_size )
                    PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                            "reconstruction_connections[%i] should have a down_size of %d.\n",
                            i, layers[i+1]->size + unsupervised_layers[i]->size);

                if( reconstruction_connections[i]->up_size != layers[i]->size )
                    PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                            "reconstruction_connections[%i] should have an up_size of %d.\n",
                            i, layers[i]->size);
            }

            if( !(unsupervised_layers[i]->random_gen) )
            {
                unsupervised_layers[i]->random_gen = random_gen;
                unsupervised_layers[i]->forget();
            }

            if( !(unsupervised_connections[i]->random_gen) )
            {
                unsupervised_connections[i]->random_gen = random_gen;
                unsupervised_connections[i]->forget();
            }

            PP<RBMMixedLayer> greedy_layer = new RBMMixedLayer();
            greedy_layer->sub_layers.resize(2);
            greedy_layer->sub_layers[0] = layers[i+1];
            greedy_layer->sub_layers[1] = unsupervised_layers[i];
            greedy_layer->size = layers[i+1]->size + unsupervised_layers[i]->size;
            greedy_layer->build();

            PP<RBMMixedConnection> greedy_connection = new RBMMixedConnection();
            greedy_connection->sub_connections.resize(2,1);
            greedy_connection->sub_connections(0,0) = connections[i];
            greedy_connection->sub_connections(1,0) = unsupervised_connections[i];
            greedy_connection->build();

            greedy_layers[i] = greedy_layer;
            greedy_connections[i] = greedy_connection;
        }
        else
        {
            if( reconstruction_connections.length() != 0 )
            {
                if( layers[i+1]->size != reconstruction_connections[i]->down_size )
                    PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                            "reconstruction_connections[%i] should have a down_size of %d.\n",
                            i, layers[i+1]->size);

                if( reconstruction_connections[i]->up_size != layers[i]->size )
                    PLERROR("StackedFocusedAutoassociatorsNet::build_layers_and_connections() - \n"
                            "reconstruction_connections[%i] should have an up_size of %d.\n",
                            i, layers[i]->size);
            }

            greedy_layers[i] = layers[i+1];
            greedy_connections[i] = connections[i];
        }

        if( !(layers[i]->random_gen) )
        {
            layers[i]->random_gen = random_gen;
            layers[i]->forget();
        }

        if( !(connections[i]->random_gen) )
        {
            connections[i]->random_gen = random_gen;
            connections[i]->forget();
        }

        if( reconstruction_connections.length() != 0
            && !(reconstruction_connections[i]->random_gen) )
        {
            reconstruction_connections[i]->random_gen = random_gen;
            reconstruction_connections[i]->forget();
        }

        activations[i].resize( layers[i]->size );
        expectations[i].resize( layers[i]->size );
        activation_gradients[i].resize( layers[i]->size );
        expectation_gradients[i].resize( layers[i]->size );
    }

    if( !(layers[n_layers-1]->random_gen) )
    {
        layers[n_layers-1]->random_gen = random_gen;
        layers[n_layers-1]->forget();
    }
    activations[n_layers-1].resize( layers[n_layers-1]->size );
    expectations[n_layers-1].resize( layers[n_layers-1]->size );
    activation_gradients[n_layers-1].resize( layers[n_layers-1]->size );
    expectation_gradients[n_layers-1].resize( layers[n_layers-1]->size );
}
void PLearn::StackedFocusedAutoassociatorsNet::build_output_layer_and_cost ( ) [private]
Definition at line 285 of file StackedFocusedAutoassociatorsNet.cc.
References PLearn::GradNNetLayerModule::build(), PLearn::SoftmaxModule::build(), PLearn::ModuleStackModule::build(), PLearn::ClassErrorCostModule::build(), PLearn::CombiningCostsModule::build(), PLearn::NLLCostModule::build(), PLearn::class_error(), PLearn::CombiningCostsModule::cost_weights, final_cost, final_module, PLearn::OnlineLearningModule::input_size, PLearn::GradNNetLayerModule::L1_penalty_factor, PLearn::GradNNetLayerModule::L2_penalty_factor, layers, PLearn::ModuleStackModule::modules, n_classes, n_layers, PLearn::OnlineLearningModule::output_size, output_weights_l1_penalty_factor, output_weights_l2_penalty_factor, PLearn::OnlineLearningModule::random_gen, PLearn::PLearner::random_gen, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), and PLearn::CombiningCostsModule::sub_costs.
Referenced by build_(), and forget().
{
    GradNNetLayerModule* gnl = new GradNNetLayerModule();
    gnl->input_size = layers[n_layers-1]->size;
    gnl->output_size = n_classes;
    gnl->L1_penalty_factor = output_weights_l1_penalty_factor;
    gnl->L2_penalty_factor = output_weights_l2_penalty_factor;
    gnl->random_gen = random_gen;
    gnl->build();

    SoftmaxModule* sm = new SoftmaxModule();
    sm->input_size = n_classes;
    sm->random_gen = random_gen;
    sm->build();

    ModuleStackModule* msm = new ModuleStackModule();
    msm->modules.resize(2);
    msm->modules[0] = gnl;
    msm->modules[1] = sm;
    msm->random_gen = random_gen;
    msm->build();
    final_module = msm;

    final_module->forget();

    NLLCostModule* nll = new NLLCostModule();
    nll->input_size = n_classes;
    nll->random_gen = random_gen;
    nll->build();

    ClassErrorCostModule* class_error = new ClassErrorCostModule();
    class_error->input_size = n_classes;
    class_error->random_gen = random_gen;
    class_error->build();

    CombiningCostsModule* comb_costs = new CombiningCostsModule();
    comb_costs->cost_weights.resize(2);
    comb_costs->cost_weights[0] = 1;
    comb_costs->cost_weights[1] = 0;
    comb_costs->sub_costs.resize(2);
    comb_costs->sub_costs[0] = nll;
    comb_costs->sub_costs[1] = class_error;
    comb_costs->build();

    final_cost = comb_costs;
    final_cost->forget();
}
string PLearn::StackedFocusedAutoassociatorsNet::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
Referenced by train().
void PLearn::StackedFocusedAutoassociatorsNet::computeCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]
Computes the costs from already computed output.
Implements PLearn::PLearner.
Definition at line 1189 of file StackedFocusedAutoassociatorsNet.cc.
References currently_trained_layer, expectations, PLearn::TVec< T >::fill(), getTestCostNames(), greedy_activation, greedy_connections, greedy_expectation, greedy_layers, layers, PLearn::TVec< T >::length(), MISSING_VALUE, n_layers, reconstruction_activations, reconstruction_connections, and PLearn::TVec< T >::resize().
{
    // Assumes that computeOutput has been called

    costs.resize( getTestCostNames().length() );
    costs.fill( MISSING_VALUE );

    if( currently_trained_layer < n_layers
        && reconstruction_connections.length() != 0 )
    {
        greedy_connections[currently_trained_layer-1]->fprop(
            expectations[currently_trained_layer-1],
            greedy_activation);

        greedy_layers[currently_trained_layer-1]->fprop(greedy_activation,
                                                        greedy_expectation);

        reconstruction_connections[ currently_trained_layer-1 ]->fprop(
            greedy_expectation,
            reconstruction_activations);
        layers[ currently_trained_layer-1 ]->fprop(
            reconstruction_activations,
            layers[ currently_trained_layer-1 ]->expectation);

        layers[ currently_trained_layer-1 ]->activation <<
            reconstruction_activations;
        layers[ currently_trained_layer-1 ]->setExpectationByRef(
            layers[ currently_trained_layer-1 ]->expectation);
        costs[ currently_trained_layer-1 ] =
            layers[ currently_trained_layer-1 ]->fpropNLL(
                expectations[currently_trained_layer-1]);
    }

    if( ((int)round(output[0])) == ((int)round(target[0])) )
        costs[n_layers-1] = 0;
    else
        costs[n_layers-1] = 1;
}
void PLearn::StackedFocusedAutoassociatorsNet::computeOutput ( const Vec & input, Vec & output ) const [virtual]
Computes the output from the input.
Reimplemented from PLearn::PLearner.
Definition at line 1162 of file StackedFocusedAutoassociatorsNet.cc.
References PLearn::argmax(), PLearn::TVec< T >::clear(), PLearn::computeNearestNeighbors(), computeRepresentation(), currently_trained_layer, do_not_use_knn_classifier, final_cost_input, final_module, i, input_representation, PLearn::TVec< T >::length(), PLearn::min(), n_layers, test_nearest_neighbors_indices, test_votes, train_set_representations_vmat, train_set_targets, and updateTrainSetRepresentations().
{
    if( do_not_use_knn_classifier && currently_trained_layer > n_layers-1 )
    {
        computeRepresentation(input, input_representation,
                              min(currently_trained_layer, n_layers-1));
        final_module->fprop( input_representation, final_cost_input );
        output[0] = argmax(final_cost_input);
    }
    else
    {
        updateTrainSetRepresentations();

        computeRepresentation(input, input_representation,
                              min(currently_trained_layer, n_layers-1));

        computeNearestNeighbors(train_set_representations_vmat,
                                input_representation,
                                test_nearest_neighbors_indices);

        test_votes.clear();
        for(int i=0; i<test_nearest_neighbors_indices.length(); i++)
            test_votes[train_set_targets[test_nearest_neighbors_indices[i]]]++;

        output[0] = argmax(test_votes);
    }
}
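As the code shows, the prediction is a single class index written into output[0]. A hedged call sketch, where net is assumed to be a trained instance and x one example's feature vector:

    // Hypothetical call sketch (variable names are assumptions).
    Vec x( net->inputsize() );    // ... fill with one example's features ...
    Vec y( net->outputsize() );   // outputsize() == n_classes
    net->computeOutput( x, y );
    int predicted_class = (int)round( y[0] );  // the class index lands in y[0]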
void PLearn::StackedFocusedAutoassociatorsNet::computeRepresentation ( const Vec & input, Vec & representation, int layer ) const
Definition at line 1140 of file StackedFocusedAutoassociatorsNet.cc.
References activations, connections, expectations, i, layers, PLearn::TVec< T >::length(), and PLearn::TVec< T >::resize().
Referenced by computeOutput(), fineTuningStep(), greedyStep(), and updateTrainSetRepresentations().
{
    if(layer == 0)
    {
        representation.resize(input.length());
        expectations[0] << input;
        representation << input;
        return;
    }

    expectations[0] << input;
    for( int i=0 ; i<layer ; i++ )
    {
        connections[i]->fprop( expectations[i], activations[i+1] );
        layers[i+1]->fprop(activations[i+1], expectations[i+1]);
    }
    representation.resize(expectations[layer].length());
    representation << expectations[layer];
}
void PLearn::StackedFocusedAutoassociatorsNet::declareOptions ( OptionList & ol ) [static, protected]
Declares the class options.
Reimplemented from PLearn::PLearner.
Definition at line 83 of file StackedFocusedAutoassociatorsNet.cc.
References PLearn::OptionBase::buildoption, cd_decrease_ct, cd_learning_rate, connections, PLearn::declareOption(), PLearn::PLearner::declareOptions(), dissimilar_example_cost_precision, do_not_use_knn_classifier, final_cost, final_module, fine_tuning_decrease_ct, fine_tuning_learning_rate, greedy_decrease_ct, greedy_learning_rate, greedy_stages, k_neighbors, layers, PLearn::OptionBase::learntoption, n_classes, n_layers, reconstruction_connections, supervised_greedy_decrease_ct, supervised_greedy_learning_rate, training_schedule, unsupervised_connections, and unsupervised_layers.
{
    declareOption(ol, "cd_learning_rate",
                  &StackedFocusedAutoassociatorsNet::cd_learning_rate,
                  OptionBase::buildoption,
                  "The learning rate used during the RBM "
                  "contrastive divergence training");

    declareOption(ol, "cd_decrease_ct",
                  &StackedFocusedAutoassociatorsNet::cd_decrease_ct,
                  OptionBase::buildoption,
                  "The decrease constant of the learning rate used during "
                  "the RBMs contrastive\n"
                  "divergence training. When a hidden layer has finished "
                  "its training,\n"
                  "the learning rate is reset to its initial value.\n");

    declareOption(ol, "greedy_learning_rate",
                  &StackedFocusedAutoassociatorsNet::greedy_learning_rate,
                  OptionBase::buildoption,
                  "The learning rate used during the autoassociator "
                  "gradient descent training");

    declareOption(ol, "greedy_decrease_ct",
                  &StackedFocusedAutoassociatorsNet::greedy_decrease_ct,
                  OptionBase::buildoption,
                  "The decrease constant of the learning rate used during "
                  "the autoassociator\n"
                  "gradient descent training. When a hidden layer has finished "
                  "its training,\n"
                  "the learning rate is reset to its initial value.\n");

    declareOption(ol, "supervised_greedy_learning_rate",
                  &StackedFocusedAutoassociatorsNet::supervised_greedy_learning_rate,
                  OptionBase::buildoption,
                  "Supervised, non-parametric, greedy learning rate");

    declareOption(ol, "supervised_greedy_decrease_ct",
                  &StackedFocusedAutoassociatorsNet::supervised_greedy_decrease_ct,
                  OptionBase::buildoption,
                  "Supervised, non-parametric, greedy decrease constant");

    declareOption(ol, "fine_tuning_learning_rate",
                  &StackedFocusedAutoassociatorsNet::fine_tuning_learning_rate,
                  OptionBase::buildoption,
                  "The learning rate used during the fine tuning gradient descent");

    declareOption(ol, "fine_tuning_decrease_ct",
                  &StackedFocusedAutoassociatorsNet::fine_tuning_decrease_ct,
                  OptionBase::buildoption,
                  "The decrease constant of the learning rate used during "
                  "fine tuning\n"
                  "gradient descent.\n");

    declareOption(ol, "training_schedule",
                  &StackedFocusedAutoassociatorsNet::training_schedule,
                  OptionBase::buildoption,
                  "Number of examples to use during each phase of greedy pre-training.\n"
                  "The number of fine-tuning steps is defined by nstages.\n");

    declareOption(ol, "layers",
                  &StackedFocusedAutoassociatorsNet::layers,
                  OptionBase::buildoption,
                  "The layers of units in the network. The first element\n"
                  "of this vector should be the input layer and the\n"
                  "subsequent elements should be the hidden layers. The\n"
                  "output layer should not be included in layers.\n");

    declareOption(ol, "connections",
                  &StackedFocusedAutoassociatorsNet::connections,
                  OptionBase::buildoption,
                  "The weights of the connections between the layers");

    declareOption(ol, "reconstruction_connections",
                  &StackedFocusedAutoassociatorsNet::reconstruction_connections,
                  OptionBase::buildoption,
                  "The reconstruction weights of the autoassociators");

    declareOption(ol, "unsupervised_layers",
                  &StackedFocusedAutoassociatorsNet::unsupervised_layers,
                  OptionBase::buildoption,
                  "Additional units for greedy unsupervised learning");

    declareOption(ol, "unsupervised_connections",
                  &StackedFocusedAutoassociatorsNet::unsupervised_connections,
                  OptionBase::buildoption,
                  "Additional connections for greedy unsupervised learning");

    declareOption(ol, "k_neighbors",
                  &StackedFocusedAutoassociatorsNet::k_neighbors,
                  OptionBase::buildoption,
                  "Number of good nearest neighbors to attract and bad nearest "
                  "neighbors to repel.");

    declareOption(ol, "n_classes",
                  &StackedFocusedAutoassociatorsNet::n_classes,
                  OptionBase::buildoption,
                  "Number of classes.");

    declareOption(ol, "dissimilar_example_cost_precision",
                  &StackedFocusedAutoassociatorsNet::dissimilar_example_cost_precision,
                  OptionBase::buildoption,
                  "Parameter that controls the importance of the dissimilar example cost.");

    declareOption(ol, "do_not_use_knn_classifier",
                  &StackedFocusedAutoassociatorsNet::do_not_use_knn_classifier,
                  OptionBase::buildoption,
                  "Use standard neural net architecture, not the nearest "
                  "neighbor model.");

    declareOption(ol, "greedy_stages",
                  &StackedFocusedAutoassociatorsNet::greedy_stages,
                  OptionBase::learntoption,
                  "Number of training samples seen in the different greedy "
                  "phases.\n");

    declareOption(ol, "n_layers",
                  &StackedFocusedAutoassociatorsNet::n_layers,
                  OptionBase::learntoption,
                  "Number of layers");

    declareOption(ol, "final_module",
                  &StackedFocusedAutoassociatorsNet::final_module,
                  OptionBase::learntoption,
                  "Output layer of neural net");

    declareOption(ol, "final_cost",
                  &StackedFocusedAutoassociatorsNet::final_cost,
                  OptionBase::learntoption,
                  "Cost on output layer of neural net");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}
static const PPath & PLearn::StackedFocusedAutoassociatorsNet::declaringFile ( ) [inline, static]
Reimplemented from PLearn::PLearner.
Definition at line 213 of file StackedFocusedAutoassociatorsNet.h.
StackedFocusedAutoassociatorsNet * PLearn::StackedFocusedAutoassociatorsNet::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::PLearner.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
void PLearn::StackedFocusedAutoassociatorsNet::fineTuningStep ( const Vec & input, const Vec & target, Vec & train_costs, Vec similar_example, Vec dissimilar_example )
Definition at line 1044 of file StackedFocusedAutoassociatorsNet.cc.
References activation_gradients, activations, computeRepresentation(), connections, dissimilar_example_cost_precision, dissimilar_example_representation, dissimilar_gradient_contribution, PLearn::dist(), do_not_use_knn_classifier, expectation_gradients, expectations, final_cost, final_cost_gradient, final_cost_input, final_cost_value, final_module, i, PLearn::TVec< T >::last(), layers, PLearn::TVec< T >::length(), n_layers, PLearn::powdistance(), previous_input_representation, PLearn::safeexp(), similar_example_representation, PLearn::sqrt(), PLearn::substract(), and train_set_representations_up_to_date.
Referenced by train().
{
    train_set_representations_up_to_date = false;

    if( !do_not_use_knn_classifier )
    {
        // Get similar example representation
        computeRepresentation(similar_example, similar_example_representation,
                              n_layers-1);

        // Get dissimilar example representation
        computeRepresentation(dissimilar_example,
                              dissimilar_example_representation,
                              n_layers-1);
    }

    // Get example representation
    computeRepresentation(input, previous_input_representation,
                          n_layers-1);

    // Compute supervised gradient
    if( !do_not_use_knn_classifier )
    {
        // Similar example contribution
        substract(previous_input_representation, similar_example_representation,
                  expectation_gradients[n_layers-1]);
        expectation_gradients[n_layers-1] *= 4/sqrt((real)layers[n_layers-1]->size);

        train_costs[train_costs.length()-3] =
            2 * sqrt(powdistance(previous_input_representation,
                                 similar_example_representation,
                                 2)) / sqrt((real)layers[n_layers-1]->size);

        // Dissimilar example contribution
        real dist = sqrt(powdistance(previous_input_representation,
                                     dissimilar_example_representation,
                                     2));

        train_costs[train_costs.length()-2] =
            2 * sqrt((real)layers[n_layers-1]->size)
            * safeexp( -dissimilar_example_cost_precision
                       * dist/sqrt((real)layers[n_layers-1]->size));

        train_costs.last() = train_costs[train_costs.length()-3]
            + train_costs[train_costs.length()-2];

        //if( dist == 0 )
        //    PLWARNING("StackedFocusedAutoassociatorsNet::fineTuningStep(): dissimilar"
        //              " example representation is exactly the same as the"
        //              " input example. Gradient would be infinite! Skipping this"
        //              " example...");
        //else
        //{
        substract(previous_input_representation,
                  dissimilar_example_representation,
                  dissimilar_gradient_contribution);
        dissimilar_gradient_contribution *= -2 * dissimilar_example_cost_precision
            * safeexp(-dissimilar_example_cost_precision
                      * dist/sqrt((real)layers[n_layers-1]->size));
        expectation_gradients[n_layers-1] += dissimilar_gradient_contribution;
        //}
    }
    else
    {
        final_module->fprop( previous_input_representation, final_cost_input );
        final_cost->fprop( final_cost_input, target, final_cost_value );

        final_cost->bpropUpdate( final_cost_input, target,
                                 final_cost_value[0],
                                 final_cost_gradient );
        final_module->bpropUpdate( previous_input_representation,
                                   final_cost_input,
                                   expectation_gradients[ n_layers-1 ],
                                   final_cost_gradient );
    }

    for( int i=n_layers-1 ; i>0 ; i-- )
    {
        layers[i]->bpropUpdate( activations[i],
                                expectations[i],
                                activation_gradients[i],
                                expectation_gradients[i] );

        connections[i-1]->bpropUpdate( expectations[i-1],
                                       activations[i],
                                       expectation_gradients[i-1],
                                       activation_gradients[i] );
    }
}
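Read directly off the code above, with h(x) the top-layer representation, d = layers[n_layers-1]->size and beta = dissimilar_example_cost_precision, the three supervised costs reported by this step are, in LaTeX form:

    \begin{aligned}
    \text{similarity\_cost}    &= \frac{2}{\sqrt{d}}\,\lVert h(x) - h(x_{\mathrm{sim}}) \rVert_2, \\
    \text{dissimilarity\_cost} &= 2\sqrt{d}\,\exp\!\Big(-\beta\,\frac{\lVert h(x) - h(x_{\mathrm{dis}}) \rVert_2}{\sqrt{d}}\Big), \\
    \text{metric\_cost}        &= \text{similarity\_cost} + \text{dissimilarity\_cost}.
    \end{aligned}

The gradient pushed into expectation_gradients[n_layers-1] matches this: an attractive term (4/sqrt(d)) (h(x) - h(x_sim)) and a repulsive term -2 beta exp(-beta ||h(x) - h(x_dis)||_2 / sqrt(d)) (h(x) - h(x_dis)).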
void PLearn::StackedFocusedAutoassociatorsNet::forget ( ) [virtual]
(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
Reimplemented from PLearn::PLearner.
Definition at line 593 of file StackedFocusedAutoassociatorsNet.cc.
References build_output_layer_and_cost(), PLearn::TVec< T >::clear(), connections, do_not_use_knn_classifier, PLearn::PLearner::forget(), greedy_stages, i, layers, PLearn::TVec< T >::length(), n_layers, reconstruction_connections, PLearn::PLearner::stage, train_set_representations_up_to_date, unsupervised_connections, and unsupervised_layers.
{
    inherited::forget();

    train_set_representations_up_to_date = false;

    for( int i=0 ; i<n_layers ; i++ )
        layers[i]->forget();

    for( int i=0 ; i<n_layers-1 ; i++ )
        connections[i]->forget();

    if(unsupervised_layers.length() != 0)
        for( int i=0 ; i<n_layers-1 ; i++ )
            unsupervised_layers[i]->forget();

    if(unsupervised_connections.length() != 0)
        for( int i=0 ; i<n_layers-1 ; i++ )
            unsupervised_connections[i]->forget();

    for( int i=0; i<reconstruction_connections.length(); i++)
        reconstruction_connections[i]->forget();

    if( do_not_use_knn_classifier )
        build_output_layer_and_cost();

    stage = 0;
    greedy_stages.clear();
}
OptionList & PLearn::StackedFocusedAutoassociatorsNet::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
OptionMap & PLearn::StackedFocusedAutoassociatorsNet::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
RemoteMethodMap & PLearn::StackedFocusedAutoassociatorsNet::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 57 of file StackedFocusedAutoassociatorsNet.cc.
TVec< string > PLearn::StackedFocusedAutoassociatorsNet::getTestCostNames ( ) const [virtual]
Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
Implements PLearn::PLearner.
Definition at line 1260 of file StackedFocusedAutoassociatorsNet.cc.
References PLearn::TVec< T >::append(), i, layers, PLearn::TVec< T >::push_back(), PLearn::TVec< T >::size(), and PLearn::tostring().
Referenced by computeCostsFromOutputs(), and getTrainCostNames().
{
    // Return the names of the costs computed by computeCostsFromOutputs
    // (these may or may not be exactly the same as what's returned by
    // getTrainCostNames).

    TVec<string> cost_names(0);

    for( int i=0; i<layers.size()-1; i++)
        cost_names.push_back("reconstruction_error_" + tostring(i+1));

    cost_names.append( "class_error" );

    return cost_names;
}
TVec< string > PLearn::StackedFocusedAutoassociatorsNet::getTrainCostNames ( ) const [virtual]
Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
Implements PLearn::PLearner.
Definition at line 1276 of file StackedFocusedAutoassociatorsNet.cc.
References getTestCostNames(), and PLearn::TVec< T >::push_back().
Referenced by train().
{
    TVec<string> cost_names = getTestCostNames();

    cost_names.push_back("similarity_cost");
    cost_names.push_back("dissimilarity_cost");
    cost_names.push_back("metric_cost");

    return cost_names;
}
void PLearn::StackedFocusedAutoassociatorsNet::greedyStep ( const Vec & input, const Vec & target, int index, Vec train_costs, int this_stage, Vec similar_example, Vec dissimilar_example )
Definition at line 852 of file StackedFocusedAutoassociatorsNet.cc.
References activation_gradients, cd_decrease_ct, cd_learning_rate, computeRepresentation(), connections, dissimilar_example_cost_precision, dissimilar_example_representation, dissimilar_gradient_contribution, PLearn::dist(), expectation_gradients, expectations, PLearn::fast_exact_is_equal(), greedy_activation, greedy_connections, greedy_decrease_ct, greedy_expectation, greedy_layers, greedy_learning_rate, input_representation, layers, n_layers, neg_down_val, neg_up_val, PLASSERT, pos_down_val, pos_up_val, PLearn::powdistance(), previous_input_representation, reconstruction_activation_gradients, reconstruction_activations, reconstruction_connections, reconstruction_expectation_gradients, PLearn::safeexp(), PLearn::sample(), similar_example_representation, PLearn::TVec< T >::size(), PLearn::sqrt(), PLearn::substract(), PLearn::TVec< T >::subVec(), supervised_greedy_decrease_ct, supervised_greedy_learning_rate, and train_set_representations_up_to_date.
Referenced by train().
{
    PLASSERT( index < n_layers );
    real lr;
    train_set_representations_up_to_date = false;

    if( !fast_exact_is_equal( supervised_greedy_learning_rate, 0 ) )
    {
        // Get similar example representation
        computeRepresentation(similar_example, similar_example_representation,
                              index+1);

        // Get dissimilar example representation
        computeRepresentation(dissimilar_example,
                              dissimilar_example_representation,
                              index+1);
    }

    // Get example representation
    computeRepresentation(input, previous_input_representation, index);
    greedy_connections[index]->fprop(previous_input_representation,
                                     greedy_activation);
    greedy_layers[index]->fprop(greedy_activation,
                                greedy_expectation);
    input_representation << greedy_expectation.subVec(0,layers[index+1]->size);

    // Autoassociator learning
    if( !fast_exact_is_equal( greedy_learning_rate, 0 ) )
    {
        if( !fast_exact_is_equal( greedy_decrease_ct , 0 ) )
            lr = greedy_learning_rate/(1 + greedy_decrease_ct * this_stage);
        else
            lr = greedy_learning_rate;

        layers[index]->setLearningRate( lr );
        greedy_connections[index]->setLearningRate( lr );
        reconstruction_connections[index]->setLearningRate( lr );
        greedy_layers[index]->setLearningRate( lr );

        reconstruction_connections[ index ]->fprop( greedy_expectation,
                                                    reconstruction_activations);
        layers[ index ]->fprop( reconstruction_activations,
                                layers[ index ]->expectation);

        layers[ index ]->activation << reconstruction_activations;
        layers[ index ]->setExpectationByRef(layers[ index ]->expectation);
        real rec_err = layers[ index ]->fpropNLL(previous_input_representation);
        train_costs[index] = rec_err;

        layers[ index ]->bpropNLL(previous_input_representation, rec_err,
                                  reconstruction_activation_gradients);
    }

    if( !fast_exact_is_equal( supervised_greedy_learning_rate, 0 ) )
    {
        // Compute supervised gradient

        // Similar example contribution
        substract(input_representation, similar_example_representation,
                  expectation_gradients[index+1]);
        expectation_gradients[index+1] *= 4/sqrt((real)layers[index+1]->size);

        // Dissimilar example contribution
        real dist = sqrt(powdistance(input_representation,
                                     dissimilar_example_representation,
                                     2));
        //if( dist == 0 )
        //    PLWARNING("StackedFocusedAutoassociatorsNet::fineTuningStep(): dissimilar"
        //              " example representation is exactly the same as the"
        //              " input example. Gradient would be infinite! Skipping this"
        //              " example...");
        //else
        //{
        substract(input_representation, dissimilar_example_representation,
                  dissimilar_gradient_contribution);
        dissimilar_gradient_contribution *= -2 * dissimilar_example_cost_precision
            * safeexp(-dissimilar_example_cost_precision
                      * dist/sqrt((real)layers[index+1]->size));
        expectation_gradients[index+1] += dissimilar_gradient_contribution;
        //}
    }

    // RBM learning
    if( !fast_exact_is_equal( cd_learning_rate, 0 ) )
    {
        greedy_layers[index]->setExpectation( greedy_expectation );
        greedy_layers[index]->generateSample();

        // accumulate positive stats using the expectation
        // we deep-copy because the value will change during negative phase
        pos_down_val = expectations[index];
        pos_up_val << greedy_layers[index]->expectation;

        // down propagation, starting from a sample of layers[index+1]
        greedy_connections[index]->setAsUpInput( greedy_layers[index]->sample );
        layers[index]->getAllActivations( greedy_connections[index] );
        layers[index]->computeExpectation();
        layers[index]->generateSample();

        // negative phase
        greedy_connections[index]->setAsDownInput( layers[index]->sample );
        greedy_layers[index]->getAllActivations( greedy_connections[index] );
        greedy_layers[index]->computeExpectation();

        // accumulate negative stats
        // no need to deep-copy because the values won't change before update
        neg_down_val = layers[index]->sample;
        neg_up_val = greedy_layers[index]->expectation;
    }

    // Update hidden layer bias and weights
    if( !fast_exact_is_equal( greedy_learning_rate, 0 ) )
    {
        layers[ index ]->update(reconstruction_activation_gradients);

        reconstruction_connections[ index ]->bpropUpdate(
            greedy_expectation,
            reconstruction_activations,
            reconstruction_expectation_gradients,
            reconstruction_activation_gradients);

        greedy_layers[ index ]->bpropUpdate(
            greedy_activation,
            greedy_expectation,
            reconstruction_activation_gradients,  // reused
            reconstruction_expectation_gradients);

        greedy_connections[ index ]->bpropUpdate(
            previous_input_representation,
            greedy_activation,
            reconstruction_expectation_gradients, // reused
            reconstruction_activation_gradients);
    }

    if( !fast_exact_is_equal( supervised_greedy_learning_rate, 0 ) )
    {
        if( !fast_exact_is_equal( supervised_greedy_decrease_ct , 0 ) )
            lr = supervised_greedy_learning_rate/
                (1 + supervised_greedy_decrease_ct * this_stage);
        else
            lr = supervised_greedy_learning_rate;

        layers[index]->setLearningRate( lr );
        connections[index]->setLearningRate( lr );
        layers[index+1]->setLearningRate( lr );

        layers[ index+1 ]->bpropUpdate(
            greedy_activation.subVec(0,layers[index+1]->size),
            greedy_expectation.subVec(0,layers[index+1]->size),
            activation_gradients[index+1],
            expectation_gradients[index+1]);

        connections[ index ]->bpropUpdate(
            previous_input_representation,
            greedy_activation.subVec(0,layers[index+1]->size),
            expectation_gradients[index],
            activation_gradients[index+1]);
    }

    // RBM updates
    if( !fast_exact_is_equal( cd_learning_rate, 0 ) )
    {
        if( !fast_exact_is_equal( cd_decrease_ct , 0 ) )
            lr = cd_learning_rate/(1 + cd_decrease_ct * this_stage);
        else
            lr = cd_learning_rate;

        layers[index]->setLearningRate( lr );
        greedy_connections[index]->setLearningRate( lr );
        greedy_layers[index]->setLearningRate( lr );

        layers[index]->update( pos_down_val, neg_down_val );
        greedy_connections[index]->update( pos_down_val, pos_up_val,
                                           neg_down_val, neg_up_val );
        greedy_layers[index]->update( pos_up_val, neg_up_val );
    }
}
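The RBM branch above collects the one-step contrastive divergence (CD-1) statistics and hands them to the update() methods of the RBM components. Writing v0 = pos_down_val, h0 = pos_up_val, v1 = neg_down_val, h1 = neg_up_val, and eta for the (possibly decreased) cd_learning_rate, the updates these calls perform have the usual CD-1 form, in LaTeX:

    \Delta W = \eta \left( h_0 v_0^\top - h_1 v_1^\top \right), \qquad
    \Delta b = \eta \,( v_0 - v_1 ), \qquad
    \Delta c = \eta \,( h_0 - h_1 ),

with eta = cd_learning_rate / (1 + cd_decrease_ct * this_stage) when cd_decrease_ct is nonzero, as set just before the updates.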
void PLearn::StackedFocusedAutoassociatorsNet::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::PLearner.
Definition at line 531 of file StackedFocusedAutoassociatorsNet.cc.
References activation_gradients, activations, class_datasets, connections, PLearn::deepCopyField(), dissimilar_example_representation, dissimilar_gradient_contribution, expectation_gradients, expectations, final_cost, final_cost_gradient, final_cost_input, final_cost_value, final_module, greedy_activation, greedy_activation_gradient, greedy_connections, greedy_expectation, greedy_expectation_gradient, greedy_layers, greedy_stages, input_representation, layers, PLearn::PLearner::makeDeepCopyFromShallowCopy(), nearest_neighbors_indices, neg_down_val, neg_up_val, other_classes_proportions, pos_down_val, pos_up_val, previous_input_representation, reconstruction_activation_gradients, reconstruction_activations, reconstruction_connections, reconstruction_expectation_gradients, similar_example_representation, test_nearest_neighbors_indices, test_votes, train_set_representations, train_set_representations_vmat, train_set_targets, training_schedule, unsupervised_connections, and unsupervised_layers.
{
    inherited::makeDeepCopyFromShallowCopy(copies);

    // deepCopyField(, copies);

    // Public options
    deepCopyField(training_schedule, copies);
    deepCopyField(layers, copies);
    deepCopyField(connections, copies);
    deepCopyField(reconstruction_connections, copies);
    deepCopyField(unsupervised_layers, copies);
    deepCopyField(unsupervised_connections, copies);

    // Protected options
    deepCopyField(activations, copies);
    deepCopyField(expectations, copies);
    deepCopyField(activation_gradients, copies);
    deepCopyField(expectation_gradients, copies);
    deepCopyField(greedy_activation, copies);
    deepCopyField(greedy_expectation, copies);
    deepCopyField(greedy_activation_gradient, copies);
    deepCopyField(greedy_expectation_gradient, copies);
    deepCopyField(reconstruction_activations, copies);
    deepCopyField(reconstruction_activation_gradients, copies);
    deepCopyField(reconstruction_expectation_gradients, copies);
    deepCopyField(greedy_layers, copies);
    deepCopyField(greedy_connections, copies);
    deepCopyField(similar_example_representation, copies);
    deepCopyField(dissimilar_example_representation, copies);
    deepCopyField(input_representation, copies);
    deepCopyField(previous_input_representation, copies);
    deepCopyField(dissimilar_gradient_contribution, copies);
    deepCopyField(pos_down_val, copies);
    deepCopyField(pos_up_val, copies);
    deepCopyField(neg_down_val, copies);
    deepCopyField(neg_up_val, copies);
    deepCopyField(final_cost_input, copies);
    deepCopyField(final_cost_value, copies);
    deepCopyField(final_cost_gradient, copies);
    deepCopyField(class_datasets, copies);
    deepCopyField(other_classes_proportions, copies);
    deepCopyField(nearest_neighbors_indices, copies);
    deepCopyField(test_nearest_neighbors_indices, copies);
    deepCopyField(test_votes, copies);
    deepCopyField(train_set_representations, copies);
    deepCopyField(train_set_representations_vmat, copies);
    deepCopyField(train_set_targets, copies);
    deepCopyField(greedy_stages, copies);
    deepCopyField(final_module, copies);
    deepCopyField(final_cost, copies);
}
int PLearn::StackedFocusedAutoassociatorsNet::outputsize ( ) const [virtual]
Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).
Implements PLearn::PLearner.
Definition at line 585 of file StackedFocusedAutoassociatorsNet.cc.
References n_classes.
{
    //if(currently_trained_layer < n_layers)
    //    return layers[currently_trained_layer]->size;
    //return layers[n_layers-1]->size;
    return n_classes;
}
void PLearn::StackedFocusedAutoassociatorsNet::setLearningRate ( real the_learning_rate ) [private]
Definition at line 1346 of file StackedFocusedAutoassociatorsNet.cc.
References connections, do_not_use_knn_classifier, final_cost, final_module, i, layers, and n_layers.
Referenced by train().
{
    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        layers[i]->setLearningRate( the_learning_rate );
        connections[i]->setLearningRate( the_learning_rate );
    }
    layers[n_layers-1]->setLearningRate( the_learning_rate );

    if( do_not_use_knn_classifier )
    {
        final_module->setLearningRate( the_learning_rate );
        final_cost->setLearningRate( the_learning_rate );
    }
}
void PLearn::StackedFocusedAutoassociatorsNet::setTrainingSet ( VMat training_set, bool call_forget = true ) [virtual]
Declares the training set.
Then calls build() and forget() if necessary. Also sets this learner's inputsize_, targetsize_ and weightsize_ from those of the training_set. Note: you shouldn't have to override this in subclasses, except maybe to forward the call to an underlying learner.
Reimplemented from PLearn::PLearner.
Definition at line 1285 of file StackedFocusedAutoassociatorsNet.cc.
References class_datasets, PLearn::computeNearestNeighbors(), do_not_use_knn_classifier, PLearn::fast_exact_is_equal(), PLearn::TMat< T >::fill(), i, PLearn::PLearner::inputsize(), j, k_neighbors, PLearn::VMat::length(), PLearn::TVec< T >::length(), n_classes, nearest_neighbors_indices, other_classes_proportions, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::PLearner::setTrainingSet(), PLearn::sum(), supervised_greedy_learning_rate, PLearn::PLearner::targetsize(), and train_set_representations_up_to_date.
{
    inherited::setTrainingSet(training_set, call_forget);

    train_set_representations_up_to_date = false;

    if( do_not_use_knn_classifier &&
        fast_exact_is_equal( supervised_greedy_learning_rate, 0 ) )
        return;

    Vec input( inputsize() );
    Vec target( targetsize() );
    real weight; // unused

    // Separate classes
    class_datasets.resize(n_classes);
    for(int k=0; k<n_classes; k++)
    {
        class_datasets[k] = new ClassSubsetVMatrix();
        class_datasets[k]->classes.resize(1);
        class_datasets[k]->classes[0] = k;
        class_datasets[k]->source = training_set;
        class_datasets[k]->build();
    }

    // Find other classes proportions
    other_classes_proportions.resize(n_classes, n_classes);
    other_classes_proportions.fill(0);
    for(int k=0; k<n_classes; k++)
    {
        real sum = 0;
        for(int j=0; j<n_classes; j++)
        {
            if(j==k) continue;
            other_classes_proportions(k,j) = class_datasets[j]->length();
            sum += class_datasets[j]->length();
        }
        other_classes_proportions(k) /= sum;
    }

    // Find training nearest neighbors
    input.resize(training_set->inputsize());
    target.resize(training_set->targetsize());
    nearest_neighbors_indices.resize(training_set->length(), k_neighbors);
    TVec<int> nearest_neighbors_indices_row;
    for(int k=0; k<n_classes; k++)
    {
        for(int i=0; i<class_datasets[k]->length(); i++)
        {
            class_datasets[k]->getExample(i, input, target, weight);
            nearest_neighbors_indices_row = nearest_neighbors_indices(
                class_datasets[k]->indices[i]);
            computeNearestNeighbors(
                new GetInputVMatrix((VMatrix *)class_datasets[k]), input,
                nearest_neighbors_indices_row,
                i);
        }
    }
}
void PLearn::StackedFocusedAutoassociatorsNet::train ( ) [virtual]
The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
Implements PLearn::PLearner.
Definition at line 633 of file StackedFocusedAutoassociatorsNet.cc.
References class_datasets, classname(), currently_trained_layer, PLearn::TVec< T >::data(), dissimilar_example_representation, dissimilar_gradient_contribution, do_not_use_knn_classifier, PLearn::endl(), PLearn::fast_exact_is_equal(), PLearn::TVec< T >::fill(), final_cost_gradient, final_cost_input, final_cost_value, fine_tuning_decrease_ct, fine_tuning_learning_rate, fineTuningStep(), PLearn::VMat::getExample(), getTrainCostNames(), greedy_activation, greedy_activation_gradient, greedy_expectation, greedy_expectation_gradient, greedy_layers, greedy_learning_rate, greedy_stages, greedyStep(), i, input_representation, PLearn::PLearner::inputsize(), k_neighbors, layers, PLearn::VMat::length(), PLearn::TVec< T >::length(), MISSING_VALUE, n_classes, n_layers, nearest_neighbors_indices, neg_down_val, neg_up_val, PLearn::PLearner::nstages, other_classes_proportions, PLERROR, pos_down_val, pos_up_val, PLearn::PLearner::random_gen, reconstruction_activation_gradients, reconstruction_activations, reconstruction_expectation_gradients, PLearn::PLearner::report_progress, PLearn::TVec< T >::resize(), PLearn::sample(), setLearningRate(), similar_example_representation, PLearn::PLearner::stage, PLearn::TVec< T >::subVec(), supervised_greedy_learning_rate, PLearn::PLearner::targetsize(), PLearn::tostring(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, training_schedule, and PLearn::PLearner::verbosity.
{ MODULE_LOG << "train() called " << endl; MODULE_LOG << " training_schedule = " << training_schedule << endl; Vec input( inputsize() ); Vec similar_example( inputsize() ); Vec dissimilar_example( inputsize() ); Vec target( targetsize() ); Vec target2( targetsize() ); real weight; // unused real weight2; // unused Vec similar_example_index(1); TVec<string> train_cost_names = getTrainCostNames() ; Vec train_costs( train_cost_names.length() ); train_costs.fill(MISSING_VALUE) ; int nsamples = train_set->length(); int sample; PP<ProgressBar> pb; // clear stats of previous epoch train_stats->forget(); int init_stage; /***** initial greedy training *****/ for( int i=0 ; i<n_layers-1 ; i++ ) { MODULE_LOG << "Training connection weights between layers " << i << " and " << i+1 << endl; int end_stage = training_schedule[i]; int* this_stage = greedy_stages.subVec(i,1).data(); init_stage = *this_stage; MODULE_LOG << " stage = " << *this_stage << endl; MODULE_LOG << " end_stage = " << end_stage << endl; MODULE_LOG << " greedy_learning_rate = " << greedy_learning_rate << endl; if( report_progress && *this_stage < end_stage ) pb = new ProgressBar( "Training layer "+tostring(i) +" of "+classname(), end_stage - init_stage ); train_costs.fill(MISSING_VALUE); reconstruction_activations.resize(layers[i]->size); reconstruction_activation_gradients.resize(layers[i]->size); reconstruction_expectation_gradients.resize(layers[i]->size); if( !fast_exact_is_equal( supervised_greedy_learning_rate, 0 ) ) { similar_example_representation.resize(layers[i+1]->size); dissimilar_example_representation.resize(layers[i+1]->size); dissimilar_gradient_contribution.resize(layers[i+1]->size); } input_representation.resize(layers[i+1]->size); greedy_activation.resize(greedy_layers[i]->size); greedy_expectation.resize(greedy_layers[i]->size); greedy_activation_gradient.resize(greedy_layers[i]->size); greedy_expectation_gradient.resize(greedy_layers[i]->size); pos_down_val.resize(layers[i]->size); pos_up_val.resize(greedy_layers[i]->size); neg_down_val.resize(layers[i]->size); neg_up_val.resize(greedy_layers[i]->size); for( ; *this_stage<end_stage ; (*this_stage)++ ) { sample = *this_stage % nsamples; train_set->getExample(sample, input, target, weight); if( !fast_exact_is_equal( supervised_greedy_learning_rate, 0 ) ) { // Find similar example int sim_index = random_gen->uniform_multinomial_sample(k_neighbors); class_datasets[(int)round(target[0])]->getExample( nearest_neighbors_indices(sample,sim_index), similar_example, target2, weight2); if(round(target[0]) != round(target2[0])) PLERROR("StackedFocusedAutoassociatorsNet::train(): similar" " example is not from same class!"); // Find dissimilar example int dissim_class_index = random_gen->multinomial_sample( other_classes_proportions((int)round(target[0]))); int dissim_index = random_gen->uniform_multinomial_sample( class_datasets[dissim_class_index]->length()); class_datasets[dissim_class_index]->getExample(dissim_index, dissimilar_example, target2, weight2); if(((int)round(target[0])) == ((int)round(target2[0]))) PLERROR("StackedFocusedAutoassociatorsNet::train(): dissimilar" " example is from same class!"); } greedyStep( input, target, i, train_costs, *this_stage, similar_example, dissimilar_example); train_stats->update( train_costs ); if( pb ) pb->update( *this_stage - init_stage + 1 ); } } /***** fine-tuning by gradient descent *****/ if( stage < nstages ) { MODULE_LOG << "Fine-tuning all parameters, by gradient descent" << endl; MODULE_LOG << " stage = " << stage << endl; 
MODULE_LOG << " nstages = " << nstages << endl; MODULE_LOG << " fine_tuning_learning_rate = " << fine_tuning_learning_rate << endl; init_stage = stage; if( report_progress && stage < nstages ) pb = new ProgressBar( "Fine-tuning parameters of all layers of " + classname(), nstages - init_stage ); setLearningRate( fine_tuning_learning_rate ); train_costs.fill(MISSING_VALUE); if( !do_not_use_knn_classifier ) { similar_example_representation.resize( layers[n_layers-1]->size); dissimilar_example_representation.resize( layers[n_layers-1]->size); dissimilar_gradient_contribution.resize( layers[n_layers-1]->size); similar_example.resize(inputsize()); dissimilar_example.resize(inputsize()); } final_cost_input.resize(n_classes); final_cost_value.resize(2); // Should be resized anyways final_cost_gradient.resize(n_classes); for( ; stage<nstages ; stage++ ) { sample = stage % nsamples; if( !fast_exact_is_equal( fine_tuning_decrease_ct, 0. ) ) setLearningRate( fine_tuning_learning_rate / (1. + fine_tuning_decrease_ct * stage ) ); train_set->getExample( sample, input, target, weight ); if( !do_not_use_knn_classifier ) { // Find similar example int sim_index = random_gen->uniform_multinomial_sample(k_neighbors); class_datasets[(int)round(target[0])]->getExample( nearest_neighbors_indices(sample,sim_index), similar_example, target2, weight2); if(((int)round(target[0])) != ((int)round(target2[0]))) PLERROR("StackedFocusedAutoassociatorsNet::train(): similar" " example is not from same class!"); // Find dissimilar example int dissim_class_index = random_gen->multinomial_sample( other_classes_proportions((int)round(target[0]))); int dissim_index = random_gen->uniform_multinomial_sample( class_datasets[dissim_class_index]->length()); class_datasets[dissim_class_index]->getExample(dissim_index, dissimilar_example, target2, weight2); if(((int)round(target[0])) == ((int)round(target2[0]))) PLERROR("StackedFocusedAutoassociatorsNet::train(): dissimilar" " example is from same class!"); } fineTuningStep( input, target, train_costs, similar_example, dissimilar_example); train_stats->update( train_costs ); if( pb ) pb->update( stage - init_stage + 1 ); } if(verbosity>2) { Vec train_stats_vec = train_stats->getMean(); cout << "similarity_cost = " << train_stats_vec[train_stats_vec.length()-3] << endl; cout << "dissimilarity_cost = " << train_stats_vec[train_stats_vec.length()-2] << endl; cout << "metric_cost = " << train_stats_vec[train_stats_vec.length()-1] << endl; } } train_stats->finalize(); MODULE_LOG << " train costs = " << train_stats->getMean() << endl; // Update currently_trained_layer if(stage > 0) currently_trained_layer = n_layers; else { currently_trained_layer = n_layers-1; while(currently_trained_layer>1 && greedy_stages[currently_trained_layer-1] <= 0) currently_trained_layer--; } }
void PLearn::StackedFocusedAutoassociatorsNet::updateTrainSetRepresentations() const [virtual]
Precomputes the representations of the training set examples, to speed up nearest neighbors searches in that space.
Definition at line 1233 of file StackedFocusedAutoassociatorsNet.cc.
References computeRepresentation(), currently_trained_layer, PLearn::VMat::getExample(), i, PLearn::PLearner::inputsize(), layers, PLearn::VMat::length(), PLearn::min(), n_layers, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::PLearner::targetsize(), PLearn::PLearner::train_set, train_set_representations, train_set_representations_up_to_date, train_set_representations_vmat, and train_set_targets.
Referenced by computeOutput().
{
    if(!train_set_representations_up_to_date)
    {
        // Precompute training set examples' representation
        int l = min(currently_trained_layer, n_layers-1);

        Vec input( inputsize() );
        Vec target( targetsize() );
        Vec train_set_representation;
        real weight;

        train_set_representations.resize(train_set->length(), layers[l]->size);
        train_set_targets.resize(train_set->length());

        for(int i=0; i<train_set->length(); i++)
        {
            train_set->getExample(i, input, target, weight);
            computeRepresentation(input, train_set_representation, l);
            train_set_representations(i) << train_set_representation;
            train_set_targets[i] = (int)round(target[0]);
        }
        train_set_representations_vmat = VMat(train_set_representations);

        train_set_representations_up_to_date = true;
    }
}
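With the representations and targets cached, computeOutput() (which references this method, see above) can classify by a nearest-neighbor vote in representation space. Below is a hedged, dependency-free sketch of that idea; the squared Euclidean distance and majority-vote tie-breaking are illustrative assumptions, not necessarily PLearn's exact procedure.

#include <algorithm>
#include <utility>
#include <vector>

// Returns the majority class among the k cached training representations
// closest (in squared Euclidean distance) to 'query'. Assumes k <= number
// of training examples.
int knnVote(const std::vector<std::vector<double>>& train_reps,
            const std::vector<int>& train_targets,
            const std::vector<double>& query,
            int k, int n_classes)
{
    std::vector<std::pair<double, int>> dist(train_reps.size());
    for (size_t i = 0; i < train_reps.size(); ++i) {
        double d = 0;
        for (size_t j = 0; j < query.size(); ++j) {
            double diff = train_reps[i][j] - query[j];
            d += diff * diff;
        }
        dist[i] = {d, train_targets[i]};
    }
    // Only the k smallest distances need to be ordered.
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());

    std::vector<int> votes(n_classes, 0);
    for (int i = 0; i < k; ++i)
        ++votes[dist[i].second];
    return (int)(std::max_element(votes.begin(), votes.end()) - votes.begin());
}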
TVec<Vec> PLearn::StackedFocusedAutoassociatorsNet::activation_gradients [mutable, protected]
Stores the gradient of the cost wrt the activations of the input and hidden layers (at the input of the layers)
Definition at line 236 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), fineTuningStep(), greedyStep(), and makeDeepCopyFromShallowCopy().
TVec<Vec> PLearn::StackedFocusedAutoassociatorsNet::activations [mutable, protected]
Stores the activations of the input and hidden layers (at the input of the layers)
Definition at line 227 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), computeRepresentation(), fineTuningStep(), and makeDeepCopyFromShallowCopy().
Contrastive divergence decrease constant.
Definition at line 78 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), and greedyStep().
Contrastive divergence learning rate.
Definition at line 75 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), and greedyStep().
Datasets for each class.
Definition at line 304 of file StackedFocusedAutoassociatorsNet.h.
Referenced by makeDeepCopyFromShallowCopy(), setTrainingSet(), and train().
The weights of the connections between the layers.
Definition at line 109 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), computeRepresentation(), declareOptions(), fineTuningStep(), forget(), greedyStep(), makeDeepCopyFromShallowCopy(), and setLearningRate().
Currently trained layer (1 means the first hidden layer, n_layers means the output layer)
Definition at line 332 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), computeCostsFromOutputs(), computeOutput(), train(), and updateTrainSetRepresentations().
Parameter that controls the importance of the dissimilar example cost.
Definition at line 128 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), fineTuningStep(), and greedyStep().
Dissimilar example representation.
Definition at line 276 of file StackedFocusedAutoassociatorsNet.h.
Referenced by fineTuningStep(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Dissimilar gradient contribution.
Definition at line 285 of file StackedFocusedAutoassociatorsNet.h.
Referenced by fineTuningStep(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Use standard neural net architecture, not the nearest neighbor model.
Definition at line 132 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), computeOutput(), declareOptions(), fineTuningStep(), forget(), setLearningRate(), setTrainingSet(), and train().
TVec<Vec> PLearn::StackedFocusedAutoassociatorsNet::expectation_gradients [mutable, protected]
Stores the gradient of the cost wrt the expectations of the input and hidden layers (at the output of the layers)
Definition at line 241 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), fineTuningStep(), greedyStep(), and makeDeepCopyFromShallowCopy().
TVec<Vec> PLearn::StackedFocusedAutoassociatorsNet::expectations [mutable, protected]
Stores the expectations of the input and hidden layers (at the output of the layers)
Definition at line 231 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), computeCostsFromOutputs(), computeRepresentation(), fineTuningStep(), greedyStep(), and makeDeepCopyFromShallowCopy().
Cost on output layer of neural net.
Definition at line 338 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), build_output_layer_and_cost(), declareOptions(), fineTuningStep(), makeDeepCopyFromShallowCopy(), and setLearningRate().
Vec PLearn::StackedFocusedAutoassociatorsNet::final_cost_gradient [mutable, protected]
Cost gradient on output layer.
Definition at line 301 of file StackedFocusedAutoassociatorsNet.h.
Referenced by fineTuningStep(), makeDeepCopyFromShallowCopy(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::final_cost_input [mutable, protected]
Input of cost function.
Definition at line 297 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeOutput(), fineTuningStep(), makeDeepCopyFromShallowCopy(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::final_cost_value [mutable, protected]
Cost value.
Definition at line 299 of file StackedFocusedAutoassociatorsNet.h.
Referenced by fineTuningStep(), makeDeepCopyFromShallowCopy(), and train().
Output layer of neural net.
Definition at line 335 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), build_output_layer_and_cost(), computeOutput(), declareOptions(), fineTuningStep(), makeDeepCopyFromShallowCopy(), and setLearningRate().
The decrease constant of the learning rate used during fine tuning gradient descent.
Definition at line 99 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), and train().
The learning rate used during the fine tuning gradient descent.
Definition at line 95 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::greedy_activation [mutable, protected]
Stores the activation of the trained hidden layer during a greedy step.
Definition at line 244 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeCostsFromOutputs(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::greedy_activation_gradient [mutable, protected]
Stores the activation gradient of the trained hidden layer during a greedy step.
Definition at line 251 of file StackedFocusedAutoassociatorsNet.h.
Referenced by makeDeepCopyFromShallowCopy(), and train().
Connections used for greedy learning.
Definition at line 270 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), computeCostsFromOutputs(), greedyStep(), and makeDeepCopyFromShallowCopy().
The decrease constant of the learning rate used during the autoassociator gradient descent training.
When a hidden layer has finished its training, the learning rate is reset to its initial value.
Definition at line 86 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), and greedyStep().
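The decay itself is not shown in this section, but by analogy with the fine-tuning schedule visible in train() above, a hyperbolic schedule of the following form is a plausible reading (a sketch, not the confirmed greedy-phase implementation); the per-layer reset means stage_in_layer restarts at 0 for each new hidden layer.

// Assumed schedule, mirroring fine_tuning_learning_rate / (1 + ct * stage).
double decayedRate(double initial_rate, double decrease_ct, int stage_in_layer)
{
    return initial_rate / (1.0 + decrease_ct * stage_in_layer);
}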
Vec PLearn::StackedFocusedAutoassociatorsNet::greedy_expectation [mutable, protected]
Stores the expectation of the trained hidden layer during a greedy step.
Definition at line 247 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeCostsFromOutputs(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::greedy_expectation_gradient [mutable, protected]
Stores the expectation gradient of the trained hidden layer during a greedy step.
Definition at line 255 of file StackedFocusedAutoassociatorsNet.h.
Referenced by makeDeepCopyFromShallowCopy(), and train().
TVec< PP<RBMLayer> > PLearn::StackedFocusedAutoassociatorsNet::greedy_layers [protected]
Layers used for greedy learning.
Definition at line 267 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), computeCostsFromOutputs(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
The learning rate used during the autoassociator gradient descent training.
Definition at line 81 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), declareOptions(), greedyStep(), and train().
Stages of the different greedy phases.
Definition at line 328 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::input_representation [mutable, protected]
Example representation.
Definition at line 279 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeOutput(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Number of good nearest neighbors to attract and bad nearest neighbors to repel.
Definition at line 122 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), declareOptions(), setTrainingSet(), and train().
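Together with the importance parameter documented above, these neighbors drive the attract/repel costs (the similarity_cost, dissimilarity_cost and metric_cost reported in train()'s verbose output). A hedged sketch of the idea follows; the squared-distance form and the single weight are illustrative assumptions, not the exact PLearn costs, and a real implementation would typically bound the repulsion term (e.g., with a hinge).

#include <vector>

// Lower is better: small when the representation sits near its same-class
// neighbor and far from the other-class example.
double metricCost(const std::vector<double>& rep,
                  const std::vector<double>& similar_rep,
                  const std::vector<double>& dissimilar_rep,
                  double dissimilar_weight)
{
    double sim_cost = 0, dissim_cost = 0;
    for (size_t j = 0; j < rep.size(); ++j) {
        double ds = rep[j] - similar_rep[j];
        double dd = rep[j] - dissimilar_rep[j];
        sim_cost    += ds * ds; // attract the good neighbor
        dissim_cost += dd * dd; // distance to the bad example is rewarded
    }
    return sim_cost - dissimilar_weight * dissim_cost;
}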
The layers of units in the network.
Definition at line 106 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), build_layers_and_connections(), build_output_layer_and_cost(), computeCostsFromOutputs(), computeRepresentation(), declareOptions(), fineTuningStep(), forget(), getTestCostNames(), greedyStep(), makeDeepCopyFromShallowCopy(), setLearningRate(), train(), and updateTrainSetRepresentations().
Number of classes.
Definition at line 125 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), build_output_layer_and_cost(), declareOptions(), outputsize(), setTrainingSet(), and train().
Number of layers.
Definition at line 143 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), build_layers_and_connections(), build_output_layer_and_cost(), computeCostsFromOutputs(), computeOutput(), declareOptions(), fineTuningStep(), forget(), greedyStep(), setLearningRate(), train(), and updateTrainSetRepresentations().
Nearest neighbors for each training example.
Definition at line 311 of file StackedFocusedAutoassociatorsNet.h.
Referenced by makeDeepCopyFromShallowCopy(), setTrainingSet(), and train().
Negative down statistic.
Definition at line 292 of file StackedFocusedAutoassociatorsNet.h.
Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Negative up statistic.
Definition at line 294 of file StackedFocusedAutoassociatorsNet.h.
Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Proportions of examples from the other classes (columns), for each class (rows)
Definition at line 308 of file StackedFocusedAutoassociatorsNet.h.
Referenced by makeDeepCopyFromShallowCopy(), setTrainingSet(), and train().
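A dependency-free sketch of how such a proportions row can drive the dissimilar-class draw seen in train(); std::discrete_distribution plays the role assumed here for random_gen->multinomial_sample().

#include <random>
#include <vector>

// proportions_row[c] is the fraction of training examples in class c, with
// the current example's own class set to 0 so it is never drawn.
int sampleDissimilarClass(const std::vector<double>& proportions_row,
                          std::mt19937& rng)
{
    std::discrete_distribution<int> dist(proportions_row.begin(),
                                         proportions_row.end());
    return dist(rng);
}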
Output weights l1_penalty_factor.
Definition at line 135 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_output_layer_and_cost().
Output weights l2_penalty_factor.
Definition at line 138 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_output_layer_and_cost().
Positive down statistic.
Definition at line 288 of file StackedFocusedAutoassociatorsNet.h.
Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Positive up statistic.
Definition at line 290 of file StackedFocusedAutoassociatorsNet.h.
Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Example representation at the previous layer, in a greedy step.
Definition at line 282 of file StackedFocusedAutoassociatorsNet.h.
Referenced by fineTuningStep(), greedyStep(), and makeDeepCopyFromShallowCopy().
Vec PLearn::StackedFocusedAutoassociatorsNet::reconstruction_activation_gradients [mutable, protected]
Reconstruction activation gradients.
Definition at line 261 of file StackedFocusedAutoassociatorsNet.h.
Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Vec PLearn::StackedFocusedAutoassociatorsNet::reconstruction_activations [mutable, protected]
Reconstruction activations.
Definition at line 258 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeCostsFromOutputs(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
The reconstruction weights of the autoassociators.
Definition at line 112 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), computeCostsFromOutputs(), declareOptions(), forget(), greedyStep(), and makeDeepCopyFromShallowCopy().
Vec PLearn::StackedFocusedAutoassociatorsNet::reconstruction_expectation_gradients [mutable, protected]
Reconstruction expectation gradients.
Definition at line 264 of file StackedFocusedAutoassociatorsNet.h.
Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Similar example representation.
Definition at line 273 of file StackedFocusedAutoassociatorsNet.h.
Referenced by fineTuningStep(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().
Supervised, non-parametric, greedy decrease constant.
Definition at line 92 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), and greedyStep().
Supervised, non-parametric, greedy learning rate.
Definition at line 89 of file StackedFocusedAutoassociatorsNet.h.
Referenced by declareOptions(), greedyStep(), setTrainingSet(), and train().
TVec<int> PLearn::StackedFocusedAutoassociatorsNet::test_nearest_neighbors_indices [mutable, protected]
Nearest neighbors for each test example.
Definition at line 314 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), computeOutput(), and makeDeepCopyFromShallowCopy().
TVec<int> PLearn::StackedFocusedAutoassociatorsNet::test_votes [protected]
Nearest neighbor votes for test example.
Definition at line 317 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), computeOutput(), and makeDeepCopyFromShallowCopy().
Mat PLearn::StackedFocusedAutoassociatorsNet::train_set_representations [mutable, protected]
Data set mapped to last hidden layer space.
Definition at line 320 of file StackedFocusedAutoassociatorsNet.h.
Referenced by makeDeepCopyFromShallowCopy(), and updateTrainSetRepresentations().
bool PLearn::StackedFocusedAutoassociatorsNet::train_set_representations_up_to_date [mutable, protected]
Indication that train_set_representations is up to date.
Definition at line 325 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), fineTuningStep(), forget(), greedyStep(), setTrainingSet(), and updateTrainSetRepresentations().
VMat PLearn::StackedFocusedAutoassociatorsNet::train_set_representations_vmat [mutable, protected]
Definition at line 321 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeOutput(), makeDeepCopyFromShallowCopy(), and updateTrainSetRepresentations().
TVec<int> PLearn::StackedFocusedAutoassociatorsNet::train_set_targets [mutable, protected]
Definition at line 322 of file StackedFocusedAutoassociatorsNet.h.
Referenced by computeOutput(), makeDeepCopyFromShallowCopy(), and updateTrainSetRepresentations().
Number of examples to use during each phase of greedy pre-training.
The number of fine-tuning steps is defined by nstages.
Definition at line 103 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_(), declareOptions(), makeDeepCopyFromShallowCopy(), and train().
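Illustrative arithmetic only (the values below are made up): with two hidden layers, training_schedule fixes the per-layer greedy budget while nstages independently fixes the fine-tuning budget, so the total number of parameter updates is their sum.

#include <cstdio>

int main()
{
    int training_schedule[] = {10000, 10000}; // greedy budget for hidden layers 1 and 2
    int nstages = 20000;                      // fine-tuning budget (stage runs up to nstages)
    std::printf("total updates: %d\n",
                training_schedule[0] + training_schedule[1] + nstages); // 40000
    return 0;
}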
Additional connections for greedy unsupervised learning.
Definition at line 118 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), declareOptions(), forget(), and makeDeepCopyFromShallowCopy().
Additional units for greedy unsupervised learning.
Definition at line 115 of file StackedFocusedAutoassociatorsNet.h.
Referenced by build_layers_and_connections(), declareOptions(), forget(), and makeDeepCopyFromShallowCopy().