PLearn 0.1
PLearn::SupervisedDBN Class Reference

Hinton's DBN, plus a supervised gradient from a logistic regression layer, but without a joint layer on top.

#include <SupervisedDBN.h>

Inheritance diagram for PLearn::SupervisedDBN: [diagram omitted]
Collaboration diagram for PLearn::SupervisedDBN: [diagram omitted]


Public Member Functions

 SupervisedDBN ()
 Default constructor.
virtual real density (const Vec &y) const
 Return probability density p(y | x)
virtual real log_density (const Vec &y) const
 Return log of probability density log(p(y | x)).
virtual real survival_fn (const Vec &y) const
 Return survival function: P(Y>y | x).
virtual real cdf (const Vec &y) const
 Return cdf: P(Y<y | x).
virtual void expectation (Vec &mu) const
 Return E[Y | x].
virtual void variance (Mat &cov) const
 Return Var[Y | x].
virtual void generate (Vec &y) const
 Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).
virtual bool setPredictorPredictedSizes (int the_predictor_size, int the_predicted_size, bool call_parent=true)
 Set the 'predictor' and 'predicted' sizes for this distribution.
virtual void setPredictor (const Vec &predictor, bool call_parent=true) const
 Set the value for the predictor part of a conditional probability.
virtual void forget ()
 (Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
virtual void train ()
 The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Compute a cost depending on the type of the first output: if it is the density or the log-density, NLL; if it is the expectation, NLL and class error.
virtual TVec< string > getTestCostNames () const
 Return [ "NLL" ] (the only cost computed by a PDistribution).
virtual TVec< string > getTrainCostNames () const
 Return the same cost names as getTestCostNames().
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual SupervisedDBN * deepCopy (CopiesMap &copies) const
virtual void build ()
 Simply calls inherited::build() then build_().
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
 REDEFINE test FOR PARALLELIZATION OF THE TEST.
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

bool regression
 If true, the task is regression, else it is classification.
real learning_rate
 The learning rate used during greedy learning.
Vec supervised_learning_rates
 The learning rates used for the supervised part during greedy learning.
real fine_tuning_learning_rate
 The learning rate used during the gradient descent.
real initial_momentum
 Initial momentum.
real final_momentum
 Final momentum.
int momentum_switch_time
 Number of samples to be seen by layer i before its momentum switches from initial_momentum to final_momentum.
real weight_decay
 The weight decay.
string initialization_method
 The method used to initialize the weights:
int n_layers
 Number of layers, including input layer and last layer, but not target layer.
TVec< PP< RBMLayer > > layers
 Layers that learn representations of the input, layers[0] is input layer, layers[n_layers-1] is last layer.
TVec< PP< RBMLLParameters > > params
 RBMParameters linking the unsupervised layers: params[i] links layers[i] and layers[i+1].
PP< RBMLLParameters > target_params
 Parameters linking target_layer and last_layer.
TVec< PP< OnlineLearningModule > > regressors
 Linear (if regression) or logistic (if !regression) regressors providing the supervised gradient for each RBMParameters.
int parallelization_minibatch_size
 Only used when USING_MPI for parallelization: this is the number of examples seen by one process during training after which the weight updates are shared among all the processes.
bool sum_parallel_contributions
 Only used when USING_MPI for parallelization: sum or average the delta-w contributions from different processes?
TVec< int > training_schedule
 Number of examples to use during each of the different greedy steps of the training phase.
string fine_tuning_method
 Method for fine-tuning the whole network after greedy learning.
TVec< int > use_sample_or_expectation
 Vector specifying which values (samples or expectations) to use during the contrastive divergence step:

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Member Functions

virtual void contrastiveDivergenceStep (const PP< RBMLayer > &down_layer, const PP< RBMParameters > &parameters, const PP< RBMLayer > &up_layer)
 Perform one step of contrastive divergence between down_layer and up_layer, updating parameters.
virtual real supervisedContrastiveDivergenceStep (const PP< RBMLayer > &down_layer, const PP< RBMParameters > &parameters, const PP< RBMLayer > &up_layer, const Vec &target, int index)
virtual real greedyStep (const Vec &predictor, int params_index)
virtual void fineTuneByGradientDescent (const Vec &input, Vec &train_costs)

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

TVec< Vec > activation_gradients
 Gradients of cost wrt the activations (output of params).
TVec< Vec > expectation_gradients
 Gradients of cost wrt the expectations (output of layers).
Vec supervised_input
Vec store_costs

Private Types

typedef PDistribution inherited

Private Member Functions

void build_ ()
 This does the actual building.
void build_layers ()
 Build the layers.
void build_params ()
 Build the parameters if needed.
void build_regressors ()
 Build the regressors if needed.

Detailed Description

Hinton's DBN, plus a supervised gradient from a logistic regression layer, but without a joint layer on top.

Todo:
Yes
Deprecated:
Use ../DeepBeliefNet.h instead

Definition at line 63 of file SupervisedDBN.h.
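
For orientation, here is a minimal usage sketch through the generic PLearner interface (hypothetical values; 'train_set' is an assumed, pre-built VMat whose inputsize() equals layers[0]->size + predicted_size, and the 'layers' option is assumed to have been filled in beforehand):

    // Illustrative sketch only; this class is deprecated in favor of
    // DeepBeliefNet.
    PP<SupervisedDBN> dbn = new SupervisedDBN();
    dbn->regression         = false;    // classification task
    dbn->learning_rate      = 0.01;     // greedy (CD) phase
    dbn->fine_tuning_method = "EGD";    // error gradient descent
    dbn->nstages            = 100000;   // total number of examples to see
    dbn->build();                       // inherited::build() then build_()
    dbn->setTrainingSet( train_set );
    dbn->train();                       // greedy phases, then fine-tuning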


Member Typedef Documentation

typedef PDistribution PLearn::SupervisedDBN::inherited [private]

Reimplemented from PLearn::PDistribution.

Definition at line 65 of file SupervisedDBN.h.


Constructor & Destructor Documentation

PLearn::SupervisedDBN::SupervisedDBN ( )

Member Function Documentation

string PLearn::SupervisedDBN::_classname_ ( ) [static]

REDEFINE test FOR PARALLELIZATION OF THE TEST.

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

OptionList & PLearn::SupervisedDBN::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

RemoteMethodMap & PLearn::SupervisedDBN::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

bool PLearn::SupervisedDBN::_isa_ ( const Object *  o) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

Object * PLearn::SupervisedDBN::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

void PLearn::SupervisedDBN::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

void PLearn::SupervisedDBN::build ( ) [virtual]

Simply calls inherited::build() then build_().

Reimplemented from PLearn::PDistribution.

Definition at line 265 of file SupervisedDBN.cc.

References PLearn::PDistribution::build(), and build_().

{
    // ### Nothing to add here, simply calls build_().
    inherited::build();
    build_();
}

void PLearn::SupervisedDBN::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PDistribution.

Definition at line 275 of file SupervisedDBN.cc.

References build_layers(), build_params(), build_regressors(), PLearn::endl(), fine_tuning_learning_rate, fine_tuning_method, initialization_method, layers, learning_rate, PLearn::TVec< T >::length(), PLearn::lowerstring(), n_layers, PLERROR, PLearn::PDistribution::predicted_size, regression, PLearn::TVec< T >::resize(), supervised_learning_rates, and training_schedule.

Referenced by build().

{
    MODULE_LOG << "build_() called" << endl;
    n_layers = layers.length();
    if( n_layers <= 1 )
        return;

    if( fine_tuning_learning_rate < 0. )
        fine_tuning_learning_rate = learning_rate;

    if( regression )
        predicted_size = 1;

    // check value of initialization_method
    string im = lowerstring( initialization_method );
    if( im == "" || im == "uniform_sqrt" )
        initialization_method = "uniform_sqrt";
    else if( im == "uniform_linear" )
        initialization_method = im;
    else if( im == "zero" )
        initialization_method = im;
    else
        PLERROR( "RBMParameters::build_ - initialization_method\n"
                 "\"%s\" unknown.\n", initialization_method.c_str() );
    MODULE_LOG << "  initialization_method = \"" << initialization_method
        << "\"" << endl;

    // check value of fine_tuning_method
    string ftm = lowerstring( fine_tuning_method );
    if( ftm == "" | ftm == "none" )
        fine_tuning_method = "";
    else if( ftm == "cd" | ftm == "contrastive_divergence" )
        fine_tuning_method = "CD";
    else if( ftm == "egd" | ftm == "error_gradient_descent" )
        fine_tuning_method = "EGD";
    else if( ftm == "ws" | ftm == "wake_sleep" )
        fine_tuning_method = "WS";
    else
        PLERROR( "SupervisedDBN::build_ - fine_tuning_method \"%s\"\n"
                 "is unknown.\n", fine_tuning_method.c_str() );
    MODULE_LOG << "  fine_tuning_method = \"" << fine_tuning_method << "\""
        <<  endl;
    //TODO: build structure to store gradients during gradient descent

    if( training_schedule.length() != n_layers-1 )
        training_schedule = TVec<int>( n_layers-1, 1000000 );

    // fills with 0's if too short
    supervised_learning_rates.resize( n_layers-1 );

    MODULE_LOG << "  training_schedule = " << training_schedule << endl;
    MODULE_LOG << "learning_rate = " << learning_rate << endl;
    MODULE_LOG << "fine_tuning_learning_rate = "
        << fine_tuning_learning_rate << endl;
    MODULE_LOG << "supervised_learning_rates = "
        << supervised_learning_rates << endl;
    MODULE_LOG << endl;

    build_layers();
    build_params();
    build_regressors();
}

void PLearn::SupervisedDBN::build_layers ( ) [private]

Build the layers.

Definition at line 338 of file SupervisedDBN.cc.

References PLearn::endl(), i, PLearn::PLearner::inputsize(), PLearn::PLearner::inputsize_, layers, n_layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, PLASSERT, PLearn::PDistribution::predicted_size, PLearn::PLearner::random_gen, and setPredictorPredictedSizes().

Referenced by build_().

{
    MODULE_LOG << "build_layers() called" << endl;
    if( inputsize_ >= 0 )
    {
        PLASSERT( layers[0]->size + predicted_size == inputsize() );
        setPredictorPredictedSizes( layers[0]->size,
                                    predicted_size, false );
        MODULE_LOG << "  n_predictor = " << n_predictor << endl;
        MODULE_LOG << "  n_predicted = " << n_predicted << endl;
    }

    for( int i=0 ; i<n_layers ; i++ )
        layers[i]->random_gen = random_gen;
/*
    target_layer->random_gen = random_gen;

    last_layer = layers[n_layers-1];

    // concatenate target_layer and layers[n_layers-2] into joint_layer,
    // if it is not already done
    if( !joint_layer
        || joint_layer->sub_layers.size() !=2
        || joint_layer->sub_layers[0] != target_layer
        || joint_layer->sub_layers[1] != layers[n_layers-2] )
    {
        TVec< PP<RBMLayer> > the_sub_layers( 2 );
        the_sub_layers[0] = target_layer;
        the_sub_layers[1] = layers[n_layers-2];
        joint_layer = new RBMMixedLayer( the_sub_layers );
    }
    joint_layer->random_gen = random_gen;
*/
}

void PLearn::SupervisedDBN::build_params ( ) [private]

Build the parameters if needed.

Definition at line 373 of file SupervisedDBN.cc.

References activation_gradients, PLearn::endl(), expectation_gradients, i, initialization_method, layers, PLearn::TVec< T >::length(), n_layers, params, PLERROR, PLearn::PLearner::random_gen, and PLearn::TVec< T >::resize().

Referenced by build_().

{
    MODULE_LOG << "build_params() called" << endl;
    if( params.length() == 0 )
    {
        params.resize( n_layers-1 );
        for( int i=0 ; i<n_layers-1 ; i++ )
            params[i] = new RBMLLParameters();
    }
    else if( params.length() != n_layers-1 )
        PLERROR( "SupervisedDBN::build_params - params.length() should\n"
                 "be equal to layers.length()-1 (%d != %d).\n",
                 params.length(), n_layers-1 );

    activation_gradients.resize( n_layers );
    expectation_gradients.resize( n_layers );
//    output_gradient.resize( n_predicted );

    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        //TODO: call changeOptions instead
        params[i]->down_units_types = layers[i]->units_types;
        params[i]->up_units_types = layers[i+1]->units_types;
        params[i]->initialization_method = initialization_method;
        params[i]->random_gen = random_gen;
        params[i]->build();

        activation_gradients[i].resize( params[i]->down_layer_size );
        expectation_gradients[i].resize( params[i]->down_layer_size );
    }

    activation_gradients[n_layers-1].resize(params[n_layers-2]->up_layer_size);
    expectation_gradients[n_layers-1].resize(params[n_layers-2]->up_layer_size);

/*
    if( target_layer && !target_params )
        target_params = new RBMLLParameters();

    //TODO: call changeOptions instead
    target_params->down_units_types = target_layer->units_types;
    target_params->up_units_types = last_layer->units_types;
    target_params->initialization_method = initialization_method;
    target_params->random_gen = random_gen;
    target_params->build();

    // build joint_params from params[n_layers-1] and target_params
    // if it is not already done
    if( !joint_params
        || joint_params->target_params != target_params
        || joint_params->cond_params != params[n_layers-2] )
    {
        joint_params = new RBMJointLLParameters( target_params,
                                                 params[n_layers-2] );
    }
    joint_params->random_gen = random_gen;
*/

    // share the biases
    for( int i=0 ; i<n_layers-2 ; i++ )
        params[i]->up_units_bias = params[i+1]->down_units_bias;
}

void PLearn::SupervisedDBN::build_regressors ( ) [private]

Build the regressors if needed.

Definition at line 435 of file SupervisedDBN.cc.

References PLearn::endl(), i, PLearn::TVec< T >::length(), n_layers, PLearn::PDistribution::n_predicted, params, PLearn::PDistribution::predicted_size, regression, regressors, PLearn::TVec< T >::resize(), and supervised_learning_rates.

Referenced by build_().

{
    MODULE_LOG << "build_regressors() called" << endl;
    if( regressors.length() != n_layers-1 )
        regressors.resize( n_layers-1 );

    for( int i=0 ; i<n_layers-1 ; i++ )
        if( !(regressors[i])
            || regressors[i]->input_size != params[i]->up_layer_size )
        {
            MODULE_LOG << "creating regressor " << i << "..." << endl;

            // A linear layer of the appropriate size, that will be trained by
            // stochastic gradient descent, initial weights are 0.
            PP<GradNNetLayerModule> p_gnnlm = new GradNNetLayerModule();
            p_gnnlm->input_size = params[i]->up_layer_size;
            p_gnnlm->output_size = n_predicted;
            p_gnnlm->start_learning_rate = supervised_learning_rates[i];
            MODULE_LOG << "start_learning_rate = "
                << p_gnnlm->start_learning_rate << endl;
            p_gnnlm->init_weights_random_scale = 0.;
            p_gnnlm->build();

            // The cost part
            PP<OnlineLearningModule> p_cost = NULL;

            if( regression ) // cost is MSE
            {
                MODULE_LOG << "... as a SquaredErrModule" << endl;
                p_cost = new SquaredErrModule();
            }
            else // cost is softmax+NLL
            {
                MODULE_LOG << "... as an NLLErrModule" << endl;
                p_cost = new NLLErrModule();
            }

            p_cost->input_size = n_predicted;
            if( regression )
                p_cost->output_size = 1;
            else
                p_cost->output_size = 2;
            p_cost->build();

            // Stack them, and...
            TVec< PP<OnlineLearningModule> > stack(2);
            stack[0] = (GradNNetLayerModule*) p_gnnlm;
            stack[1] = p_cost;

            // ... encapsulate them in another Module, that will compute
            // and backprop the NLL
            PP<StackedModulesModule> p_smm = new StackedModulesModule();
            p_smm->modules = stack;
            p_smm->last_layer_is_cost = true;
            p_smm->target_size = predicted_size;
            p_smm->build();

            regressors[i] = (StackedModulesModule*) p_smm;
        }
}

real PLearn::SupervisedDBN::cdf ( const Vec &  y) const [virtual]

Return cdf: P(Y<y | x).

Reimplemented from PLearn::PDistribution.

Definition at line 540 of file SupervisedDBN.cc.

References PLERROR.

{
    PLERROR("cdf not implemented for SupervisedDBN"); return 0;
}
string PLearn::SupervisedDBN::classname ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

Referenced by train().

void PLearn::SupervisedDBN::computeCostsFromOutputs ( const Vec &  input,
const Vec &  output,
const Vec &  target,
Vec &  costs 
) const [virtual]

Compute a cost depending on the type of the first output: if it is the density or the log-density, NLL; if it is the expectation, NLL and class error.

Reimplemented from PLearn::PDistribution.

Definition at line 1237 of file SupervisedDBN.cc.

References c, PLearn::PDistribution::computeCostsFromOutputs(), PLearn::PDistribution::outputs_def, regression, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), and store_costs.

{
    char c = outputs_def[0];
    if( (c == 'l' || c == 'd') && !regression )
        inherited::computeCostsFromOutputs(input, output, target, costs);
    else if( c == 'e' )
    {
        // assumes computeOutput has just been called
        // (yes, this is ugly)
        costs.resize( store_costs.size() );
        costs << store_costs;
    }
}

void PLearn::SupervisedDBN::contrastiveDivergenceStep ( const PP< RBMLayer > &  down_layer,
const PP< RBMParameters > &  parameters,
const PP< RBMLayer > &  up_layer 
) [protected, virtual]

Perform one step of contrastive divergence between down_layer and up_layer, updating parameters.

Definition at line 1036 of file SupervisedDBN.cc.

References use_sample_or_expectation.

Referenced by supervisedContrastiveDivergenceStep().

{
    // Re-initialize values in down_layer
    if( use_sample_or_expectation[0] == 0 )
        parameters->setAsDownInput( down_layer->expectation );
    else
    {
        down_layer->generateSample();
        parameters->setAsDownInput( down_layer->sample );
    }

    // positive phase
    up_layer->getAllActivations( parameters );
    up_layer->computeExpectation();
    up_layer->generateSample();

    // accumulate stats using the right vector (sample or expectation)
    if( use_sample_or_expectation[0] == 2 )
    {
        if( use_sample_or_expectation[1] == 2 )
            parameters->accumulatePosStats(down_layer->sample,
                                           up_layer->sample );
        else
            parameters->accumulatePosStats(down_layer->sample,
                                           up_layer->expectation );
    }
    else
    {
        if( use_sample_or_expectation[1] == 2 )
            parameters->accumulatePosStats(down_layer->expectation,
                                           up_layer->sample);
        else
            parameters->accumulatePosStats(down_layer->expectation,
                                           up_layer->expectation );
    }

    // down propagation
    if( use_sample_or_expectation[1] == 0 )
        parameters->setAsUpInput( up_layer->expectation );
    else
        parameters->setAsUpInput( up_layer->sample );

    down_layer->getAllActivations( parameters );
    down_layer->computeExpectation();
    down_layer->generateSample();

    if( use_sample_or_expectation[2] == 0 )
        parameters->setAsDownInput( down_layer->expectation );
    else
        parameters->setAsDownInput( down_layer->sample );

    up_layer->getAllActivations( parameters );
    up_layer->computeExpectation();

    // accumulate stats using the right vector (sample or expectation)
    if( use_sample_or_expectation[3] == 2 )
    {
        up_layer->generateSample();
        if( use_sample_or_expectation[2] == 2 )
            parameters->accumulateNegStats( down_layer->sample,
                                            up_layer->sample );
        else
            parameters->accumulateNegStats( down_layer->expectation,
                                            up_layer->sample );
    }
    else
    {
        if( use_sample_or_expectation[2] == 2 )
            parameters->accumulateNegStats( down_layer->sample,
                                            up_layer->expectation );
        else
            parameters->accumulateNegStats( down_layer->expectation,
                                            up_layer->expectation );
    }

    // update
    parameters->update();
}
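
What this routine computes is one step of CD-1: positive-phase statistics are accumulated from the data, one down-up Gibbs pass produces a reconstruction, negative-phase statistics are accumulated from it, and the parameters are updated. In the usual RBM notation this is the standard update (a sketch for orientation, not text from this file):

    delta_w(i,j) = epsilon * ( <v_i h_j>_data - <v_i h_j>_reconstruction )

where v is the down layer, h the up layer, epsilon the learning rate, and each average is accumulated through accumulatePosStats() / accumulateNegStats() using samples or expectations, as selected by use_sample_or_expectation.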

void PLearn::SupervisedDBN::declareOptions ( OptionList &  ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::PDistribution.

Definition at line 98 of file SupervisedDBN.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PDistribution::declareOptions(), final_momentum, fine_tuning_learning_rate, fine_tuning_method, initial_momentum, initialization_method, layers, learning_rate, PLearn::OptionBase::learntoption, momentum_switch_time, n_layers, parallelization_minibatch_size, params, regression, regressors, sum_parallel_contributions, supervised_learning_rates, training_schedule, use_sample_or_expectation, and weight_decay.

{
    declareOption(ol, "regression", &SupervisedDBN::regression,
                  OptionBase::buildoption,
                  "If true, the task is regression, else it is classification");

    declareOption(ol, "learning_rate", &SupervisedDBN::learning_rate,
                  OptionBase::buildoption,
                  "Learning rate used during greedy learning");

    declareOption(ol, "supervised_learning_rates",
                  &SupervisedDBN::supervised_learning_rates,
                  OptionBase::buildoption,
                  "The learning rates used for the supervised part during"
                  " greedy learning\n"
                  "(layer by layer).\n");

    declareOption(ol, "fine_tuning_learning_rate",
                  &SupervisedDBN::fine_tuning_learning_rate,
                  OptionBase::buildoption,
                  "Learning rate used during the gradient descent");

    declareOption(ol, "initial_momentum",
                  &SupervisedDBN::initial_momentum,
                  OptionBase::buildoption,
                  "Initial momentum factor (should be between 0 and 1)");

    declareOption(ol, "final_momentum",
                  &SupervisedDBN::final_momentum,
                  OptionBase::buildoption,
                  "Final momentum factor (should be between 0 and 1)");

    declareOption(ol, "momentum_switch_time",
                  &SupervisedDBN::momentum_switch_time,
                  OptionBase::buildoption,
                  "Number of samples to be seen by layer i before its momentum"
                  " switches\n"
                  "from initial_momentum to final_momentum.\n");

    declareOption(ol, "weight_decay", &SupervisedDBN::weight_decay,
                  OptionBase::buildoption,
                  "Weight decay");

    declareOption(ol, "initialization_method",
                  &SupervisedDBN::initialization_method,
                  OptionBase::buildoption,
                  "The method used to initialize the weights:\n"
                  "  - \"uniform_linear\" = a uniform law in [-1/d, 1/d]\n"
                  "  - \"uniform_sqrt\"   = a uniform law in [-1/sqrt(d),"
                  " 1/sqrt(d)]\n"
                  "  - \"zero\"           = all weights are set to 0,\n"
                  "where d = max( up_layer_size, down_layer_size ).\n");


    declareOption(ol, "training_schedule",
                  &SupervisedDBN::training_schedule,
                  OptionBase::buildoption,
                  "Total number of examples that should be seen until each"
                  " layer\n"
                  "have been greedily trained.\n"
                  "We should always have training_schedule[i] <"
                  " training_schedule[i+1].\n");

    declareOption(ol, "fine_tuning_method",
                  &SupervisedDBN::fine_tuning_method,
                  OptionBase::buildoption,
                  "Method for fine-tuning the whole network after greedy"
                  " learning.\n"
                  "One of:\n"
                  "  - \"none\"\n"
                  "  - \"CD\" or \"contrastive_divergence\"\n"
                  "  - \"EGD\" or \"error_gradient_descent\"\n"
                  "  - \"WS\" or \"wake_sleep\".\n");

    declareOption(ol, "layers", &SupervisedDBN::layers,
                  OptionBase::buildoption,
                  "Layers that learn representations of the input,"
                  " unsupervisedly.\n"
                  "layers[0] is input layer.\n");

/*
    declareOption(ol, "target_layer", &SupervisedDBN::target_layer,
                  OptionBase::buildoption,
                  "Target (or label) layer");
*/
    declareOption(ol, "params", &SupervisedDBN::params,
                  OptionBase::buildoption,
                  "RBMParameters linking the unsupervised layers.\n"
                  "params[i] links layers[i] and layers[i+1], except for"
                  "params[n_layers-1],\n"
                  "that links layers[n_layers-1] and last_layer.\n");
/*
    declareOption(ol, "target_params", &SupervisedDBN::target_params,
                  OptionBase::buildoption,
                  "Parameters linking target_layer and last_layer");
*/
/*
    declareOption(ol, "use_sample_rather_than_expectation_in_positive_phase_statistics",
                  &SupervisedDBN::use_sample_rather_than_expectation_in_positive_phase_statistics,
                  OptionBase::buildoption,
                  "In positive phase statistics use output->sample * input\n"
                  "rather than output->expectation * input.\n");
*/
    declareOption(ol, "use_sample_or_expectation",
                  &SupervisedDBN::use_sample_or_expectation,
                  OptionBase::buildoption,
                  "Vector providing information on which information to use"
                  " during the\n"
                  "contrastive divergence step:\n"
                  "  - 0 means that we use the expectation only,\n"
                  "  - 1 means that we sample (for the next step), but we use"
                  " the\n"
                  "    expectation in the CD update formula,\n"
                  "  - 2 means that we use the sample only.\n"
                  "The order of the arguments matches the steps of CD:\n"
                  "  - visible unit during positive phase (you should keep it"
                  " to 0),\n"
                  "  - hidden unit during positive phase,\n"
                  "  - visible unit during negative phase,\n"
                  "  - hidden unit during negative phase (you should keep it"
                  " to 0).\n");

    declareOption(ol, "parallelization_minibatch_size",
                  &SupervisedDBN::parallelization_minibatch_size,
                  OptionBase::buildoption,
                  "Only used when USING_MPI for parallelization.\n"
                  "This is the number of examples seen by one process\n"
                  "during training after which the weight updates are shared\n"
                  "among all the processes.\n");

    declareOption(ol, "sum_parallel_contributions",
                  &SupervisedDBN::sum_parallel_contributions,
                  OptionBase::buildoption,
                  "Only used when USING_MPI for parallelization.\n"
                  "sum or average the delta-w contributions from different processes?\n");

    declareOption(ol, "n_layers", &SupervisedDBN::n_layers,
                  OptionBase::learntoption,
                  "Number of unsupervised layers, including input layer");
/*
    declareOption(ol, "last_layer", &SupervisedDBN::last_layer,
                  OptionBase::learntoption,
                  "Last layer, learning joint representations of input and"
                  " target");

    declareOption(ol, "joint_layer", &SupervisedDBN::joint_layer,
                  OptionBase::nosave,
                  "Concatenation of target_layer and layers[n_layers-1]");

    declareOption(ol, "joint_params", &SupervisedDBN::joint_params,
                  OptionBase::nosave,
                  "Parameters linking joint_layer and last_layer");
*/
    declareOption(ol, "regressors", &SupervisedDBN::regressors,
                  OptionBase::learntoption,
                  "Linear (if regression) of logistic (if !regression)"
                  " regressors\n"
                  " that will provide the supervised gradient for each"
                  " RBMParameters\n");

    // Now call the parent class' declareOptions().
    inherited::declareOptions(ol);
}

static const PPath& PLearn::SupervisedDBN::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PDistribution.

Definition at line 297 of file SupervisedDBN.h.

SupervisedDBN * PLearn::SupervisedDBN::deepCopy ( CopiesMap &  copies) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

real PLearn::SupervisedDBN::density ( const Vec &  y) const [virtual]

Return probability density p(y | x)

Reimplemented from PLearn::PDistribution.

Definition at line 583 of file SupervisedDBN.cc.

References PLearn::argmax(), expectation(), i, PLearn::is_equal(), PLearn::PDistribution::n_predicted, PLASSERT, regression, PLearn::TVec< T >::size(), and PLearn::PDistribution::store_expect.

Referenced by log_density().

{
    PLASSERT( y.size() == n_predicted );

    if( regression ) // the probabilistic model does not work
        return 0;

    // TODO: 'y'[0] should rather be the integer "index" itself!
    int index = argmax( y );

    // If y != onehot( index ), then density is 0
    if( !is_equal( y[index], 1. ) )
        return 0;
    for( int i=0 ; i<n_predicted ; i++ )
        if( !is_equal( y[i], 0 ) && i != index )
            return 0;

    expectation( store_expect );
    return store_expect[index];
}
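
A brief usage sketch (hypothetical 4-class classification setting, with 'dbn' a built SupervisedDBN as in the sketch above and the predictor part already set through setPredictor()):

    Vec y( 4 );                   // filled with zeros
    y[2] = 1.0;                   // one-hot encoding of class 2
    real p = dbn->density( y );   // p( class == 2 | x ), i.e. store_expect[2]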

void PLearn::SupervisedDBN::expectation ( Vec &  mu) const [virtual]

Return E[Y | x].

Reimplemented from PLearn::PDistribution.

Definition at line 548 of file SupervisedDBN.cc.

References PLearn::TVec< T >::append(), i, layers, n_layers, params, PLearn::PDistribution::predicted_part, PLearn::PDistribution::predicted_size, PLearn::PDistribution::predictor_part, regressors, PLearn::TVec< T >::resize(), store_costs, and supervised_input.

Referenced by density(), fineTuneByGradientDescent(), and greedyStep().

{
    mu.resize( predicted_size );

    // Propagate input (predictor_part) until penultimate layer
    layers[0]->expectation << predictor_part;
    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }
/*
    // Set layers[n_layers-2]->expectation (penultimate) as conditionning input
    // of joint_params
    joint_params->setAsCondInput( layers[n_layers-2]->expectation );

    // Get all activations on target_layer from target_params
    target_layer->getAllActivations( (RBMLLParameters*) joint_params );
    target_layer->computeExpectation();
*/

    supervised_input.resize( layers[n_layers-1]->expectation.size() );
    supervised_input << layers[n_layers-1]->expectation;
    supervised_input.append( predicted_part ); // yes, it is ugly

    // Compute supervised cost and gradient
    regressors[n_layers-2]->fprop( supervised_input, store_costs );
    mu << ((StackedModulesModule*) (OnlineLearningModule*)
                regressors[n_layers-2])->values[1];
}

void PLearn::SupervisedDBN::fineTuneByGradientDescent ( const Vec &  input,
Vec &  train_costs 
) [protected, virtual]

Definition at line 1197 of file SupervisedDBN.cc.

References activation_gradients, PLearn::TVec< T >::append(), expectation(), expectation_gradients, i, layers, n_layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, params, regressors, PLearn::TVec< T >::resize(), PLearn::PDistribution::splitCond(), PLearn::TVec< T >::subVec(), and supervised_input.

Referenced by train().

{
    // split input in predictor_part and predicted_part
    splitCond(input);

    // fprop
    layers[0]->expectation << input.subVec(0, n_predictor);
    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    supervised_input.resize( layers[n_layers-1]->expectation.length() );
    supervised_input << layers[n_layers-1]->expectation;
    supervised_input.append( input.subVec( n_predictor, n_predicted ) );

    // Compute supervised cost and gradient
    regressors[n_layers-2]->fprop( supervised_input, train_costs );
    regressors[n_layers-2]->bpropUpdate( supervised_input, train_costs,
                                         expectation_gradients[n_layers-1],
                                         Vec() );

    // bprop and update
    for( int i=n_layers-1 ; i>0 ; i-- )
    {
        layers[i]->bpropUpdate( layers[i]->activations,
                                layers[i]->expectation,
                                activation_gradients[i],
                                expectation_gradients[i] );
        params[i-1]->bpropUpdate( layers[i-1]->expectation,
                                  layers[i]->activations,
                                  expectation_gradients[i-1],
                                  activation_gradients[i] );
    }
}

void PLearn::SupervisedDBN::forget ( ) [virtual]

(Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).

And sets 'stage' back to 0 (this is the stage of a fresh learner!).

A typical forget() method should do the following:

  • initialize a random number generator with the seed option
  • initialize the learner's parameters, using this random generator
  • stage = 0

Reimplemented from PLearn::PDistribution.

Definition at line 500 of file SupervisedDBN.cc.

References PLearn::endl(), i, layers, n_layers, params, regressors, PLearn::PDistribution::resetGenerator(), PLearn::PLearner::seed_, and PLearn::PLearner::stage.

Referenced by train().

{
    MODULE_LOG << "forget() called" << endl;
    resetGenerator(seed_);
    for( int i=0 ; i<n_layers-1 ; i++ )
        params[i]->forget();

    for( int i=0 ; i<n_layers ; i++ )
        layers[i]->reset();

    for( int i=0 ; i<n_layers-1 ; i++ )
        regressors[i]->forget();

#if USING_MPI
    global_params.resize(0);
#endif
/*
    target_params->forget();
    target_layer->reset();
*/
    stage = 0;
}

void PLearn::SupervisedDBN::generate ( Vec &  y) const [virtual]

Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).

Reimplemented from PLearn::PDistribution.

Definition at line 532 of file SupervisedDBN.cc.

References PLERROR.

{
    PLERROR("generate not implemented for SupervisedDBN");
}
OptionList & PLearn::SupervisedDBN::getOptionList ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

OptionMap & PLearn::SupervisedDBN::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

RemoteMethodMap & PLearn::SupervisedDBN::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 71 of file SupervisedDBN.cc.

TVec< string > PLearn::SupervisedDBN::getTestCostNames ( ) const [virtual]

Return [ "NLL" ] (the only cost computed by a PDistribution).

Reimplemented from PLearn::PDistribution.

Definition at line 1254 of file SupervisedDBN.cc.

References PLearn::TVec< T >::append(), c, PLearn::PDistribution::outputs_def, and regression.

Referenced by getTrainCostNames().

{
    char c = outputs_def[0];
    TVec<string> result;
    if( (c == 'l' || c == 'd') && !regression )
        result.append( "NLL" );
    else if( c == 'e' )
    {
        if( regression )
            result.append( "mse" );
        else
        {
            result.append( "NLL" );
            result.append( "class_error" );
        }
    }
    return result;
}

TVec< string > PLearn::SupervisedDBN::getTrainCostNames ( ) const [virtual]

Return the same cost names as getTestCostNames().

Reimplemented from PLearn::PDistribution.

Definition at line 1273 of file SupervisedDBN.cc.

References getTestCostNames().

{
    return getTestCostNames();
}

real PLearn::SupervisedDBN::greedyStep ( const Vec &  input,
int  index 
) [protected, virtual]

Definition at line 1118 of file SupervisedDBN.cc.

References expectation(), i, layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, params, PLearn::TVec< T >::subVec(), and supervisedContrastiveDivergenceStep().

Referenced by train().

{
    // deterministic propagation until we reach index
    layers[0]->expectation << input.subVec(0, n_predictor);
    for( int i=0 ; i<index ; i++ )
    {
        params[i]->setAsDownInput( layers[i]->expectation );
        layers[i+1]->getAllActivations( (RBMLLParameters*) params[i] );
        layers[i+1]->computeExpectation();
    }

    // perform one step of CD + partially supervised gradient
    real sup_cost = supervisedContrastiveDivergenceStep(
                        layers[index],
                        (RBMLLParameters*) params[index],
                        layers[index+1],
                        input.subVec(n_predictor,n_predicted),
                        index );
    return sup_cost;
}

real PLearn::SupervisedDBN::log_density ( const Vec &  y) const [virtual]

Return log of probability density log(p(y | x)).

Reimplemented from PLearn::PDistribution.

Definition at line 608 of file SupervisedDBN.cc.

References density(), and pl_log.

{
    return pl_log( density(y) );
}

void PLearn::SupervisedDBN::makeDeepCopyFromShallowCopy ( CopiesMap &  copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PDistribution.

Definition at line 632 of file SupervisedDBN.cc.

References PLearn::deepCopyField(), layers, PLearn::PDistribution::makeDeepCopyFromShallowCopy(), params, and training_schedule.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(layers, copies);
//    deepCopyField(last_layer, copies);
//    deepCopyField(target_layer, copies);
//    deepCopyField(joint_layer, copies);
    deepCopyField(params, copies);
//    deepCopyField(joint_params, copies);
//    deepCopyField(target_params, copies);
    deepCopyField(training_schedule, copies);
}

void PLearn::SupervisedDBN::setPredictor ( const Vec &  predictor,
bool  call_parent = true 
) const [virtual]

Set the value for the predictor part of a conditional probability.

Reimplemented from PLearn::PDistribution.

Definition at line 649 of file SupervisedDBN.cc.

References PLearn::PDistribution::setPredictor().

{
    if (call_parent)
        inherited::setPredictor(predictor, true);
    // ### Add here any specific code required by your subclass.
}

bool PLearn::SupervisedDBN::setPredictorPredictedSizes ( int  the_predictor_size,
int  the_predicted_size,
bool  call_parent = true 
) [virtual]

Set the 'predictor' and 'predicted' sizes for this distribution.

Reimplemented from PLearn::PDistribution.

Definition at line 660 of file SupervisedDBN.cc.

References layers, PLearn::PDistribution::n_predicted, PLearn::PDistribution::n_predictor, PLERROR, PLearn::PDistribution::predicted_size, PLearn::PDistribution::setPredictorPredictedSizes(), and PLearn::TVec< T >::size().

Referenced by build_layers().

{
    bool sizes_have_changed = false;
    if (call_parent)
        sizes_have_changed = inherited::setPredictorPredictedSizes(
            the_predictor_size, the_predicted_size, true);

    // ### Add here any specific code required by your subclass.
    if( ( the_predictor_size >= 0 && the_predictor_size != layers[0]->size ) ||
        ( the_predicted_size >= 0 && the_predicted_size != predicted_size ) )
        PLERROR( "SupervisedDBN::setPredictorPredictedSizes - \n"
                 "n_predictor should be equal to layer[0]->size (%d)\n"
                 "n_predicted should be equal to predicted_size (%d).\n",
                 layers[0]->size, predicted_size );

    n_predictor = layers[0]->size;
    n_predicted = predicted_size;

    // Returned value.
    return sizes_have_changed;
}

real PLearn::SupervisedDBN::supervisedContrastiveDivergenceStep ( const PP< RBMLayer > &  down_layer,
const PP< RBMParameters > &  parameters,
const PP< RBMLayer > &  up_layer,
const Vec &  target,
int  index 
) [protected, virtual]

Definition at line 982 of file SupervisedDBN.cc.

References activation_gradients, PLearn::TVec< T >::append(), contrastiveDivergenceStep(), expectation_gradients, learning_rate, MISSING_VALUE, regressors, PLearn::TVec< T >::resize(), supervised_input, and supervised_learning_rates.

Referenced by greedyStep().

{

    real supervised_cost = MISSING_VALUE;
    if( supervised_learning_rates[index] > 0 )
    {
        // (Deterministic) forward pass
        parameters->setAsDownInput( down_layer->expectation );
        up_layer->getAllActivations( parameters );
        up_layer->computeExpectation();

        supervised_input.resize( up_layer->expectation.size() );
        supervised_input << up_layer->expectation;
        supervised_input.append( target );

        // Compute supervised cost and gradient
        Vec sup_cost(1);
        regressors[index]->fprop( supervised_input, sup_cost );
        regressors[index]->bpropUpdate( supervised_input, sup_cost,
                                        expectation_gradients[index+1],
                                        Vec() );

        // propagate gradient to params
        up_layer->bpropUpdate( up_layer->activations,
                               up_layer->expectation,
                               activation_gradients[index+1],
                               expectation_gradients[index+1] );

        // put the right learning rate
        parameters->learning_rate = supervised_learning_rates[index];
        // updates the parameters
        parameters->bpropUpdate( down_layer->expectation,
                                 up_layer->activations,
                                 expectation_gradients[index],
                                 activation_gradients[index+1] );
        // put the learning rate back
        parameters->learning_rate = learning_rate;

        // return the cost
        supervised_cost = sup_cost[0];
    }

    // We have to do another forward pass because the weights have changed
    contrastiveDivergenceStep( down_layer, parameters, up_layer );

    // return supervised cost
    return supervised_cost;
}

real PLearn::SupervisedDBN::survival_fn ( const Vec &  y) const [virtual]

Return survival function: P(Y>y | x).

Reimplemented from PLearn::PDistribution.

Definition at line 616 of file SupervisedDBN.cc.

References PLERROR.

{
    PLERROR("survival_fn not implemented for SupervisedDBN"); return 0;
}
void PLearn::SupervisedDBN::train ( ) [virtual]

The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.

Reimplemented from PLearn::PDistribution.

Definition at line 688 of file SupervisedDBN.cc.

References classname(), PLearn::endl(), PLearn::PLearner::expdir, final_momentum, fine_tuning_learning_rate, fine_tuning_method, fineTuneByGradientDescent(), forget(), PLearn::VMat::getExample(), greedyStep(), i, initial_momentum, PLearn::PLearner::initTrain(), PLearn::PLearner::inputsize(), learning_rate, PLearn::TVec< T >::length(), PLearn::VMat::length(), PLearn::min(), momentum_switch_time, n_layers, PLearn::PLearner::nstages, PLearn::openFile(), parallelization_minibatch_size, params, PLERROR, PLearn::PLMPI::rank, PLearn::PStream::raw_ascii, regressors, PLearn::PLearner::report_progress, PLearn::TVec< T >::resize(), PLearn::sample(), PLearn::PLMPI::size, PLearn::PLearner::stage, sum_parallel_contributions, PLearn::PLearner::targetsize(), PLearn::tostring(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, training_schedule, and PLearn::ProgressBar::update().

{
    MODULE_LOG << "train() called" << endl;
    // The role of the train method is to bring the learner up to
    // stage==nstages, updating train_stats with training costs measured
    // on-line in the process.

    /* TYPICAL CODE:

    static Vec input;  // static so we don't reallocate memory each time...
    static Vec target; // (but be careful that static means shared!)
    input.resize(inputsize());    // the train_set's inputsize()
    target.resize(targetsize());  // the train_set's targetsize()
    real weight;

    // This generic PLearner method does a number of standard stuff useful for
    // (almost) any learner, and return 'false' if no training should take
    // place. See PLearner.h for more details.
    if (!initTrain())
        return;

    while(stage<nstages)
    {
        // clear statistics of previous epoch
        train_stats->forget();

        //... train for 1 stage, and update train_stats,
        // using train_set->getExample(input, target, weight)
        // and train_stats->update(train_costs)

        ++stage;
        train_stats->finalize(); // finalize statistics for this epoch
    }
    */

    Vec input( inputsize() );
    Vec target( targetsize() ); // unused
    real weight; // unused
    Vec train_costs(2);

    // hack for supervised cost
    real sum_sup_cost = 0;
    PStream sup_cost_file = openFile( expdir/"sup_cost.amat",
                                      PStream::raw_ascii, "a" );

    int nsamples = train_set->length();

#if USING_MPI
    // initialize global parameters for allowing to easily share them across
    // multiple CPUs

    // wait until we can attach a gdb process
    //pout << "START WAITING..." << endl;
    //sleep(20);
    //pout << "DONE WAITING!" << endl;
    MPI_Barrier(MPI_COMM_WORLD);
    int total_bsize=parallelization_minibatch_size*PLMPI::size;
//#endif
    forget(); // DEBUGGING TO GET REPRODUCIBLE RESULTS
    if (global_params.size()==0)
    {
        int n_params = joint_params->nParameters(1,1);
        for (int i=0;i<params.length()-1;i++)
            n_params += params[i]->nParameters(0,1);
        global_params.resize(n_params);
        previous_global_params.resize(n_params);
        Vec p=global_params;
        for (int i=0;i<params.length()-1;i++)
            p=params[i]->makeParametersPointHere(p,0,1);
        p=joint_params->makeParametersPointHere(p,1,1);
        if (p.length()!=0)
            PLERROR("HintonDeepBeliefNet: Inconsistencies between nParameters and makeParametersPointHere!");
    }
#endif

    MODULE_LOG << "  nsamples = " << nsamples << endl;
    MODULE_LOG << "  initial stage = " << stage << endl;
    MODULE_LOG << "  objective: nstages = " << nstages << endl;

    if( !initTrain() )
    {
        MODULE_LOG << "train() aborted" << endl;
        return;
    }

    ProgressBar* pb = 0;

    // clear stats of previous epoch
    train_stats->forget();

    /***** initial greedy training *****/
    for( int layer=0 ; layer < n_layers-1 ; layer++ )
    {
        MODULE_LOG << "Training parameters between layers " << layer
            << " and " << layer+1 << endl;

        int end_stage = min( training_schedule[layer], nstages );

        MODULE_LOG << "  stage = " << stage << endl;
        MODULE_LOG << "  end_stage = " << end_stage << endl;

        if( report_progress && stage < end_stage )
        {
            pb = new ProgressBar( "Training layer "+tostring(layer)
                                  +" of "+classname(),
                                  end_stage - stage );
        }

        params[layer]->learning_rate = learning_rate;

        int momentum_switch_stage = momentum_switch_time;
        if( layer > 0 )
            momentum_switch_stage += training_schedule[layer-1];

        if( stage <= momentum_switch_stage )
            params[layer]->momentum = initial_momentum;
        else
            params[layer]->momentum = final_momentum;

#if USING_MPI
        // make a copy of the parameters as they were at the beginning of
        // the minibatch
        if (sum_parallel_contributions)
            previous_global_params << global_params;
#endif
        int begin_sample = stage % nsamples;
        for( ; stage<end_stage ; stage++ )
        {
#if USING_MPI
            // only look at some of the examples, associated with this process
            // number (rank)
            if (stage%PLMPI::size==PLMPI::rank)
            {
#endif
//                resetGenerator(1); // DEBUGGING HACK TO MAKE SURE RESULTS ARE INDEPENDENT OF PARALLELIZATION
                int sample = stage % nsamples;
                if( sample == begin_sample )
                {
                    sup_cost_file << sum_sup_cost / nsamples << endl;
                    sum_sup_cost = 0;
                }

                train_set->getExample(sample, input, target, weight);
                sum_sup_cost += greedyStep( input, layer );

                if( stage == momentum_switch_stage )
                    params[layer]->momentum = final_momentum;

                if( pb )
                {
                    if( layer == 0 )
                        pb->update( stage + 1 );
                    else
                        pb->update( stage - training_schedule[layer-1] + 1 );
                }
#if USING_MPI
            }
            // time to share among processors
            if (stage%total_bsize==0 || stage==end_stage-1)
                shareParamsMPI();
#endif
        }
    }

#if 0
    /***** joint training *****/
    MODULE_LOG << "Training joint parameters, between target,"
        << " penultimate (" << n_layers-2 << ")," << endl
        << "and last (" << n_layers-1 << ") layers." << endl;

    int end_stage = min( training_schedule[n_layers-2], nstages );

    MODULE_LOG << "  stage = " << stage << endl;
    MODULE_LOG << "  end_stage = " << end_stage << endl;

    if( report_progress && stage < end_stage )
        pb = new ProgressBar( "Training joint layer (target and "
                             +tostring(n_layers-2)+") of "+classname(),
                             end_stage - stage );

    joint_params->learning_rate = learning_rate;
//    target_params->learning_rate = learning_rate;

    int previous_stage = (n_layers < 3) ? 0 : training_schedule[n_layers-3];
    int momentum_switch_stage = momentum_switch_time + previous_stage;
    if( stage <= momentum_switch_stage )
        joint_params->momentum = initial_momentum;
    else
        joint_params->momentum = final_momentum;

    int begin_sample = stage % nsamples;
    int last = min(training_schedule[n_layers-2],nstages);
    for( ; stage<last ; stage++ )
    {
#if USING_MPI
        // only look at some of the examples, associated with this process
        // number (rank)
        if (stage%PLMPI::size==PLMPI::rank)
        {
#endif
            int sample = stage % nsamples;
            if( sample == begin_sample )
            {
                sup_cost_file << sum_sup_cost / nsamples << endl;
                sum_sup_cost = 0;
            }

            train_set->getExample(sample, input, target, weight);
            sum_sup_cost += jointGreedyStep( input );

            if( stage == momentum_switch_stage )
                joint_params->momentum = final_momentum;

            if( pb )
                pb->update( stage - previous_stage + 1 );
#if USING_MPI
        }
        // time to share among processors
        if (stage%total_bsize==0 || stage==last-1)
            shareParamsMPI();
#endif
    }
#endif //0

    /***** fine-tuning *****/
    MODULE_LOG << "Fine-tuning all parameters, using method "
        << fine_tuning_method << endl;
    MODULE_LOG << "  fine_tuning_learning_rate = "
        << fine_tuning_learning_rate << endl;

    int init_stage = stage;
    if( report_progress && stage < nstages )
        pb = new ProgressBar( "Fine-tuning parameters of all layers of "
                             +classname(),
                             nstages - init_stage );

    for( int i=0 ; i<n_layers-1 ; i++ )
        params[i]->learning_rate = fine_tuning_learning_rate;

    ((GradNNetLayerModule*) (OnlineLearningModule*)
        ((StackedModulesModule*) (OnlineLearningModule*)
            regressors[n_layers-2])->modules[1])->start_learning_rate =
        fine_tuning_learning_rate;

//    joint_params->learning_rate = fine_tuning_learning_rate;
//    target_params->learning_rate = fine_tuning_learning_rate;

    if( fine_tuning_method == "" ) // do nothing
    {
        stage = nstages;
        if( pb )
            pb->update( nstages - init_stage + 1 );
    }
    else if( fine_tuning_method == "EGD" )
    {
        int begin_sample = stage % nsamples;
        for( ; stage<nstages ; stage++ )
        {
#if USING_MPI
            // only look at some of the examples, associated with
            // this process number (rank)
            if (stage%PLMPI::size==PLMPI::rank)
            {
#endif
                int sample = stage % nsamples;
                if( sample == begin_sample )
                    train_stats->forget();

                train_set->getExample(sample, input, target, weight);
                fineTuneByGradientDescent( input, train_costs );
                train_stats->update( train_costs );

                if( pb )
                    pb->update( stage - init_stage + 1 );
#if USING_MPI
            }
            // time to share among processors
            if (stage%total_bsize==0 || stage==nstages-1)
                shareParamsMPI();
#endif
        }
        train_stats->finalize(); // finalize statistics for this epoch
    }
    else
        PLERROR( "Fine-tuning methods other than \"EGD\" are not"
                 " implemented yet." );

    if( pb )
        delete pb;

    MODULE_LOG << "Training finished" << endl << endl;
}
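
Note that a 'stage' in this learner counts examples seen, not epochs (the code indexes the training set with stage % nsamples). As a rough, hypothetical sizing rule, to run the fine-tuning phase for k extra epochs after the greedy phases one could set:

    // Hypothetical sizing sketch: the greedy phases end at
    // training_schedule[n_layers-2]; one epoch = train_set->length() examples.
    const int k = 10;  // hypothetical number of fine-tuning epochs
    dbn->nstages = dbn->training_schedule[ dbn->n_layers - 2 ]
                   + k * train_set->length();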

void PLearn::SupervisedDBN::variance ( Mat &  cov) const [virtual]

Return Var[Y | x].

Reimplemented from PLearn::PDistribution.

Definition at line 624 of file SupervisedDBN.cc.

References PLERROR.

{
    PLERROR("variance not implemented for SupervisedDBN");
}

Member Data Documentation

StaticInitializer PLearn::SupervisedDBN::_static_initializer_ [static]

Reimplemented from PLearn::PDistribution.

Definition at line 297 of file SupervisedDBN.h.

TVec< Vec > PLearn::SupervisedDBN::activation_gradients [protected]

Gradients of cost wrt the activations (output of params).

Definition at line 313 of file SupervisedDBN.h.

Referenced by build_params(), fineTuneByGradientDescent(), and supervisedContrastiveDivergenceStep().

TVec< Vec > PLearn::SupervisedDBN::expectation_gradients [protected]

Gradients of cost wrt the expectations (output of layers).

Definition at line 316 of file SupervisedDBN.h.

Referenced by build_params(), fineTuneByGradientDescent(), and supervisedContrastiveDivergenceStep().

real PLearn::SupervisedDBN::final_momentum

Final momentum.

Definition at line 86 of file SupervisedDBN.h.

Referenced by declareOptions(), and train().

real PLearn::SupervisedDBN::fine_tuning_learning_rate

The learning rate used during the gradient descent.

Definition at line 80 of file SupervisedDBN.h.

Referenced by build_(), declareOptions(), and train().

string PLearn::SupervisedDBN::fine_tuning_method

Method for fine-tuning the whole network after greedy learning.

One of:

  • "none"
  • "CD" or "contrastive_divergence"
  • "EGD" or "error_gradient_descent"
  • "WS" or "wake_sleep"

Definition at line 154 of file SupervisedDBN.h.

Referenced by build_(), declareOptions(), and train().

real PLearn::SupervisedDBN::initial_momentum

Initial momentum.

Definition at line 83 of file SupervisedDBN.h.

Referenced by declareOptions(), and train().

string PLearn::SupervisedDBN::initialization_method

The method used to initialize the weights:

  • "uniform_linear" = a uniform law in [-1/d, 1/d]
  • "uniform_sqrt" = a uniform law in [-1/sqrt(d), 1/sqrt(d)]
  • "zero" = all weights are set to 0,

where d = max( up_layer_size, down_layer_size ).

Definition at line 100 of file SupervisedDBN.h.

Referenced by build_(), build_params(), and declareOptions().

TVec< PP< RBMLayer > > PLearn::SupervisedDBN::layers

Layers that learn representations of the input, layers[0] is input layer, layers[n_layers-1] is last layer.

Definition at line 108 of file SupervisedDBN.h.

Referenced by build_(), build_layers(), build_params(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), greedyStep(), makeDeepCopyFromShallowCopy(), and setPredictorPredictedSizes().

real PLearn::SupervisedDBN::learning_rate

The learning rate used during greedy learning.

Definition at line 74 of file SupervisedDBN.h.

Referenced by build_(), declareOptions(), supervisedContrastiveDivergenceStep(), and train().

int PLearn::SupervisedDBN::momentum_switch_time

Number of samples to be seen by layer i before its momentum switches from initial_momentum to final_momentum.

Definition at line 90 of file SupervisedDBN.h.

Referenced by declareOptions(), and train().

int PLearn::SupervisedDBN::n_layers

Number of layers, including input layer and last layer, but not target layer.

Definition at line 104 of file SupervisedDBN.h.

Referenced by build_(), build_layers(), build_params(), build_regressors(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), and train().

int PLearn::SupervisedDBN::parallelization_minibatch_size

Only used when USING_MPI for parallelization: this is the number of examples seen by one process during training after which the weight updates are shared among all the processes.

Definition at line 138 of file SupervisedDBN.h.

Referenced by declareOptions(), and train().

TVec< PP< RBMLLParameters > > PLearn::SupervisedDBN::params

RBMParameters linking the unsupervised layers: params[i] links layers[i] and layers[i+1].

Definition at line 121 of file SupervisedDBN.h.

Referenced by build_params(), build_regressors(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().

bool PLearn::SupervisedDBN::regression

If true, the task is regression, else it is classification.

Definition at line 71 of file SupervisedDBN.h.

Referenced by build_(), build_regressors(), computeCostsFromOutputs(), declareOptions(), density(), and getTestCostNames().

TVec< PP< OnlineLearningModule > > PLearn::SupervisedDBN::regressors

Linear (if regression) or logistic (if !regression) regressors providing the supervised gradient for each RBMParameters.

Definition at line 132 of file SupervisedDBN.h.

Referenced by build_regressors(), declareOptions(), expectation(), fineTuneByGradientDescent(), forget(), supervisedContrastiveDivergenceStep(), and train().

Vec PLearn::SupervisedDBN::store_costs [protected]

Definition at line 320 of file SupervisedDBN.h.

Referenced by computeCostsFromOutputs(), and expectation().

bool PLearn::SupervisedDBN::sum_parallel_contributions

Only used when USING_MPI for parallelization: sum or average the delta-w contributions from different processes?

Definition at line 142 of file SupervisedDBN.h.

Referenced by declareOptions(), and train().

Vec PLearn::SupervisedDBN::supervised_learning_rates

The learning rates used for the supervised part during greedy learning.

Definition at line 77 of file SupervisedDBN.h.

Referenced by build_(), build_regressors(), declareOptions(), and supervisedContrastiveDivergenceStep().

PP< RBMLLParameters > PLearn::SupervisedDBN::target_params

Parameters linking target_layer and last_layer.

Definition at line 124 of file SupervisedDBN.h.

TVec< int > PLearn::SupervisedDBN::training_schedule

Number of examples to use during each of the different greedy steps of the training phase.

Definition at line 146 of file SupervisedDBN.h.

Referenced by build_(), declareOptions(), makeDeepCopyFromShallowCopy(), and train().
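
For example (hypothetical values), with four layers and therefore three greedy phases, a cumulative schedule could look like:

    // Layer pair (0,1) is trained until stage 10000, (1,2) until 20000,
    // (2,3) until 30000; fine-tuning then runs until nstages.
    dbn->training_schedule = TVec<int>( 3 );
    dbn->training_schedule[0] = 10000;
    dbn->training_schedule[1] = 20000;
    dbn->training_schedule[2] = 30000;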

TVec< int > PLearn::SupervisedDBN::use_sample_or_expectation

Vector specifying which values (samples or expectations) to use during the contrastive divergence step:

  • 0 means that we use the expectation only,
  • 1 means that we sample (for the next step), but we use the expectation in the CD update formula,
  • 2 means that we use the sample only.

The order of the arguments matches the steps of CD:
  • visible unit during positive phase (you should keep it to 0),
  • hidden unit during positive phase,
  • visible unit during negative phase,
  • hidden unit during negative phase (you should keep it to 0).

Definition at line 169 of file SupervisedDBN.h.

Referenced by contrastiveDivergenceStep(), declareOptions(), and SupervisedDBN().
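
As an illustration (hypothetical values, not necessarily the constructor's default), a common CD setting keeps expectations everywhere except for sampling the hidden units that drive the Gibbs chain:

    // [0] visible, positive phase: expectation (0)
    // [1] hidden, positive phase: sample for the chain, but expectation
    //     in the update formula (1)
    // [2] visible, negative phase: expectation (0)
    // [3] hidden, negative phase: expectation (0)
    dbn->use_sample_or_expectation = TVec<int>( 4, 0 );
    dbn->use_sample_or_expectation[1] = 1;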

real PLearn::SupervisedDBN::weight_decay

The weight decay.

Definition at line 93 of file SupervisedDBN.h.

Referenced by declareOptions().


The documentation for this class was generated from the following files:

  • SupervisedDBN.h
  • SupervisedDBN.cc