PLearn::DeepFeatureExtractorNNet Class Reference

Deep Neural Network that extracts features in a greedy, mostly unsupervised way. More...

#include <DeepFeatureExtractorNNet.h>

Inheritance diagram for PLearn::DeepFeatureExtractorNNet: (diagram not shown)
Collaboration diagram for PLearn::DeepFeatureExtractorNNet: (diagram not shown)


Public Member Functions

 DeepFeatureExtractorNNet ()
 Default constructor.
virtual int outputsize () const
 Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).
virtual void forget ()
 (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void computeOutput (const Vec &input, Vec &output) const
 Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Computes the costs from already computed output.
virtual TVec< std::string > getTestCostNames () const
 Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
virtual TVec< std::string > getTrainCostNames () const
 Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
virtual void computeOutputAndCosts (const Vec &input, const Vec &target, Vec &output, Vec &costs) const
 Default calls computeOutput and computeCostsFromOutputs.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual DeepFeatureExtractorNNet * deepCopy (CopiesMap &copies) const
virtual void build ()
 Finish building the object; just call inherited::build followed by build_()
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

TVec< int > nhidden_schedule
 Number of hidden units in each hidden layer to add.
PP< Optimizer > optimizer
 Optimizer of the neural network.
PP< Optimizer > optimizer_supervised
 Optimizer of the supervised phase of the neural network.
int batch_size
 Batch size.
int batch_size_supervised
 Batch size used for supervised phase.
string hidden_transfer_func
 Transfer function for the hidden nodes.
string output_transfer_func
 Output transfer function, when all hidden layers are added.
int nhidden_schedule_position
 Index of the layer that will be trained at the next call of train.
TVec< string > cost_funcs
 Cost function for the supervised phase.
real weight_decay
 Weight decay for all weights.
real bias_decay
 Bias decay for all biases.
string penalty_type
 Penalty to use on the weights (for weight and bias decay)
real classification_regularizer
 Used only in the stable_cross_entropy cost function, to fight overfitting (0<=r<1)
real regularizer
 Used in the stable_cross_entropy cost function of the hidden activations, in the unsupervised stages (0<=r<1)
real margin
 Margin requirement, used only with the margin_perceptron_cost cost function.
string initialization_method
 The method used to initialize the weights.
Vec paramsvalues
 Values of all parameters.
int noutputs
 Number of outputs for the neural network.
bool use_same_input_and_output_weights
 Use the same weights for the input and output weights for the autoassociators.
bool always_reconstruct_input
 Always use the reconstruction cost of the input, not of the last layer.
bool use_activations_with_cubed_input
 Use the cubed value of the input of the activation functions (not used for reconstruction/auto-associator layers and output layer)
int use_n_first_as_supervised
 To simulate semi-supervised learning.
bool use_only_supervised_part
 Use only supervised part.
real relative_minimum_improvement
 Threshold on training set error relative improvement, before adding a new layer.
string input_reconstruction_error
 Input reconstruction error.
real autoassociator_regularisation_weight
 Weight of autoassociator regularisation terms in the fine-tuning phase.
real supervised_signal_weight
 Weight of supervised signal used in addition to unsupervised signal in greedy phase.
int k_nearest_neighbors_reconstruction
 Number of nearest neighbors to reconstruct in greedy phase.

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Member Functions

Var hiddenLayer (const Var &input, const Var &weights, string transfer_func, Var &before_transfer_function, bool use_cubed_value=false)
 Return a variable that is the hidden layer corresponding to given input and weights.
Var hiddenLayer (const Var &input, const Var &weights, const Var &bias, bool transpose_weights, string transfer_func, Var &before_transfer_function, bool use_cubed_value=false)
 Return a variable that is the hidden layer corresponding to given input and weights.
void buildOutputFromInput (const Var &the_input, Var &hidden_layer, Var &before_transfer_func)
 Build the output of the neural network, from the given input.
void buildTargetAndWeight ()
 Builds the target and sampleweight variables.
void buildCosts (const Var &output, const Var &target, const Var &unsupervised_target, const Var &before_transfer_func, const Var &output_sup)
 Build the costs variable from other variables.
void buildFuncs (const Var &the_input, const Var &the_output, const Var &the_target, const Var &the_sampleweight)
 Build the various functions used in the network.
void fillWeights (const Var &weights, bool fill_first_row, real fill_with_this=0)
 Fill a matrix of weights according to the 'initialization_method' specified.
virtual void buildPenalties ()
 Fill the costs penalties.

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

int nhidden_schedule_current_position
 Index of the hidden layer that was added last.
VarArray params
 Parameter variables.
VarArray params_to_train
 Parameter variables to train.
VarArray weights
 Weights.
VarArray reconstruction_weights
 Reconstruction weights.
VarArray biases
 Biases.
VarArray invars
 Input variables.
Var input
 Input variable.
Var output
 Output variable.
Var feature_vector
 Feature vector output.
Var hidden_representation
 Hidden representation variable.
Var neighbor_indices
 Neighbor indices.
Var target
 Target variable.
Var unsupervised_target
 Unsupervised target variable.
Var sampleweight
 Sample weight variable.
VarArray costs
 Costs variables.
VarArray penalties
 Penalties variables.
Var training_cost
 Training cost variable.
Var test_costs
 Test costs variable.
VMat sup_train_set
 Fake supervised data.
VMat unsup_train_set
 Unsupervised data when using nearest neighbors.
PP< AppendNeighborsVMatrix > knn_train_set
 Unsupervised data when using nearest neighbors.
Func f
 Function: input -> output.
Func test_costf
 Function: input & target -> output & test_costs.
Func output_and_target_to_cost
 Function: output & target -> cost.
Func to_feature_vector
 Function from input space to learned function space.
TVec< VarArray > autoassociator_params
 Parameters associated with the different training costs used for autoassociator regularisation.
VarArray autoassociator_training_costs
 Different training_costs used for autoassociator regularisation.

Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
 This does the actual building.

Detailed Description

Deep Neural Network that extracts features in a greedy, mostly unsupervised way.

TODO: - change comments about nhidden_schedule_position (can train only top weights)

Definition at line 61 of file DeepFeatureExtractorNNet.h.
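A minimal usage sketch (not part of the original documentation): it assumes the standard PLearner interface (setTrainingSet(), train(), the nstages option) and uses illustrative option values; 'trainset' and 'opt' are assumed to be constructed elsewhere. It shows the greedy layer-wise scheme driven by nhidden_schedule_position.

#include <DeepFeatureExtractorNNet.h>

using namespace PLearn;

void greedyTrain(VMat trainset, PP<Optimizer> opt)
{
    PP<DeepFeatureExtractorNNet> learner = new DeepFeatureExtractorNNet();

    // Two hidden layers of 200 units, then an output layer (values are examples only).
    learner->nhidden_schedule = TVec<int>(2, 200);
    learner->noutputs = 10;
    learner->output_transfer_func = "softmax";
    learner->cost_funcs = TVec<string>(1, string("NLL"));
    learner->optimizer = opt;
    learner->nstages = 10;             // assumed PLearner option: optimizer stages per phase

    learner->setTrainingSet(trainset);

    // Greedy phase: advance nhidden_schedule_position one layer at a time;
    // position == nhidden_schedule.length() adds the output layer and
    // position == nhidden_schedule.length()+1 fine-tunes the whole network.
    for (int pos = 0; pos <= learner->nhidden_schedule.length() + 1; pos++)
    {
        learner->nhidden_schedule_position = pos;
        learner->build();
        learner->train();
    }
}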


Member Typedef Documentation

typedef PLearner PLearn::DeepFeatureExtractorNNet::inherited [private]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file DeepFeatureExtractorNNet.h.


Constructor & Destructor Documentation

PLearn::DeepFeatureExtractorNNet::DeepFeatureExtractorNNet ( )

Member Function Documentation

string PLearn::DeepFeatureExtractorNNet::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

OptionList & PLearn::DeepFeatureExtractorNNet::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

RemoteMethodMap & PLearn::DeepFeatureExtractorNNet::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

bool PLearn::DeepFeatureExtractorNNet::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::PLearner.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

Object * PLearn::DeepFeatureExtractorNNet::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

void PLearn::DeepFeatureExtractorNNet::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

void PLearn::DeepFeatureExtractorNNet::build ( ) [virtual]

Finish building the object; just call inherited::build followed by build_()

Reimplemented from PLearn::PLearner.

Definition at line 655 of file DeepFeatureExtractorNNet.cc.

References PLearn::PLearner::build(), and build_().

Referenced by forget(), and train().


void PLearn::DeepFeatureExtractorNNet::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PLearner.

Definition at line 341 of file DeepFeatureExtractorNNet.cc.

References always_reconstruct_input, autoassociator_params, autoassociator_regularisation_weight, autoassociator_training_costs, b, biases, buildCosts(), buildFuncs(), buildTargetAndWeight(), feature_vector, fillWeights(), hidden_representation, hiddenLayer(), i, input, input_reconstruction_error, PLearn::PLearner::inputsize(), PLearn::PLearner::inputsize_, k_nearest_neighbors_reconstruction, PLearn::TVec< T >::last(), PLearn::TVec< T >::length(), PLearn::lowerstring(), PLearn::VarArray::makeSharedValue(), PLearn::VarArray::nelems(), nhidden_schedule, nhidden_schedule_current_position, nhidden_schedule_position, noutputs, optimizer, optimizer_supervised, output, output_transfer_func, params, params_to_train, paramsvalues, penalty_type, PLERROR, PLWARNING, PLearn::TVec< T >::push_back(), PLearn::PLearner::random_gen, reconstruction_weights, PLearn::TVec< T >::resize(), sampleweight, PLearn::PLearner::seed_, PLearn::Variable::setName(), PLearn::TVec< T >::size(), PLearn::PLearner::stage, PLearn::VMat::subMatRows(), sup_train_set, supervised_signal_weight, target, PLearn::PLearner::targetsize_, PLearn::tostring(), PLearn::PLearner::train_set, unsupervised_target, use_activations_with_cubed_input, use_n_first_as_supervised, use_same_input_and_output_weights, w, weights, and PLearn::PLearner::weightsize_.

Referenced by build().

{
    /*
     * Create Topology Var Graph
     */

    // nhidden_schedule_position's maximum value is nhidden_schedule.length()+1,
    // which means that the network is in its fine-tuning phase.
    if(nhidden_schedule_position > nhidden_schedule.length()+1)
        nhidden_schedule_position = nhidden_schedule.length()+1;

    // Don't do anything if we don't have a train_set
    // It's the only one that knows the inputsize and targetsize anyway...
    // Also, nothing is done if no layers need to be added
    if(inputsize_>=0 && targetsize_>=0 && weightsize_>=0 
       && nhidden_schedule_current_position < nhidden_schedule.length()+1 
       && nhidden_schedule_current_position < nhidden_schedule_position)
    {

        if(use_n_first_as_supervised > 0)
            sup_train_set = train_set.subMatRows(0,use_n_first_as_supervised);

        // Initialize the input.
        if(nhidden_schedule_current_position < 0)
        {
            input = Var(inputsize(), "input");
            output = input;
            weights.resize(0);
            reconstruction_weights.resize(0);
            params.resize(0);
            biases.resize(0);
            if(use_same_input_and_output_weights)
            {
                Var b = new SourceVariable(1,inputsize());
                b->setName("b0");
                b->value.clear();
                biases.push_back(b);
            }
            if (seed_ != 0) random_gen->manual_seed(seed_);
            if(autoassociator_regularisation_weight > 0) 
            {
                autoassociator_training_costs.resize(nhidden_schedule.length());
                autoassociator_params.resize(nhidden_schedule.length());
            }
        }

        feature_vector = hidden_representation;

        if(nhidden_schedule_current_position < nhidden_schedule_position)
        {
            // Update the network's topology
            if(nhidden_schedule_current_position < nhidden_schedule.length()
               && nhidden_schedule_current_position>=0)
                output = hidden_representation;

            Var before_transfer_function;
            params_to_train.resize(0);  // Will now train new set of weights

            // Will reconstruct input ...
            if(nhidden_schedule_current_position < 0 || always_reconstruct_input)
            {
                if(k_nearest_neighbors_reconstruction>=0)
                    unsupervised_target = 
                        Var((k_nearest_neighbors_reconstruction+1)*inputsize());
                else
                    unsupervised_target = input;
            }
            else // ... or will reconstruct last hidden layer
            {
                if(k_nearest_neighbors_reconstruction>=0)
                    unsupervised_target = 
                        Var((k_nearest_neighbors_reconstruction+1)
                            *nhidden_schedule[nhidden_schedule_current_position]);
                else
                    unsupervised_target = hidden_representation;
            }

            // Number of hidden layers added
            int n_added_layers = 0;

            if((nhidden_schedule_position < nhidden_schedule.length() 
                && supervised_signal_weight != 1) && 
               use_same_input_and_output_weights)
            {
                params_to_train.push_back(biases.last());
            }
        
            // Add new hidden layers until schedule position is reached
            // or all hidden layers have been added
            while(nhidden_schedule_current_position < nhidden_schedule_position 
                  && nhidden_schedule_current_position+1 < 
                  nhidden_schedule.length())
            {
                nhidden_schedule_current_position++;
                n_added_layers++;
                Var w;

                // Share layer and reconstruction weights ...
                if(use_same_input_and_output_weights)
                {
                    // Weights
                    Var w_weights = new SourceVariable(
                        output->size(),
                        nhidden_schedule[nhidden_schedule_current_position]);
                    w_weights->setName("w" + tostring(nhidden_schedule_current_position+1));
                    weights.push_back(w_weights);
                    fillWeights(w_weights,false);
                    params.push_back(w_weights);
                    params_to_train.push_back(w_weights);

                    // Bias
                    Var w_biases = new SourceVariable(
                        1,nhidden_schedule[nhidden_schedule_current_position]);
                    w_biases->setName("b" + tostring(nhidden_schedule_current_position+1));
                    biases.push_back(w_biases);
                    w_biases->value.clear();
                    params.push_back(w_biases);
                    params_to_train.push_back(w_biases);

                    //w = vconcat(w_biases & w_weights);
                    output = hiddenLayer(
                        output,w_weights,w_biases,false,"sigmoid",
                        before_transfer_function,use_activations_with_cubed_input);
                    //output = hiddenLayer(
                    //    output,w,"sigmoid",
                    //    before_transfer_function,use_activations_with_cubed_input);
                }
                else // ... or use a different set of weights.
                {
                    // Weights and bias
                    w = new SourceVariable(
                        output->size()+1,
                        nhidden_schedule[nhidden_schedule_current_position]);
                    w->setName("wb" + tostring(nhidden_schedule_current_position+1));
                    weights.push_back(w);
                    fillWeights(w,true,0);            
                    params.push_back(w);
                    params_to_train.push_back(w);
                    output = hiddenLayer(
                        output,w,"sigmoid",
                        before_transfer_function,use_activations_with_cubed_input);
                }

                hidden_representation = output;
            }

            // Add supervised layer, when all hidden layers have been trained
            // or when a supervised target is also used in the greedy phase.
        
            if(supervised_signal_weight < 0 || supervised_signal_weight > 1)
                PLERROR("In DeepFeatureExtractorNNet::build_(): "
                        "supervised_signal_weight should be in [0,1]");

            Var output_sup;
            if(nhidden_schedule_position < nhidden_schedule.length() 
               && supervised_signal_weight > 0)
                output_sup = output;

            if(nhidden_schedule_current_position < nhidden_schedule_position)
                nhidden_schedule_current_position++;

            if(output_sup || 
               nhidden_schedule_current_position == nhidden_schedule.length())
            {
                if(noutputs<=0) 
                    PLERROR("In DeepFeatureExtractorNNet::build_(): "
                            "building the output layer but noutputs<=0");

                Var w = new SourceVariable(output->size()+1,noutputs);
                w->setName("wbout");
                fillWeights(w,true,0);
            
                // If all hidden layers have been added, these weights
                // can be added to the network
                if(nhidden_schedule_current_position == nhidden_schedule.length())
                {
                    params.push_back(w);
                    weights.push_back(w);
                }

                params_to_train.push_back(w);
                if(output_sup)
                    output_sup = hiddenLayer(
                        output_sup,w,
                        output_transfer_func,before_transfer_function);
                else
                    output = hiddenLayer(output,w,
                                         output_transfer_func,
                                         before_transfer_function);            
            }

            if(nhidden_schedule_current_position < nhidden_schedule_position)
                nhidden_schedule_current_position++;            

            if(nhidden_schedule_current_position == nhidden_schedule.length()+1)
            {
                params_to_train.resize(0);
                // Fine-tune the whole network
                for(int i=0; i<params.length(); i++)
                    params_to_train.push_back(params[i]);
            }

            // Add reconstruction/auto-associator layer
            reconstruction_weights.resize(0);
            if(supervised_signal_weight != 1 
               && nhidden_schedule_current_position < nhidden_schedule.length())
            {
                int it = 0;
                // Add reconstruction/auto-associator layers until last layer
                // is reached, or until input reconstruction is reached
                // if always_reconstruct_input is true
                string rec_trans_func = "some_transfer_func";
                while((!always_reconstruct_input && n_added_layers > 0) 
                      || (always_reconstruct_input && it<weights.size()))
                {                    
                    n_added_layers--;
                    it++;                

                    if((always_reconstruct_input 
                        && nhidden_schedule_current_position-it == -1) 
                       || nhidden_schedule_current_position == 0)
                    {
                        if(input_reconstruction_error == "cross_entropy")
                            rec_trans_func = "sigmoid";
                        else if (input_reconstruction_error == "mse")
                            rec_trans_func = "linear";
                        else PLERROR("In DeepFeatureExtractorNNet::build_(): %s "
                                     "is not a valid reconstruction error", 
                                     input_reconstruction_error.c_str());
                    }
                    else
                        rec_trans_func = "sigmoid";

                    if(use_same_input_and_output_weights)
                    {
                        output =  hiddenLayer(
                            output,weights[weights.size()-it],
                            biases[biases.size()-it-1], 
                            true, rec_trans_func,
                            before_transfer_function,
                            use_activations_with_cubed_input);
                        //output =  hiddenLayer(
                        //    output, 
                        //    vconcat(biases[biases.size()-it-1]
                        //            & transpose(weights[weights.size()-it])),
                        //    rec_trans_func,
                        //    before_transfer_function,
                        //    use_activations_with_cubed_input);
                    }
                    else
                    {
                        Var rw;
                        if(nhidden_schedule_current_position-it == -1)
                            rw  = new SourceVariable(output->size()+1,inputsize());
                        else
                            rw  = new SourceVariable(
                                output->size()+1,
                                nhidden_schedule[
                                    nhidden_schedule_current_position-it]);
                        reconstruction_weights.push_back(rw);
                        rw->setName("rwb" + tostring(nhidden_schedule_current_position-it+1));
                        fillWeights(rw,true,0);
                        params_to_train.push_back(rw);
                        output =  hiddenLayer(
                            output,rw, rec_trans_func,
                            before_transfer_function,
                            use_activations_with_cubed_input);
                    }                
                }         
            }

            // Build target and weight variables.
            buildTargetAndWeight();

            // Build costs.
            string pt = lowerstring( penalty_type );
            if( pt == "l1" )
                penalty_type = "L1";
            //else if( pt == "l1_square" || pt == "l1 square" || pt == "l1square" )
            //    penalty_type = "L1_square";
            else if( pt == "l2_square" || pt == "l2 square" || pt == "l2square" )
                penalty_type = "L2_square";
            else if( pt == "l2" )
            {
                PLWARNING("L2 penalty not supported, assuming you want L2 square");
                penalty_type = "L2_square";
            }
            else
                PLERROR("penalty_type \"%s\" not supported", penalty_type.c_str());

            buildCosts(output, target, 
                       unsupervised_target, before_transfer_function, output_sup);
        
            // Build functions.
            buildFuncs(input, output, target, sampleweight);

        }
        
        if((bool)paramsvalues && (paramsvalues.size() == params.nelems()))
            params << paramsvalues;
        else
            paramsvalues.resize(params.nelems());
        params.makeSharedValue(paramsvalues);
        
        // Reinitialize the optimization phase
        if(optimizer)
            optimizer->reset();
        if(optimizer_supervised)
            optimizer_supervised->reset();
        stage = 0;
    }
}


void PLearn::DeepFeatureExtractorNNet::buildCosts ( const Var & output,
const Var & target,
const Var & unsupervised_target,
const Var & before_transfer_func,
const Var & output_sup 
) [protected]

Build the costs variable from other variables.

Definition at line 994 of file DeepFeatureExtractorNNet.cc.

References always_reconstruct_input, PLearn::binary_classification_loss(), buildPenalties(), c, PLearn::classification_loss(), classification_regularizer, cost_funcs, costs, PLearn::cross_entropy(), PLearn::hconcat(), i, input_reconstruction_error, k_nearest_neighbors_reconstruction, PLearn::TVec< T >::length(), PLearn::lift_output(), margin, PLearn::margin_perceptron_cost(), PLearn::multiclass_loss(), n, PLearn::neg_log_pi(), PLearn::newObject(), nhidden_schedule, nhidden_schedule_current_position, nhidden_schedule_position, PLearn::onehot_squared_loss(), output_transfer_func, penalties, PLASSERT, PLERROR, PLearn::TVec< T >::push_back(), regularizer, PLearn::TVec< T >::resize(), sampleweight, PLearn::TVec< T >::size(), PLearn::stable_cross_entropy(), PLearn::sum(), PLearn::sumsquare(), supervised_signal_weight, test_costs, training_cost, PLearn::vconcat(), and PLearn::PLearner::weightsize_.

Referenced by build_().

{
    costs.resize(0);

    // If in a mainly supervised phase ...
    if(nhidden_schedule_current_position >= nhidden_schedule.length())
    {

        // ... add supervised costs ...
        int ncosts = cost_funcs.size();  
        costs.resize(ncosts);
        
        for(int k=0; k<ncosts; k++)
        {
            // create costfuncs and apply individual weights if weightpart > 1
            if(cost_funcs[k]=="mse")
                costs[k]= sumsquare(the_output-the_target);
            else if(cost_funcs[k]=="mse_onehot")
                costs[k] = onehot_squared_loss(the_output, the_target);
            else if(cost_funcs[k]=="NLL") 
            {
                if (the_output->size() == 1) {
                    // Assume sigmoid output here!
                    costs[k] = cross_entropy(the_output, the_target);
                } else {
                    if (output_transfer_func == "log_softmax")
                        costs[k] = -the_output[the_target];
                    else
                        costs[k] = neg_log_pi(the_output, the_target);
                }
            } 
            else if(cost_funcs[k]=="class_error")
                costs[k] = classification_loss(the_output, the_target);
            else if(cost_funcs[k]=="binary_class_error")
                costs[k] = binary_classification_loss(the_output, the_target);
            else if(cost_funcs[k]=="multiclass_error")
                costs[k] = multiclass_loss(the_output, the_target);
            else if(cost_funcs[k]=="cross_entropy")
                costs[k] = cross_entropy(the_output, the_target);
            else if (cost_funcs[k]=="stable_cross_entropy") {
                Var c = stable_cross_entropy(before_transfer_func, the_target);
                costs[k] = c;
                PLASSERT( classification_regularizer >= 0 );
                if (classification_regularizer > 0) {
                    // There is a regularizer to add to the cost function.
                    dynamic_cast<NegCrossEntropySigmoidVariable*>((Variable*) c)->
                        setRegularizer(classification_regularizer);
                }
            }
            else if (cost_funcs[k]=="margin_perceptron_cost")
                costs[k] = margin_perceptron_cost(the_output,the_target,margin);
            else if (cost_funcs[k]=="lift_output")
                costs[k] = lift_output(the_output, the_target);
            else  // Assume we got a Variable name and its options
            {
                costs[k]= dynamic_cast<Variable*>(newObject(cost_funcs[k]));
                if(costs[k].isNull())
                    PLERROR("In NNet::build_()  unknown cost_func option: %s",
                            cost_funcs[k].c_str());
                costs[k]->setParents(the_output & the_target);
                costs[k]->build();
            }
        }

        // ... and unsupervised cost, which is useless here 
        //     (autoassociator regularisation is incorporated elsewhere, in train())
        Vec val(1);
        val[0] = REAL_MAX;
        costs.push_back(new SourceVariable(val));
    }
    else // If in a mainly unsupervised phase ...
    {
        // ... insert supervised cost if supervised_signal_weight > 0 ...
        if(output_sup)
        {            
            int ncosts = cost_funcs.size();  
            costs.resize(ncosts);
        
            for(int k=0; k<ncosts; k++)
            {
                // create costfuncs and apply individual weights if weightpart > 1
                if(cost_funcs[k]=="mse")
                    costs[k]= sumsquare(output_sup-the_target);
                else if(cost_funcs[k]=="mse_onehot")
                    costs[k] = onehot_squared_loss(output_sup, the_target);
                else if(cost_funcs[k]=="NLL") 
                {
                    if (output_sup->size() == 1) {
                        // Assume sigmoid output here!
                        costs[k] = cross_entropy(output_sup, the_target);
                    } else {
                        if (output_transfer_func == "log_softmax")
                            costs[k] = -output_sup[the_target];
                        else
                            costs[k] = neg_log_pi(output_sup, the_target);
                    }
                } 
                else if(cost_funcs[k]=="class_error")
                    costs[k] = classification_loss(output_sup, the_target);
                else if(cost_funcs[k]=="binary_class_error")
                    costs[k] = binary_classification_loss(output_sup, the_target);
                else if(cost_funcs[k]=="multiclass_error")
                    costs[k] = multiclass_loss(output_sup, the_target);
                else if(cost_funcs[k]=="cross_entropy")
                    costs[k] = cross_entropy(output_sup, the_target);
                else if (cost_funcs[k]=="stable_cross_entropy") {
                    Var c = stable_cross_entropy(before_transfer_func, the_target);
                    costs[k] = c;
                    PLASSERT( classification_regularizer >= 0 );
                    if (classification_regularizer > 0) {
                        // There is a regularizer to add to the cost function.
                        dynamic_cast<NegCrossEntropySigmoidVariable*>((Variable*) c)->
                            setRegularizer(classification_regularizer);
                    }
                }
                else if (cost_funcs[k]=="margin_perceptron_cost")
                    costs[k] = margin_perceptron_cost(output_sup,the_target,margin);
                else if (cost_funcs[k]=="lift_output")
                    costs[k] = lift_output(output_sup, the_target);
                else  // Assume we got a Variable name and its options
                {
                    costs[k]= dynamic_cast<Variable*>(newObject(cost_funcs[k]));
                    if(costs[k].isNull())
                        PLERROR("In NNet::build_()  unknown cost_func option: %s",cost_funcs[k].c_str());
                    costs[k]->setParents(output_sup & the_target);
                    costs[k]->build();
                }

                costs[k] = supervised_signal_weight*costs[k];
            }                    
        }
        else // ... otherwise insert useless maximum cost variables ...
        {
            int ncosts = cost_funcs.size();  
            costs.resize(ncosts);
            Vec val(1);
            val[0] = REAL_MAX;
            for(int i=0; i<costs.length(); i++)
                costs[i] = new SourceVariable(val);
        }
        Var c;

        // ... then insert appropriate unsupervised reconstruction cost ...
        if(supervised_signal_weight == 1) // ... unless only using supervised signal.
        {
            Vec val(1);
            val[0] = REAL_MAX;
            costs.push_back(new SourceVariable(val));
        }
        else
        {
            if(k_nearest_neighbors_reconstruction>=0)
            {
                
                VarArray copies(k_nearest_neighbors_reconstruction+1);
                for(int n=0; n<k_nearest_neighbors_reconstruction+1; n++)
                {
                    if(always_reconstruct_input || nhidden_schedule_position == 0)
                    {
                        if(input_reconstruction_error == "cross_entropy")
                            copies[n] = before_transfer_func;
                        else if (input_reconstruction_error == "mse")
                            copies[n] = the_output;
                    }
                    else
                        copies[n] = before_transfer_func;
                }
                
                Var reconstruct = vconcat(copies);
                
                if(always_reconstruct_input || nhidden_schedule_position == 0)
                {
                    if(input_reconstruction_error == "cross_entropy")
                        c = stable_cross_entropy(reconstruct, the_unsupervised_target);
                    else if (input_reconstruction_error == "mse")
                        c = sumsquare(reconstruct-the_unsupervised_target);
                    else PLERROR("In DeepFeatureExtractorNNet::buildCosts(): %s is not "
                                 "a valid reconstruction error", 
                                 input_reconstruction_error.c_str());
                }
                else
                    c = stable_cross_entropy(reconstruct, the_unsupervised_target);
                
            }
            else
            {
                if(always_reconstruct_input || nhidden_schedule_position == 0)
                {
                    if(input_reconstruction_error == "cross_entropy")
                        c = stable_cross_entropy(before_transfer_func, 
                                                 the_unsupervised_target);
                    else if (input_reconstruction_error == "mse")
                        c = sumsquare(the_output-the_unsupervised_target);
                    else PLERROR("In DeepFeatureExtractorNNet::buildCosts(): %s is not "
                                 "a valid reconstruction error", 
                                 input_reconstruction_error.c_str());
                }
                else
                    c = stable_cross_entropy(before_transfer_func, 
                                             the_unsupervised_target);
            }
        
            if(output_sup) c = (1-supervised_signal_weight) * c + costs[0];
            costs.push_back(c);
        }

        PLASSERT( regularizer >= 0 );
        if (regularizer > 0) {
            // There is a regularizer to add to the cost function.
            dynamic_cast<NegCrossEntropySigmoidVariable*>((Variable*) c)->
                setRegularizer(regularizer);
        }
    }

    // This is so that an EarlyStoppingOracle can be used to
    // do early stopping at each layer
    Vec pos(1);
    pos[0] = -nhidden_schedule_current_position;
    costs.push_back(new SourceVariable(pos));

    /*
     * weight and bias decay penalty
     */

    // create penalties
    buildPenalties();
    test_costs = hconcat(costs);

    // Apply penalty to cost.
    // If there is no penalty, we still add costs[0] as the first cost, in
    // order to keep the same number of costs as if there was a penalty.
    if(penalties.size() != 0) {
        // We only multiply by sampleweight if there are weights
        // and assign the appropriate training cost.
        if (weightsize_>0)
            if(nhidden_schedule_current_position < nhidden_schedule.length() 
               && supervised_signal_weight != 1)
                training_cost = hconcat(
                    sampleweight*sum(hconcat(costs[costs.length()-2] & penalties))
                    & (test_costs*sampleweight));
            else
                training_cost = hconcat(
                    sampleweight*sum(hconcat(costs[0] & penalties))
                    & (test_costs*sampleweight));
        else {
            if(nhidden_schedule_current_position < nhidden_schedule.length() 
               && supervised_signal_weight != 1)
                training_cost = hconcat(sum(hconcat(costs[costs.length()-2] 
                                                    & penalties)) & test_costs);
            else
                training_cost = hconcat(sum(hconcat(costs[0] & penalties)) 
                                        & test_costs);
        }
    }
    else {
        // We only multiply by sampleweight if there are weights
        // and assign the appropriate training cost.
        if(weightsize_>0) {
            if(nhidden_schedule_current_position < nhidden_schedule.length() 
               && supervised_signal_weight != 1)
                training_cost = hconcat(costs[costs.length()-2]*sampleweight 
                                        & test_costs*sampleweight);
            else
                training_cost = hconcat(costs[0]*sampleweight 
                                        & test_costs*sampleweight);
        } else {
            if(nhidden_schedule_current_position < nhidden_schedule.length() 
               && supervised_signal_weight != 1)                
                training_cost = hconcat(costs[costs.length()-2] & test_costs);
            else
                training_cost = hconcat(costs[0] & test_costs);
        }
    }

    training_cost->setName("training_cost");
    test_costs->setName("test_costs");
    the_output->setName("output");
}
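
Summarizing the assembly above (a reading of the code shown here, not text from the original documentation): writing w for the sample weight (dropped when weightsize_ is 0), P for the concatenated penalties, and c_main for the optimized cost (costs[0] in the supervised phase, the reconstruction cost costs[costs.length()-2] in the greedy phase), the two cost variables are

\[
\texttt{test\_costs} = \mathrm{hconcat}(\texttt{costs}), \qquad
\texttt{training\_cost} = \mathrm{hconcat}\Bigl(\, w\,\bigl(c_{\mathrm{main}} + \textstyle\sum P\bigr),\; w\cdot\texttt{test\_costs} \,\Bigr).
\]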


void PLearn::DeepFeatureExtractorNNet::buildFuncs ( const Var & the_input,
const Var & the_output,
const Var & the_target,
const Var & the_sampleweight 
) [protected]

Build the various functions used in the network.

Definition at line 1405 of file DeepFeatureExtractorNNet.cc.

References autoassociator_params, autoassociator_regularisation_weight, autoassociator_training_costs, f, feature_vector, i, input, invars, k_nearest_neighbors_reconstruction, PLearn::TVec< T >::length(), neighbor_indices, nhidden_schedule, nhidden_schedule_current_position, output_and_target_to_cost, params_to_train, PLearn::TVec< T >::push_back(), PLearn::TVec< T >::resize(), test_costf, test_costs, to_feature_vector, training_cost, and unsupervised_target.

Referenced by build_().

                                                                       {
    invars.resize(0);
    VarArray outvars;
    VarArray testinvars;
    if (the_input)
    {
        invars.push_back(the_input);
        testinvars.push_back(the_input);
    }
    if(k_nearest_neighbors_reconstruction>=0 
       && nhidden_schedule_current_position < nhidden_schedule.length())
    {
        invars.push_back(unsupervised_target);
        testinvars.push_back(unsupervised_target);
        if(neighbor_indices)
        {
            invars.push_back(neighbor_indices);
            testinvars.push_back(neighbor_indices);
        }
    }
    if (the_output)
        outvars.push_back(the_output);
    if(the_target)
    {
        invars.push_back(the_target);
        testinvars.push_back(the_target);
        outvars.push_back(the_target);
    }
    if(the_sampleweight)
    {
        invars.push_back(the_sampleweight);
    }
    f = Func(the_input, the_output);
    test_costf = Func(testinvars, the_output&test_costs);
    test_costf->recomputeParents();
    output_and_target_to_cost = Func(outvars, test_costs); 
    output_and_target_to_cost->recomputeParents();

    // To be used later, in the fine-tuning phase
    if(autoassociator_regularisation_weight>0 
       && nhidden_schedule_current_position < nhidden_schedule.length())
    {
        autoassociator_training_costs[nhidden_schedule_current_position] = 
            training_cost;
        autoassociator_params[nhidden_schedule_current_position].resize(
            params_to_train.length());
        for(int i=0; i<params_to_train.length(); i++)
            autoassociator_params[nhidden_schedule_current_position][i] = 
                params_to_train[i];
    }
    to_feature_vector = Func(input,feature_vector);
}


void PLearn::DeepFeatureExtractorNNet::buildOutputFromInput ( const Var & the_input,
Var & hidden_layer,
Var & before_transfer_func 
) [protected]

Build the output of the neural network, from the given input.

The hidden layer is also made available in the 'hidden_layer' parameter. The output before the transfer function is applied is also made available in the 'before_transfer_func' parameter.

void PLearn::DeepFeatureExtractorNNet::buildPenalties ( ) [protected, virtual]

Fill the costs penalties.

Definition at line 1347 of file DeepFeatureExtractorNNet.cc.

References PLearn::affine_transform_weight_penalty(), PLearn::TVec< T >::append(), bias_decay, biases, i, PLearn::TVec< T >::length(), penalties, penalty_type, reconstruction_weights, PLearn::TVec< T >::resize(), use_same_input_and_output_weights, weight_decay, and weights.

Referenced by buildCosts().

                                              {
    // Prevents penalties from being added twice by consecutive builds
    penalties.resize(0);  
    if(weight_decay > 0 || bias_decay > 0)
    {
        for(int i=0; i<weights.length(); i++)
        {
            // If using same input and output weights,
            // then the weights do not include the bias!
            penalties.append(affine_transform_weight_penalty(
                                 weights[i], weight_decay, 
                                 use_same_input_and_output_weights ? 
                                 weight_decay : bias_decay, 
                                 penalty_type));
        }
        
        if(bias_decay > 0)
            for(int i=0; i<biases.length(); i++)
            {
                penalties.append(affine_transform_weight_penalty(
                                     biases[i], bias_decay, 
                                     bias_decay, 
                                     penalty_type));
            }


        for(int i=0; i<reconstruction_weights.length(); i++)
        {
            penalties.append(affine_transform_weight_penalty(
                                 reconstruction_weights[i], 
                                 weight_decay, bias_decay, penalty_type));
        }                
    }
}
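
A hedged reading of the penalty terms built above, for a single weight matrix W and penalty_type "L2_square" (it assumes that affine_transform_weight_penalty treats the first row of its matrix argument as the bias row, as the comment about tied weights suggests):

\[
\mathrm{penalty}(W) \;=\; \texttt{weight\_decay}\sum_{i\ge 1,\,j} W_{ij}^{2} \;+\; \texttt{bias\_decay}\sum_{j} W_{0j}^{2}
\]

For "L1" the squared entries are replaced by absolute values. When use_same_input_and_output_weights is true, the matrices in 'weights' carry no bias row, which is why both decay arguments are then set to weight_decay and the separate 'biases' are penalized with bias_decay.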


void PLearn::DeepFeatureExtractorNNet::buildTargetAndWeight ( ) [protected]

Builds the target and sampleweight variables.

Definition at line 979 of file DeepFeatureExtractorNNet.cc.

References PLERROR, sampleweight, target, PLearn::PLearner::targetsize(), and PLearn::PLearner::weightsize_.

Referenced by build_().

                                                    {
    if(targetsize() > 0)
    {        
        target = Var(targetsize(), "target");
        if(weightsize_>0)
        {
            if (weightsize_!=1)
                PLERROR("In NNet::buildTargetAndWeight - Expected weightsize to "
                        "be 1 or 0 (or unspecified = -1, meaning 0), got %d",
                        weightsize_);
            sampleweight = Var(1, "weight");
        }
    }
}


string PLearn::DeepFeatureExtractorNNet::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

Referenced by train().


void PLearn::DeepFeatureExtractorNNet::computeCostsFromOutputs ( const Vec & input,
const Vec & output,
const Vec & target,
Vec & costs 
) const [virtual]

Computes the costs from already computed output.

Implements PLearn::PLearner.

Definition at line 944 of file DeepFeatureExtractorNNet.cc.

References PLearn::TVec< T >::contains(), cost_funcs, output_and_target_to_cost, and PLERROR.

{
#ifdef BOUNDCHECK
    // Stable cross entropy needs the value *before* the transfer function.
    if (cost_funcs.contains("stable_cross_entropy"))
        PLERROR("In NNet::computeCostsFromOutputs - Cannot directly compute stable "
                "cross entropy from output and target");
#endif
    output_and_target_to_cost->fprop(output&target, costs); 
}


void PLearn::DeepFeatureExtractorNNet::computeOutput ( const Vec & input,
Vec & output 
) const [virtual]

Computes the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 938 of file DeepFeatureExtractorNNet.cc.

References f, outputsize(), and PLearn::TVec< T >::resize().

{
    output.resize(outputsize());
    f->fprop(input,output);
}    


void PLearn::DeepFeatureExtractorNNet::computeOutputAndCosts ( const Vec & input,
const Vec & target,
Vec & output,
Vec & costs 
) const [virtual]

Default calls computeOutput and computeCostsFromOutputs.

You may override this if you have a more efficient way to compute both output and weighted costs at the same time.

Reimplemented from PLearn::PLearner.

Definition at line 957 of file DeepFeatureExtractorNNet.cc.

References outputsize(), PLearn::TVec< T >::resize(), and test_costf.

{
    outputv.resize(outputsize());
    test_costf->fprop(inputv&targetv, outputv&costsv);
}


void PLearn::DeepFeatureExtractorNNet::declareOptions ( OptionList & ol ) [static, protected]

Declares the class options.

Reimplemented from PLearn::PLearner.

Definition at line 121 of file DeepFeatureExtractorNNet.cc.

References always_reconstruct_input, autoassociator_regularisation_weight, batch_size, batch_size_supervised, bias_decay, PLearn::OptionBase::buildoption, classification_regularizer, cost_funcs, PLearn::declareOption(), PLearn::PLearner::declareOptions(), initialization_method, input_reconstruction_error, k_nearest_neighbors_reconstruction, PLearn::OptionBase::learntoption, margin, nhidden_schedule, nhidden_schedule_current_position, nhidden_schedule_position, noutputs, optimizer, optimizer_supervised, output_transfer_func, paramsvalues, penalty_type, regularizer, relative_minimum_improvement, supervised_signal_weight, use_activations_with_cubed_input, use_n_first_as_supervised, use_only_supervised_part, use_same_input_and_output_weights, and weight_decay.

{
    declareOption(ol, "nhidden_schedule", 
                  &DeepFeatureExtractorNNet::nhidden_schedule, 
                  OptionBase::buildoption,
                  "Number of hidden units of each hidden layers to add");
    
    declareOption(ol, "optimizer", &DeepFeatureExtractorNNet::optimizer, 
                  OptionBase::buildoption,
                  "Optimizer of the neural network");

    declareOption(ol, "optimizer_supervised", 
                  &DeepFeatureExtractorNNet::optimizer_supervised, 
                  OptionBase::buildoption,
                  "Optimizer of the supervised phase of the neural network.\n"
                  "If not specified, then the same optimizer will always be\n"
                  "used.\n");

    declareOption(ol, "batch_size", &DeepFeatureExtractorNNet::batch_size, 
                  OptionBase::buildoption, 
                  "How many samples to use to estimate the avergage gradient\n"
                  "before updating the weights\n"
                  "0 is equivalent to specifying training_set->length() \n");
    
    declareOption(ol, "batch_size_supervised", &DeepFeatureExtractorNNet::batch_size_supervised, 
                  OptionBase::buildoption, 
                  "How many samples to use to estimate the avergage gradient\n"
                  "before updating the weights, for the supervised phase.\n"
                  "0 is equivalent to specifying training_set->length() \n");
    
    declareOption(ol, "output_transfer_func", 
                  &DeepFeatureExtractorNNet::output_transfer_func, 
                  OptionBase::buildoption,
                  "Output transfer function, when all hidden layers are \n"
                  "added. Choose among:\n"
                  "  - \"tanh\" \n"
                  "  - \"sigmoid\" \n"
                  "  - \"exp\" \n"
                  "  - \"softplus\" \n"
                  "  - \"softmax\" \n"
                  "  - \"log_softmax\" \n"
                  "  - \"interval(<minval>,<maxval>)\", which stands for\n"
                  "          <minval>+(<maxval>-<minval>)*sigmoid(.).\n"
                  "An empty string or \"none\" means no output \n"
                  "transfer function \n");
    
    declareOption(ol, "nhidden_schedule_position", 
                  &DeepFeatureExtractorNNet::nhidden_schedule_position, 
                  OptionBase::buildoption,
                  "Index of the layer(s) that will be trained at the next\n"
                  "call of train. Should be bigger then the last\n"
                  "nhidden_schedule_position, which is initialy -1. \n"
                  "Then, all the layers up to nhidden_schedule_position that\n"
                  "were not trained so far will be. Also, when\n"
                  "nhidden_schedule_position is greater than or equal\n"
                  "to the size of nhidden_schedule, then the output layer is also\n"
                  "added.");
    
    declareOption(ol, "nhidden_schedule_current_position", 
                  &DeepFeatureExtractorNNet::nhidden_schedule_current_position, 
                  OptionBase::learntoption,
                  "Index of the layer that is being trained at the current state");

    declareOption(ol, "cost_funcs", &DeepFeatureExtractorNNet::cost_funcs, 
                  OptionBase::buildoption, 
                  "A list of cost functions to use\n"
                  "in the form \"[ cf1; cf2; cf3; ... ]\"\n"
                  "where each function is one of: \n"
                  "  - \"mse\" (for regression)\n"
                  "  - \"mse_onehot\" (for classification)\n"
                  "  - \"NLL\" (negative log likelihood -log(p[c])\n"
                  "             for classification) \n"
                  "  - \"class_error\" (classification error) \n"
                  "  - \"binary_class_error\" (classification error for a\n"
                  "                            0-1 binary classifier)\n"
                  "  - \"multiclass_error\" \n"
                  "  - \"cross_entropy\" (for binary classification)\n"
                  "  - \"stable_cross_entropy\" (more accurate backprop and\n"
                  "                              possible regularization, for\n"
                  "                              binary classification)\n"
                  "  - \"margin_perceptron_cost\" (a hard version of the \n"
                  "                                cross_entropy, uses the\n"
                  "                                'margin' option)\n"
                  "  - \"lift_output\" (not a real cost function, just the\n"
                  "                     output for lift computation)\n"
                  "The FIRST function of the list will be used as \n"
                  "the objective function to optimize \n"
                  "(possibly with an added weight decay penalty) \n");
    
    declareOption(ol, "weight_decay", 
                  &DeepFeatureExtractorNNet::weight_decay, OptionBase::buildoption, 
                  "Global weight decay for all layers\n");

    declareOption(ol, "bias_decay", &DeepFeatureExtractorNNet::bias_decay, 
                  OptionBase::buildoption, 
                  "Global bias decay for all layers\n");
    
    declareOption(ol, "penalty_type", &DeepFeatureExtractorNNet::penalty_type,
                  OptionBase::buildoption,
                  "Penalty to use on the weights (for weight and bias decay).\n"
                  "Can be any of:\n"
                  "  - \"L1\": L1 norm,\n"
                  //"  - \"L1_square\": square of the L1 norm,\n"
                  "  - \"L2_square\" (default): square of the L2 norm.\n");
    
    declareOption(ol, "classification_regularizer", 
                  &DeepFeatureExtractorNNet::classification_regularizer, 
                  OptionBase::buildoption, 
                  "Used only in the stable_cross_entropy cost function, to fight overfitting (0<=r<1)\n");  
    
    declareOption(ol, "regularizer", &DeepFeatureExtractorNNet::regularizer, 
                  OptionBase::buildoption, 
                  "Used in the stable_cross_entropy cost function for the hidden activations, in the unsupervised stages (0<=r<1)\n");  
    
    declareOption(ol, "margin", &DeepFeatureExtractorNNet::margin, 
                  OptionBase::buildoption, 
                  "Margin requirement, used only with the \n"
                  "margin_perceptron_cost cost function.\n"
                  "It should be positive, and larger values regularize more.\n");
    
    declareOption(ol, "initialization_method", 
                  &DeepFeatureExtractorNNet::initialization_method, 
                  OptionBase::buildoption, 
                  "The method used to initialize the weights:\n"
                  " - \"normal_linear\"  = a normal law with variance 1/n_inputs\n"
                  " - \"normal_sqrt\"    = a normal law with variance"
                  "1/sqrt(n_inputs)\n"
                  " - \"uniform_linear\" = a uniform law in [-1/n_inputs, "
                  "1/n_inputs]\n"
                  " - \"uniform_sqrt\"   = a uniform law in [-1/sqrt(n_inputs), "
                  "1/sqrt(n_inputs)]\n"
                  " - \"zero\"           = all weights are set to 0\n");
    
    declareOption(ol, "paramsvalues", &DeepFeatureExtractorNNet::paramsvalues, 
                  OptionBase::learntoption, 
                  "The learned parameter vector\n");
    declareOption(ol, "noutputs", &DeepFeatureExtractorNNet::noutputs, 
                  OptionBase::buildoption, 
                  "Number of output units. This gives this learner \n"
                  "its outputsize. It is typically of the same dimensionality\n"
                  "as the target for regression problems\n"
                  "But for classification problems where target is just\n"
                  "the class number, noutputs is usually of dimensionality \n"
                  "number of classes (as we want to output a score or\n"
                  "probability vector, one per class)\n");    

    declareOption(ol, "use_same_input_and_output_weights", 
                  &DeepFeatureExtractorNNet::use_same_input_and_output_weights, 
                  OptionBase::buildoption, 
                  "Use the same weights for the input and output weights for\n"
                  "the autoassociators.");  

    declareOption(ol, "always_reconstruct_input", 
                  &DeepFeatureExtractorNNet::always_reconstruct_input, 
                  OptionBase::buildoption, 
                  "Always use the reconstruction cost of the input, not of\n"
                  "the last layer. This option should be used if\n"
                  "use_same_input_and_output_weights is true.");  

    declareOption(ol, "use_activations_with_cubed_input", 
                  &DeepFeatureExtractorNNet::use_activations_with_cubed_input, 
                  OptionBase::buildoption, 
                  "Use the cubed value of the input of the activation functions\n"
                  "(not used for reconstruction/auto-associator layers and\n"
                  " output layer).\n");

    declareOption(ol, "use_n_first_as_supervised", 
                  &DeepFeatureExtractorNNet::use_n_first_as_supervised, 
                  OptionBase::buildoption, 
                  "To simulate semi-supervised learning.");

    declareOption(ol, "use_only_supervised_part", 
                  &DeepFeatureExtractorNNet::use_only_supervised_part, 
                  OptionBase::buildoption, 
                  "Indication that only the supervised part should be\n"
                  "used, throughout the whole training, when simulating\n"
                  "semi-supervised learning.");

    declareOption(ol, "relative_minimum_improvement", 
                  &DeepFeatureExtractorNNet::relative_minimum_improvement,
                  OptionBase::buildoption, 
                  "Threshold on training set error relative improvement,\n"
                  "before adding a new layer. If < 0, then the addition\n"
                  "of layers must be done by the user." );

    declareOption(ol, "autoassociator_regularisation_weight", 
                  &DeepFeatureExtractorNNet::autoassociator_regularisation_weight,
                  OptionBase::buildoption, 
                  "Weight of autoassociator regularisation terms\n"
                  "in the fine-tuning phase.\n"
                  "If it is equal to 0,\n"
                  "then the unsupervised signal is ignored.\n");

     declareOption(ol, "input_reconstruction_error", 
                  &DeepFeatureExtractorNNet::input_reconstruction_error,
                  OptionBase::buildoption, 
                   "Input reconstruction error. The reconstruction error\n"
                   "of the hidden layers will always be \"cross_entropy\"."
                   "Choose among:\n"
                   "  - \"cross_entropy\" (default)\n"
                   "  - \"mse\" \n");

     declareOption(ol, "supervised_signal_weight", 
                  &DeepFeatureExtractorNNet::supervised_signal_weight,
                  OptionBase::buildoption, 
                   "Weight of supervised signal used in addition\n"
                  "to unsupervised signal in greedy phase.\n"
                  "This weights should be in [0,1]. If it is equal\n"
                  "to 0, then the supervised signal is ignored.\n"
                  "If it is equal to 1, then the unsupervised signal\n"
                  "is ignored.\n");

     declareOption(ol, "k_nearest_neighbors_reconstruction", 
                  &DeepFeatureExtractorNNet::k_nearest_neighbors_reconstruction,
                  OptionBase::buildoption, 
                   "Number of nearest neighbors to reconstruct in greedy phase.");
    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}


static const PPath& PLearn::DeepFeatureExtractorNNet::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 204 of file DeepFeatureExtractorNNet.h.

DeepFeatureExtractorNNet * PLearn::DeepFeatureExtractorNNet::deepCopy ( CopiesMap copies) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

void PLearn::DeepFeatureExtractorNNet::fillWeights ( const Var weights,
bool  fill_first_row,
real  fill_with_this = 0 
) [protected]

Fill a matrix of weights according to the 'initialization_method' specified.

The 'fill_first_row' boolean indicates whether the first row should be filled with the value of 'fill_with_this' (0 by default).

Definition at line 1382 of file DeepFeatureExtractorNNet.cc.

References initialization_method, PLearn::Var::length(), PLearn::PLearner::random_gen, and PLearn::sqrt().

Referenced by build_().

                                                                {
    if (initialization_method == "zero") {
        weights->value->clear();
        return;
    }
    real delta;
    int is = weights.length();
    if (fill_first_row)
        is--; // -1 to get the same result as before.
    if (initialization_method.find("linear") != string::npos)
        delta = 1.0 / real(is);
    else
        delta = 1.0 / sqrt(real(is));
    if (initialization_method.find("normal") != string::npos)
        random_gen->fill_random_normal(weights->value, 0, delta);
    else
        random_gen->fill_random_uniform(weights->value, -delta, delta);
    if (fill_first_row)
        weights->matValue(0).fill(fill_with_this);
}


void PLearn::DeepFeatureExtractorNNet::forget ( ) [virtual]

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).

Reimplemented from PLearn::PLearner.

Definition at line 718 of file DeepFeatureExtractorNNet.cc.

References build(), nhidden_schedule_current_position, optimizer, optimizer_supervised, params, PLearn::TVec< T >::resize(), PLearn::PLearner::stage, and weights.

{
    if(optimizer)
        optimizer->reset();
    if(optimizer_supervised)
        optimizer_supervised->reset();
    stage = 0;
    
    params.resize(0);
    weights.resize(0);
    nhidden_schedule_current_position = -1;
    build();
}


OptionList & PLearn::DeepFeatureExtractorNNet::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

OptionMap & PLearn::DeepFeatureExtractorNNet::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

RemoteMethodMap & PLearn::DeepFeatureExtractorNNet::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 91 of file DeepFeatureExtractorNNet.cc.

TVec< string > PLearn::DeepFeatureExtractorNNet::getTestCostNames ( ) const [virtual]

Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).

Implements PLearn::PLearner.

Definition at line 966 of file DeepFeatureExtractorNNet.cc.

References PLearn::TVec< T >::copy(), cost_funcs, and PLearn::TVec< T >::push_back().

Referenced by getTrainCostNames().

{
    TVec<string> costs_str = cost_funcs.copy();
    costs_str.push_back("reconstruction_error");
    costs_str.push_back("nhidden_schedule_current_position");
    return costs_str;
}
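For instance, assuming the (hypothetical) setting cost_funcs = [ "NLL", "class_error" ], the returned list can be reproduced with plain STL containers:

#include <iostream>
#include <string>
#include <vector>

int main()
{
    // Hypothetical configuration; only the two appended names come from the class.
    std::vector<std::string> cost_funcs = { "NLL", "class_error" };
    std::vector<std::string> costs_str(cost_funcs);   // copy, like cost_funcs.copy()
    costs_str.push_back("reconstruction_error");
    costs_str.push_back("nhidden_schedule_current_position");
    for (const std::string& name : costs_str)
        std::cout << name << std::endl;
    // Prints: NLL, class_error, reconstruction_error,
    //         nhidden_schedule_current_position
    return 0;
}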


TVec< string > PLearn::DeepFeatureExtractorNNet::getTrainCostNames ( ) const [virtual]

Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 974 of file DeepFeatureExtractorNNet.cc.

References getTestCostNames().

{
    return getTestCostNames();
}


Var PLearn::DeepFeatureExtractorNNet::hiddenLayer ( const Var input,
const Var weights,
const Var bias,
bool  transpose_weights,
string  transfer_func,
Var before_transfer_function,
bool  use_cubed_value = false 
) [protected]

Return a variable that is the hidden layer corresponding to given input and weights.

If the 'default' transfer_func is used, we use the hidden_transfer_func option.

Definition at line 1311 of file DeepFeatureExtractorNNet.cc.

References PLearn::bias_weight_affine_transform(), PLearn::exp(), PLearn::log_softmax(), PLERROR, PLearn::pow(), PLearn::sigmoid(), PLearn::softmax(), PLearn::softplus(), PLearn::tanh(), and PLearn::unary_hard_slope().

                                                                {
    Var hidden = bias_weight_affine_transform(input, weights, 
                                              bias,transpose_weights); 
    if(use_cubed_value)
        hidden = pow(hidden,3);    
    before_transfer_function = hidden;
    Var result;
    if(transfer_func=="linear")
        result = hidden;
    else if(transfer_func=="tanh")
        result = tanh(hidden);
    else if(transfer_func=="sigmoid")
        result = sigmoid(hidden);
    else if(transfer_func=="softplus")
        result = softplus(hidden);
    else if(transfer_func=="exp")
        result = exp(hidden);
    else if(transfer_func=="softmax")
        result = softmax(hidden);
    else if (transfer_func == "log_softmax")
        result = log_softmax(hidden);
    else if(transfer_func=="hard_slope")
        result = unary_hard_slope(hidden,0,1);
    else if(transfer_func=="symm_hard_slope")
        result = unary_hard_slope(hidden,-1,1);
    else
        PLERROR("In DeepFeatureExtractorNNet::hiddenLayer - "
                "Unknown value for transfer_func: %s",transfer_func.c_str());
    return result;
}


Var PLearn::DeepFeatureExtractorNNet::hiddenLayer ( const Var input,
const Var weights,
string  transfer_func,
Var before_transfer_function,
bool  use_cubed_value = false 
) [protected]

Return a variable that is the hidden layer corresponding to given input and weights.

If the 'default' transfer_func is used, we use the hidden_transfer_func option.

Definition at line 1278 of file DeepFeatureExtractorNNet.cc.

References PLearn::affine_transform(), PLearn::exp(), PLearn::log_softmax(), PLERROR, PLearn::pow(), PLearn::sigmoid(), PLearn::softmax(), PLearn::softplus(), PLearn::tanh(), and PLearn::unary_hard_slope().

Referenced by build_().

                                                                {
    Var hidden = affine_transform(input, weights); 
    if(use_cubed_value)
        hidden = pow(hidden,3);    
    before_transfer_function = hidden;
    Var result;
    if(transfer_func=="linear")
        result = hidden;
    else if(transfer_func=="tanh")
        result = tanh(hidden);
    else if(transfer_func=="sigmoid")
        result = sigmoid(hidden);
    else if(transfer_func=="softplus")
        result = softplus(hidden);
    else if(transfer_func=="exp")
        result = exp(hidden);
    else if(transfer_func=="softmax")
        result = softmax(hidden);
    else if (transfer_func == "log_softmax")
        result = log_softmax(hidden);
    else if(transfer_func=="hard_slope")
        result = unary_hard_slope(hidden,0,1);
    else if(transfer_func=="symm_hard_slope")
        result = unary_hard_slope(hidden,-1,1);
    else
        PLERROR("In DeepFeatureExtractorNNet::hiddenLayer - "
                "Unknown value for transfer_func: %s",transfer_func.c_str());
    return result;
}
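Put differently, each overload applies the chosen transfer function to the (optionally cubed) affine pre-activation. A minimal scalar sketch of a single unit, with tanh standing in for the configured transfer_func (the names here are hypothetical, not PLearn code):

#include <cmath>

double hidden_unit(double pre_activation, bool use_cubed_value)
{
    // Cube the pre-activation if requested, then apply the transfer function.
    double a = use_cubed_value ? std::pow(pre_activation, 3) : pre_activation;
    return std::tanh(a);
}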


void PLearn::DeepFeatureExtractorNNet::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 662 of file DeepFeatureExtractorNNet.cc.

References autoassociator_params, autoassociator_training_costs, biases, cost_funcs, costs, PLearn::deepCopyField(), f, feature_vector, hidden_representation, input, invars, knn_train_set, PLearn::PLearner::makeDeepCopyFromShallowCopy(), neighbor_indices, nhidden_schedule, optimizer, optimizer_supervised, output, output_and_target_to_cost, params, params_to_train, paramsvalues, penalties, reconstruction_weights, sampleweight, sup_train_set, target, test_costf, test_costs, to_feature_vector, training_cost, unsup_train_set, unsupervised_target, PLearn::varDeepCopyField(), and weights.


int PLearn::DeepFeatureExtractorNNet::outputsize ( ) const [virtual]

Returns the size of this learner's output, (which typically may depend on its inputsize(), targetsize() and set options).

Implements PLearn::PLearner.

Definition at line 710 of file DeepFeatureExtractorNNet.cc.

References output.

Referenced by computeOutput(), and computeOutputAndCosts().

{
    if(output)
        return output->size();
    else
        return 0;
}


void PLearn::DeepFeatureExtractorNNet::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.

Implements PLearn::PLearner.

Definition at line 732 of file DeepFeatureExtractorNNet.cc.

References autoassociator_params, autoassociator_regularisation_weight, autoassociator_training_costs, batch_size, batch_size_supervised, build(), classname(), PLearn::endl(), f, PLearn::hconcat(), invars, PLearn::PP< T >::isNull(), k_nearest_neighbors_reconstruction, knn_train_set, PLearn::VMat::length(), PLearn::TVec< T >::length(), PLearn::meanOf(), nhidden_schedule, nhidden_schedule_current_position, nhidden_schedule_position, PLearn::PLearner::nstages, optimizer, optimizer_supervised, output_and_target_to_cost, params_to_train, PLERROR, relative_minimum_improvement, PLearn::PLearner::report_progress, PLearn::PLearner::stage, sup_train_set, supervised_signal_weight, test_costf, to_feature_vector, PLearn::tostring(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, training_cost, unsup_train_set, and PLearn::PLearner::verbosity.

{
    if(!train_set)
        PLERROR("In DeepFeatureExtractor::train, you did not setTrainingSet");
    
    if(!train_stats)
        PLERROR("In DeepFeatureExtractor::train, you did not setTrainStatsCollector");

    // k nearest neighbors prediction
    if(k_nearest_neighbors_reconstruction>=0 
       && nhidden_schedule_current_position < nhidden_schedule.length())
    {
        if(relative_minimum_improvement <= 0)
            PLERROR("In DeepFeatureExtractorNNEt::build_(): "
                    "relative_minimum_improvement need to be > 0 when "
                    "using nearest neighbors reconstruction");
        if(nhidden_schedule_current_position==0) 
        {
            // Compute nearest neighbors in input space
            if(verbosity > 2) cout << "Computing nearest neighbors" << endl;
            knn_train_set = new AppendNeighborsVMatrix();
            knn_train_set->source = train_set;
            knn_train_set->n_neighbors = k_nearest_neighbors_reconstruction;
            knn_train_set->append_neighbor_indices = false;
            knn_train_set->build();
            unsup_train_set = (VMatrix*) knn_train_set;
            if(verbosity > 2) cout << "Done" << endl;

            // Append input
            unsup_train_set = hconcat(
                new GetInputVMatrix(train_set),unsup_train_set);
            unsup_train_set->defineSizes(train_set->inputsize()*
                                         (k_nearest_neighbors_reconstruction+2),
                                         train_set->targetsize(),
                                         train_set->weightsize()); 
        }
        else
        {
            // Compute nearest neighbors in feature (hidden layer) space
            if(verbosity > 2) cout << "Computing nearest neighbors and performing transformation to hidden representation" << endl;
            knn_train_set->transformation =  to_feature_vector;
            knn_train_set->defineSizes(-1,-1,-1);
            knn_train_set->build();
            unsup_train_set = (VMatrix *)knn_train_set;
            if(verbosity > 2) cout << "Done" << endl;

            int feat_size = to_feature_vector->outputsize;
            // Append input
            unsup_train_set = hconcat(
                new GetInputVMatrix(train_set),unsup_train_set);
            unsup_train_set->defineSizes(
                train_set->inputsize()
                +feat_size*(k_nearest_neighbors_reconstruction+1),
                train_set->targetsize(),train_set->weightsize());            
        }

    }
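    // Worked example (hypothetical sizes): with train_set->inputsize() = 10 and
    // k_nearest_neighbors_reconstruction = 3, the first greedy stage sets an
    // unsupervised input width of 10 * (3 + 2) = 50 (the example itself plus its
    // 3 neighbors from AppendNeighborsVMatrix, plus the copy prepended by
    // GetInputVMatrix); later stages use inputsize + feat_size * (3 + 1) instead.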


    int l;
    if(sup_train_set && 
       (supervised_signal_weight == 1
        || nhidden_schedule_current_position >= nhidden_schedule.length()))
        l = sup_train_set->length();  
    else
        if(unsup_train_set 
           && nhidden_schedule_current_position < nhidden_schedule.length())
            l = unsup_train_set->length();  
        else
            l = train_set->length();

    // Net has not been properly built yet 
    // (because build was called before the learner had a proper training set)
    if(f.isNull()) 
        build();

    // Update the DeepFeatureExtractor structure if necessary
    if(nhidden_schedule_current_position < nhidden_schedule_position)
        build();

    // Number of samples seen by optimizer before each optimizer update
    int nsamples;
    if(supervised_signal_weight == 1
       || nhidden_schedule_current_position >= nhidden_schedule.length())
        nsamples = batch_size_supervised>0 ? batch_size_supervised : l;        
    else
        nsamples = batch_size>0 ? batch_size : l;


    // Parameterized function to optimize
    Func paramf = Func(invars, training_cost); 
    Var totalcost;
    
    if(sup_train_set 
       && (supervised_signal_weight == 1
           || nhidden_schedule_current_position >= nhidden_schedule.length()))
        totalcost = meanOf(sup_train_set,paramf,nsamples);
    else
        if(unsup_train_set 
           && nhidden_schedule_current_position < nhidden_schedule.length())
            totalcost = meanOf(unsup_train_set, paramf, nsamples);
        else            
            totalcost = meanOf(train_set, paramf, nsamples);

    PP<Optimizer> this_optimizer;

    if(optimizer_supervised 
       && nhidden_schedule_current_position >= nhidden_schedule.length())
    {
        if(nhidden_schedule_current_position == nhidden_schedule.length()+1
           && autoassociator_regularisation_weight>0)
        {            
            optimizer_supervised->setToOptimize(
                params_to_train, totalcost, autoassociator_training_costs, 
                autoassociator_params, 
                autoassociator_regularisation_weight);
        }
        else
            optimizer_supervised->setToOptimize(params_to_train, totalcost);
        optimizer_supervised->build();
        this_optimizer = optimizer_supervised;
    }
    else if(optimizer)
    {
        if(nhidden_schedule_current_position == nhidden_schedule.length()+1
           && autoassociator_regularisation_weight>0)
            optimizer->setToOptimize(
                params_to_train, totalcost, autoassociator_training_costs, 
                autoassociator_params, autoassociator_regularisation_weight);
        else
            optimizer->setToOptimize(params_to_train, totalcost);

        optimizer->build();
        this_optimizer = optimizer;
    }
    else PLERROR("DeepFeatureExtractor::train can't train without setting "
                 "an optimizer first!");

    // Number of optimizer stages corresponding to one learner stage (one epoch)
    int optstage_per_lstage = l/nsamples;
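    // Example (hypothetical numbers): with l = 1000 samples and batch_size = 20,
    // nsamples = 20 and optstage_per_lstage = 1000 / 20 = 50, so one learner
    // stage (one epoch) corresponds to 50 optimizer updates.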

    PP<ProgressBar> pb;
    if(report_progress)
        pb = new ProgressBar("Training " + classname() + " from stage " 
                             + tostring(stage) + " to " + tostring(nstages), 
                             nstages-stage);

    //displayFunction(paramf, true, false, 250);
    //cout << params_to_train.size() << " params to train" << endl;
    //cout << params.size() << " params" << endl;
    int initial_stage = stage;
    real last_error = REAL_MAX;
    real this_error = 0;
    Vec stats;
    bool flag = (relative_minimum_improvement >= 0 
                 && nhidden_schedule_current_position <= nhidden_schedule.length());

    if(verbosity>2) cout << "Training layer " 
                         << nhidden_schedule_current_position+1 << endl;

    while((stage<nstages || flag))
    {
        this_optimizer->nstages = optstage_per_lstage;
        train_stats->forget();
        this_optimizer->early_stop = false;
        this_optimizer->optimizeN(*train_stats);
        // Uncomment the following if you want to check your new Var.
        // optimizer->verifyGradient(1e-4); 
        train_stats->finalize();
        stats = train_stats->getMean();
        if(verbosity>2)
        {
            if(flag)
                cout << "Initialization epoch, reconstruction train objective: " 
                     << stats << endl;
            else
                cout << "Epoch " << stage << " train objective: " << stats << endl;
        }
        if(pb)
            pb->update(stage-initial_stage);

        this_error = stats[stats.length()-2];
        if(flag 
           && last_error - this_error < relative_minimum_improvement * last_error) 
            break;
        if(!flag) ++stage;
        last_error = this_error;
    }
    if(verbosity>1)
        cout << "EPOCH " << stage << " train objective: " 
             << train_stats->getMean() << endl;

    output_and_target_to_cost->recomputeParents();
    test_costf->recomputeParents();
    
    if(relative_minimum_improvement >= 0 
       && nhidden_schedule_current_position <= nhidden_schedule.length())
    {
        nhidden_schedule_position++;
        totalcost = 0;
        build();
        train();
    }
}
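When relative_minimum_improvement is negative, the recursive call above is skipped and layers must be added by the user, one call to train() per phase. A rough driver sketch under that assumption; the surrounding setup calls follow usual PLearn conventions rather than anything shown in this class, and epochs_per_phase and train_vmat are hypothetical:

// Hypothetical driver loop for manual layer addition
// (relative_minimum_improvement < 0).
PP<DeepFeatureExtractorNNet> learner = new DeepFeatureExtractorNNet();
// ... set nhidden_schedule, optimizer, cost_funcs, etc. ...
learner->setTrainingSet(train_vmat);                  // train_vmat: some VMat
learner->setTrainStatsCollector(new VecStatsCollector());

int epochs_per_phase = 10;                            // assumed schedule
// One phase per hidden layer, plus the output layer, plus fine-tuning.
int n_phases = learner->nhidden_schedule.length() + 2;
for (int phase = 0; phase < n_phases; phase++)
{
    learner->nhidden_schedule_position = phase;       // layer trained by the next train()
    learner->nstages += epochs_per_phase;             // advance the stage target
    learner->build();
    learner->train();
}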



Member Data Documentation

Reimplemented from PLearn::PLearner.

Definition at line 204 of file DeepFeatureExtractorNNet.h.

Always use the reconstruction cost of the input, not of the last layer.

This option should be used if use_same_input_and_output_weights is true.

Definition at line 115 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), and declareOptions().

Different training_costs used for autoassociator regularisation.

Definition at line 274 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), makeDeepCopyFromShallowCopy(), and train().

Weight of autoassociator regularisation terms in the fine-tuning phase.

If it is equal to 0, then the unsupervised signal is ignored.

Definition at line 138 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), declareOptions(), and train().

Different training_costs used for autoassociator regularisation.

Definition at line 277 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), makeDeepCopyFromShallowCopy(), and train().

Batch size.

Definition at line 77 of file DeepFeatureExtractorNNet.h.

Referenced by declareOptions(), and train().

Batch size used for supervised phase.

Definition at line 79 of file DeepFeatureExtractorNNet.h.

Referenced by declareOptions(), and train().

Bias decay for all biases.

Definition at line 91 of file DeepFeatureExtractorNNet.h.

Referenced by buildPenalties(), and declareOptions().

Biases.

Definition at line 230 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildPenalties(), and makeDeepCopyFromShallowCopy().

Used only in the stable_cross_entropy cost function, to fight overfitting (0<=r<1)

Definition at line 96 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), and declareOptions().

Cost function for the supervised phase.

Definition at line 87 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), computeCostsFromOutputs(), declareOptions(), getTestCostNames(), and makeDeepCopyFromShallowCopy().

Costs variables.

Definition at line 250 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), and makeDeepCopyFromShallowCopy().

Function: input -> output.

Definition at line 265 of file DeepFeatureExtractorNNet.h.

Referenced by buildFuncs(), computeOutput(), makeDeepCopyFromShallowCopy(), and train().

Feature vector output.

Definition at line 238 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), and makeDeepCopyFromShallowCopy().

Hidden representation variable.

Definition at line 240 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Transfer function for the hidden nodes.

Definition at line 81 of file DeepFeatureExtractorNNet.h.

The method used to initialize the weights.

Definition at line 104 of file DeepFeatureExtractorNNet.h.

Referenced by declareOptions(), and fillWeights().

Input variable.

Definition at line 234 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), and makeDeepCopyFromShallowCopy().

Input reconstruction error.

The reconstruction error of the hidden layers will always be "cross_entropy". Choose among:

  • "cross_entropy" (default, stable version)
  • "mse"

Definition at line 133 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), and declareOptions().

Input variables.

Definition at line 232 of file DeepFeatureExtractorNNet.h.

Referenced by buildFuncs(), makeDeepCopyFromShallowCopy(), and train().

Number of nearest neighbors to reconstruct in greedy phase.

Definition at line 147 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), buildFuncs(), declareOptions(), and train().

Unsupervised data when using nearest neighbors.

Definition at line 262 of file DeepFeatureExtractorNNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and train().

Margin requirement, used only with the margin_perceptron_cost cost function.

Definition at line 102 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), and declareOptions().

Neighbor indices.

Definition at line 242 of file DeepFeatureExtractorNNet.h.

Referenced by buildFuncs(), and makeDeepCopyFromShallowCopy().

Number of hidden units of each hidden layers to add.

Definition at line 69 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), buildFuncs(), declareOptions(), makeDeepCopyFromShallowCopy(), and train().

Index of the hidden layer that was added last.

When it is equal to nhidden_schedule.length(), only the output layer is currently being trained. When it is equal to nhidden_schedule.length()+1, the whole network is being fine-tuned.

Definition at line 220 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), buildFuncs(), declareOptions(), forget(), and train().

Index of the layer that will be trained at the next call of train.

Definition at line 85 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), declareOptions(), and train().

Number of outputs for the neural network.

Definition at line 108 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), and declareOptions().

Optimizer of the neural network.

Definition at line 71 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), and train().

Optimizer of the supervised phase of the neural network.

If not specified, then the same optimizer will always be used.

Definition at line 75 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), and train().

Output variable.

Definition at line 236 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), makeDeepCopyFromShallowCopy(), and outputsize().

Function: output & target -> cost.

Definition at line 269 of file DeepFeatureExtractorNNet.h.

Referenced by buildFuncs(), computeCostsFromOutputs(), makeDeepCopyFromShallowCopy(), and train().

Output transfer function, when all hidden layers are added.

Definition at line 83 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), and declareOptions().

Parameter variables.

Definition at line 222 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), forget(), and makeDeepCopyFromShallowCopy().

Parameter variables to train.

Definition at line 224 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), makeDeepCopyFromShallowCopy(), and train().

Values of all parameters.

Definition at line 106 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), declareOptions(), and makeDeepCopyFromShallowCopy().

Penalties variables.

Definition at line 252 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), buildPenalties(), and makeDeepCopyFromShallowCopy().

Penalty to use on the weights (for weight and bias decay)

Definition at line 93 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildPenalties(), and declareOptions().

Reconstruction weights.

Definition at line 228 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildPenalties(), and makeDeepCopyFromShallowCopy().

Used in the stable_cross_entropy cost function of the hidden activations, in the unsupervised stages (0<=r<1)

Definition at line 99 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), and declareOptions().

Threshold on training set error relative improvement, before adding a new layer.

If < 0, then the addition of layers must be done by the user.

Definition at line 127 of file DeepFeatureExtractorNNet.h.

Referenced by declareOptions(), and train().

Sample weight variable.

Definition at line 248 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), buildTargetAndWeight(), and makeDeepCopyFromShallowCopy().

Fake supervised data.

Definition at line 258 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), makeDeepCopyFromShallowCopy(), and train().

Weight of supervised signal used in addition to unsupervised signal in greedy phase.

This weight should be in [0,1]. If it is equal to 0, then the supervised signal is ignored. If it is equal to 1, then the unsupervised signal is ignored.

Definition at line 145 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildCosts(), declareOptions(), and train().

Target variable.

Definition at line 244 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildTargetAndWeight(), and makeDeepCopyFromShallowCopy().

Function: input & target -> output & test_costs.

Definition at line 267 of file DeepFeatureExtractorNNet.h.

Referenced by buildFuncs(), computeOutputAndCosts(), makeDeepCopyFromShallowCopy(), and train().

Test costs variable.

Definition at line 256 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), buildFuncs(), and makeDeepCopyFromShallowCopy().

Function from input space to learned function space.

Definition at line 271 of file DeepFeatureExtractorNNet.h.

Referenced by buildFuncs(), makeDeepCopyFromShallowCopy(), and train().

Training cost variable.

Definition at line 254 of file DeepFeatureExtractorNNet.h.

Referenced by buildCosts(), buildFuncs(), makeDeepCopyFromShallowCopy(), and train().

Unsupervised data when using nearest neighbors.

Definition at line 260 of file DeepFeatureExtractorNNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and train().

Unsupervised target variable.

Definition at line 246 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildFuncs(), and makeDeepCopyFromShallowCopy().

Use the cubed value of the input of the activation functions (not used for reconstruction/auto-associator layers and output layer).

Definition at line 118 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), and declareOptions().

To simulate semi-supervised learning.

Definition at line 120 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), and declareOptions().

Use only supervised part.

Definition at line 122 of file DeepFeatureExtractorNNet.h.

Referenced by declareOptions().

Use the same (tied) weights for the input and output connections of the autoassociators.

Definition at line 111 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildPenalties(), and declareOptions().

Weight decay for all weights.

Definition at line 89 of file DeepFeatureExtractorNNet.h.

Referenced by buildPenalties(), and declareOptions().

Weights.

Definition at line 226 of file DeepFeatureExtractorNNet.h.

Referenced by build_(), buildPenalties(), forget(), and makeDeepCopyFromShallowCopy().


The documentation for this class was generated from the following files:
 DeepFeatureExtractorNNet.h
 DeepFeatureExtractorNNet.cc