PLearn::DeepNonLocalManifoldParzen Class Reference

Neural net, trained layer-wise to predict the manifold structure of the data. More...

#include <DeepNonLocalManifoldParzen.h>

Inheritance diagram for PLearn::DeepNonLocalManifoldParzen: [graph omitted]
Collaboration diagram for PLearn::DeepNonLocalManifoldParzen: [graph omitted]


Public Member Functions

 DeepNonLocalManifoldParzen ()
 Default constructor.
virtual int outputsize () const
 Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).
virtual void forget ()
 (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void computeOutput (const Vec &input, Vec &output) const
 Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Computes the costs from already computed output.
virtual void updateManifoldParzenParameters () const
 Precomputes the representations of the training set examples, to speed up nearest neighbors searches in that space.
virtual TVec< std::string > getTestCostNames () const
 Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
virtual TVec< std::string > getTrainCostNames () const
 Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
virtual void setTrainingSet (VMat training_set, bool call_forget=true)
 Declares the training set.
void greedyStep (const Vec &input, const Vec &target, int index, Vec train_costs, int stage)
void fineTuningStep (const Vec &input, const Vec &target, Vec &train_costs, Mat nearest_neighbors)
void computeRepresentation (const Vec &input, Vec &representation, int layer) const
void computeManifoldParzenParameters (const Vec &input, Mat &F, Vec &mu, Vec &pre_sigma_noise, Mat &U, Vec &sm_svd, int target_class=-1) const
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual DeepNonLocalManifoldParzen * deepCopy (CopiesMap &copies) const
virtual void build ()
 Finish building the object; just call inherited::build followed by build_()
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

real cd_learning_rate
 Contrastive divergence learning rate.
real cd_decrease_ct
 Contrastive divergence decrease constant.
real greedy_learning_rate
 The learning rate used during the autoassociator gradient descent training.
real greedy_decrease_ct
 The decrease constant of the learning rate used during the autoassociator gradient descent training.
real fine_tuning_learning_rate
 The learning rate used during the fine tuning gradient descent.
real fine_tuning_decrease_ct
 The decrease constant of the learning rate used during fine tuning gradient descent.
TVec< int > training_schedule
 Number of examples to use during each phase of greedy pre-training.
TVec< PP< RBMLayer > > layers
 The layers of units in the network.
TVec< PP< RBMConnection > > connections
 The weights of the connections between the layers.
TVec< PP< RBMConnection > > reconstruction_connections
 The reconstruction weights of the autoassociators.
int k_neighbors
 Number of nearest neighbors to use to learn the manifold structure.
int n_components
 Dimensionality of the manifold.
real min_sigma_noise
 Minimum value for the noise variance.
int n_classes
 Number of classes.
bool train_one_network_per_class
 Indication that one network per class should be trained.
real output_connections_l1_penalty_factor
 Output weights L1 penalty factor.
real output_connections_l2_penalty_factor
 Output weights L2 penalty factor.
bool save_manifold_parzen_parameters
 Indication that the parameters for the manifold parzen windows estimator should be saved during test, to speed up testing.
bool do_not_learn_sigma_noise
 Indication that the value of sigma noise should not be learned.
bool use_test_centric_nlmp
 Indication that the Test-Centric NLMP variant should be used.
int n_layers
 Number of layers.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

TVec< Vec > activations
 Stores the activations of the input and hidden layers (at the input of the layers).
TVec< Vec > expectations
 Stores the expectations of the input and hidden layers (at the output of the layers).
TVec< Vec > activation_gradients
 Stores the gradient of the cost wrt the activations of the input and hidden layers (at the input of the layers).
TVec< Vec > expectation_gradients
 Stores the gradient of the cost wrt the expectations of the input and hidden layers (at the output of the layers).
Vec reconstruction_activations
 Reconstruction activations.
Vec reconstruction_activation_gradients
 Reconstruction activation gradients.
Vec reconstruction_expectation_gradients
 Reconstruction expectation gradients.
PP< OnlineLearningModule > output_connections
 Output weights.
TVec< TVec< PP< RBMLayer > > > all_layers
 Parameters for all networks, when training one network per class.
TVec< TVec< PP< RBMConnection > > > all_connections
TVec< TVec< PP< RBMConnection > > > all_reconstruction_connections
TVec< PP< OnlineLearningModule > > all_output_connections
Vec input_representation
 Example representation.
Vec previous_input_representation
 Example representation at the previous layer, in a greedy step.
Vec all_outputs
 All outputs that give the components and sigma_noise values.
Vec all_outputs_gradient
 All outputs' gradients.
Mat F
 Variables for density of a Gaussian.
Mat F_copy
Vec mu
Vec pre_sigma_noise
Mat Ut
 Variables for the SVD and gradient computation.
Mat U
Mat V
Mat z
Mat inv_Sigma_F
Mat inv_Sigma_z
Vec temp_ncomp
Vec diff_neighbor_input
Vec sm_svd
Vec S
Vec uk
Vec fk
Vec uk2
Vec inv_sigma_zj
Vec zj
Vec inv_sigma_fk
Vec diff
Vec pos_down_val
 Positive down statistic.
Vec pos_up_val
 Positive up statistic.
Vec neg_down_val
 Negative down statistic.
Vec neg_up_val
 Negative up statistic.
TVec< Mat > eigenvectors
 Eigenvectors.
Mat eigenvalues
 Eigenvalues.
Vec sigma_noises
 Sigma noises.
Mat mus
 Mus.
TVec< PP< ClassSubsetVMatrix > > class_datasets
 Datasets for each class.
TMat< int > nearest_neighbors_indices
 Indices of the nearest neighbors of each training example.
Vec test_votes
 Nearest neighbor votes for test example.
TVec< int > greedy_stages
 Stages of the different greedy phases.
int currently_trained_layer
 Currently trained layer (1 means the first hidden layer, n_layers means the output layer)
bool manifold_parzen_parameters_are_up_to_date
 Indication that the saved manifold parzen parameters are up to date.

Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
 This does the actual building.
void build_layers_and_connections ()
void build_classification_cost ()
void bprop_to_bases (const Mat &R, const Mat &M, const Vec &v1, const Vec &v2, real alpha)
void setLearningRate (real the_learning_rate)

Detailed Description

Neural net, trained layer-wise to predict the manifold structure of the data.

This information is used in a Manifold Parzen Windows classifier.

Definition at line 60 of file DeepNonLocalManifoldParzen.h.
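
A minimal usage sketch in C++ (hypothetical, not from the PLearn sources: the option values, the 'example' function and the 'train_data' VMat are illustrative, the exact call order may differ in a real PLearn setup, and the layers, connections, reconstruction_connections and training_schedule options must be filled with concrete objects before build()):

// Sketch: configure, train and apply the learner on a dataset whose
// last column holds the class label.
#include <DeepNonLocalManifoldParzen.h>

using namespace PLearn;

void example(VMat train_data)
{
    PP<DeepNonLocalManifoldParzen> learner = new DeepNonLocalManifoldParzen();
    learner->n_classes    = 10;   // number of classes in train_data
    learner->n_components = 5;    // dimensionality of the manifold
    learner->k_neighbors  = 10;   // neighbors used to learn the manifold
    learner->nstages      = 100;  // fine-tuning steps (see training_schedule)
    // ... set learner->layers, learner->connections,
    //     learner->reconstruction_connections and
    //     learner->training_schedule here ...
    learner->setTrainingSet(train_data);  // also sets inputsize/targetsize
    learner->build();
    learner->train();

    Vec input(learner->inputsize());
    Vec output(learner->outputsize());
    // ... fill 'input' with a test example ...
    learner->computeOutput(input, output);  // output[0]: predicted class
}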


Member Typedef Documentation

Reimplemented from PLearn::PLearner.

Definition at line 62 of file DeepNonLocalManifoldParzen.h.


Constructor & Destructor Documentation

PLearn::DeepNonLocalManifoldParzen::DeepNonLocalManifoldParzen ( )

Member Function Documentation

string PLearn::DeepNonLocalManifoldParzen::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

OptionList & PLearn::DeepNonLocalManifoldParzen::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

RemoteMethodMap & PLearn::DeepNonLocalManifoldParzen::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

bool PLearn::DeepNonLocalManifoldParzen::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

Object * PLearn::DeepNonLocalManifoldParzen::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

void PLearn::DeepNonLocalManifoldParzen::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

void PLearn::DeepNonLocalManifoldParzen::bprop_to_bases ( const Mat & R, const Mat & M, const Vec & v1, const Vec & v2, real alpha ) [private]

Definition at line 1085 of file DeepNonLocalManifoldParzen.cc.

References PLearn::TVec< T >::data(), i, j, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLERROR, and PLearn::TMat< T >::width().

Referenced by fineTuningStep().

{
#ifdef BOUNDCHECK
    if (M.length() != R.length() || M.width() != R.width() 
        || v1.length()!=M.length() || M.width()!=v2.length() )
        PLERROR("DeepNonLocalManifoldParzen::bprop_to_bases(): incompatible "
                "arguments' sizes");
#endif

    const real* v_1=v1.data();
    const real* v_2=v2.data();
    for (int i=0;i<M.length();i++)
    {
        real* mi = M[i];
        real* ri = R[i];
        real v1i = v_1[i];
        for (int j=0;j<M.width();j++)
            ri[j] += alpha*(mi[j] - v1i * v_2[j]);
    }
}
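
In matrix form, the loop above performs a scaled rank-one correction (a direct reading of the code, in LaTeX notation):

$$ R \leftarrow R + \alpha \left( M - v_1 v_2^\top \right) $$

fineTuningStep() uses it to accumulate the gradient of the negative log likelihood with respect to the bases F.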


void PLearn::DeepNonLocalManifoldParzen::build ( ) [virtual]

Finish building the object; just call inherited::build followed by build_()

Reimplemented from PLearn::PLearner.

Definition at line 465 of file DeepNonLocalManifoldParzen.cc.

References PLearn::PLearner::build(), and build_().


void PLearn::DeepNonLocalManifoldParzen::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PLearner.

Definition at line 234 of file DeepNonLocalManifoldParzen.cc.

References all_connections, all_layers, all_output_connections, all_reconstruction_connections, build_layers_and_connections(), PLearn::TVec< T >::clear(), connections, currently_trained_layer, PLearn::TVec< T >::deepCopy(), PLearn::endl(), greedy_stages, i, PLearn::PLearner::inputsize_, k_neighbors, layers, PLearn::TVec< T >::length(), min_sigma_noise, n_classes, n_layers, output_connections, PLERROR, reconstruction_connections, PLearn::TVec< T >::resize(), setTrainingSet(), PLearn::PLearner::stage, test_votes, train_one_network_per_class, PLearn::PLearner::train_set, training_schedule, use_test_centric_nlmp, and PLearn::PLearner::weightsize_.

Referenced by build().

{
    // ### This method should do the real building of the object,
    // ### according to set 'options', in *any* situation.
    // ### Typical situations include:
    // ###  - Initial building of an object from a few user-specified options
    // ###  - Building of a "reloaded" object: i.e. from the complete set of
    // ###    all serialised options.
    // ###  - Updating or "re-building" of an object after a few "tuning"
    // ###    options have been modified.
    // ### You should assume that the parent class' build_() has already been
    // ### called.

    MODULE_LOG << "build_() called" << endl;

    if(inputsize_ > 0 )
    {
        // Initialize some learnt variables
        n_layers = layers.length();
        
        // Builds some variables using the training set
        setTrainingSet(train_set, false);

        if( n_classes <= 0 )
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "n_classes should be > 0.\n");
        test_votes.resize(n_classes);

        if( k_neighbors <= 0 )
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "k_neighbors should be > 0.\n");

        if( weightsize_ > 0 )
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "usage of weighted samples (weight size > 0) is not\n"
                    "implemented yet.\n");

        if( training_schedule.length() != n_layers-1 )        
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "training_schedule should have %d elements.\n",
                    n_layers-1);
        
        if( n_components < 1 || n_components > inputsize_)
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "n_components should be > 0 and < or = to inputsize.\n");

        if( min_sigma_noise < 0)
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "min_sigma_noise should be > or = to 0.\n");

        if( use_test_centric_nlmp && !train_one_network_per_class )
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "train_one_network_per_class must be true for "
                    "Test-Centric NLMP variant.\n");
          
        if( use_test_centric_nlmp && n_classes <= 1)
            PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                    "n_classes must be > 1 for "
                    "Test-Centric NLMP variant.\n");
          

        if(greedy_stages.length() == 0)
        {
            greedy_stages.resize(n_layers-1);
            greedy_stages.clear();
        }        
        
        if(stage > 0)
            currently_trained_layer = n_layers;
        else
        {            
            currently_trained_layer = n_layers-1;
            while(currently_trained_layer>1
                  && greedy_stages[currently_trained_layer-1] <= 0)
                currently_trained_layer--;
        }

        build_layers_and_connections();

        if( train_one_network_per_class )
        {
            if( n_classes == 1 )
                PLERROR("DeepNonLocalManifoldParzen::build_() - \n"
                        "train_one_network_per_class is useless for\n"
                        "n_classes == 1.\n");
            if( all_layers.length() != n_classes )
            {
                all_layers.resize( n_classes);
                for( int i=0; i<all_layers.length(); i++ )
                {
                    CopiesMap copies;
                    all_layers[i] = layers->deepCopy(copies);
                }
            }
            if( all_connections.length() != n_classes )
            {
                all_connections.resize( n_classes);
                for( int i=0; i<all_connections.length(); i++ )
                {
                    CopiesMap copies;
                    all_connections[i] = connections->deepCopy(copies);
                }
            }
            if( all_reconstruction_connections.length() != n_classes )
            {
                all_reconstruction_connections.resize( n_classes);
                for( int i=0; i<all_reconstruction_connections.length(); i++ )
                {
                    CopiesMap copies;
                    all_reconstruction_connections[i] = 
                        reconstruction_connections->deepCopy(copies);
                }
            }
            if( all_output_connections.length() != n_classes )
            {
                all_output_connections.resize( n_classes);
                for( int i=0; i<all_output_connections.length(); i++ )
                {
                    CopiesMap copies;
                    all_output_connections[i] = 
                        output_connections->deepCopy(copies);
                }
            }
        }
    }
}


void PLearn::DeepNonLocalManifoldParzen::build_classification_cost ( ) [private]
void PLearn::DeepNonLocalManifoldParzen::build_layers_and_connections ( ) [private]

Definition at line 361 of file DeepNonLocalManifoldParzen.cc.

References activation_gradients, activations, all_outputs, connections, do_not_learn_sigma_noise, PLearn::endl(), expectation_gradients, expectations, PLearn::fast_exact_is_equal(), greedy_learning_rate, i, PLearn::PLearner::inputsize(), PLearn::PLearner::inputsize_, layers, PLearn::TVec< T >::length(), n_components, n_layers, output_connections, output_connections_l1_penalty_factor, output_connections_l2_penalty_factor, PLERROR, PLearn::PLearner::random_gen, reconstruction_connections, PLearn::TVec< T >::resize(), and PLearn::TVec< T >::size().

Referenced by build_().

{
    MODULE_LOG << "build_layers_and_connections() called" << endl;

    if( connections.length() != n_layers-1 )
        PLERROR("DeepNonLocalManifoldParzen::build_layers_and_connections() - \n"
                "there should be %d connections.\n",
                n_layers-1);

    if( !fast_exact_is_equal( greedy_learning_rate, 0 ) 
        && reconstruction_connections.length() != n_layers-1 )
        PLERROR("DeepNonLocalManifoldParzen::build_layers_and_connections() - \n"
                "there should be %d reconstruction connections.\n",
                n_layers-1);
    
    if(  !( reconstruction_connections.length() == 0
            || reconstruction_connections.length() == n_layers-1 ) )
        PLERROR("DeepNonLocalManifoldParzen::build_layers_and_connections() - \n"
                "there should be either 0 or %d reconstruction connections.\n",
                n_layers-1);
        

    if(layers[0]->size != inputsize_)
        PLERROR("DeepNonLocalManifoldParzen::build_layers_and_connections() - \n"
                "layers[0] should have a size of %d.\n",
                inputsize_);

    activations.resize( n_layers );
    expectations.resize( n_layers );
    activation_gradients.resize( n_layers );
    expectation_gradients.resize( n_layers );

    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        if( layers[i]->size != connections[i]->down_size )
            PLERROR("DeepNonLocalManifoldParzen::build_layers_and_connections() "
                    "- \n"
                    "connections[%i] should have a down_size of %d.\n",
                    i, layers[i]->size);

        if( connections[i]->up_size != layers[i+1]->size )
            PLERROR("DeepNonLocalManifoldParzen::build_layers_and_connections() "
                    "- \n"
                    "connections[%i] should have a up_size of %d.\n",
                    i, layers[i+1]->size);

        if( !(layers[i]->random_gen) )
        {
            layers[i]->random_gen = random_gen;
            layers[i]->forget();
        }

        if( !(connections[i]->random_gen) )
        {
            connections[i]->random_gen = random_gen;
            connections[i]->forget();
        }

        if( reconstruction_connections.length() != 0
            && !(reconstruction_connections[i]->random_gen) )
        {
            reconstruction_connections[i]->random_gen = random_gen;
            reconstruction_connections[i]->forget();
        }        

        activations[i].resize( layers[i]->size );
        expectations[i].resize( layers[i]->size );
        activation_gradients[i].resize( layers[i]->size );
        expectation_gradients[i].resize( layers[i]->size );
    }

    if( !(layers[n_layers-1]->random_gen) )
    {
        layers[n_layers-1]->random_gen = random_gen;
        layers[n_layers-1]->forget();
    }
    activations[n_layers-1].resize( layers[n_layers-1]->size );
    expectations[n_layers-1].resize( layers[n_layers-1]->size );
    activation_gradients[n_layers-1].resize( layers[n_layers-1]->size );
    expectation_gradients[n_layers-1].resize( layers[n_layers-1]->size );

    int output_size = n_components*inputsize() + inputsize() + (do_not_learn_sigma_noise ? 0 : 1);
    all_outputs.resize( output_size );

    if( !output_connections || output_connections->output_size != output_size)
    {
        PP<GradNNetLayerModule> ow = new GradNNetLayerModule;
        ow->input_size = layers[n_layers-1]->size;
        ow->output_size = output_size;
        ow->L1_penalty_factor = output_connections_l1_penalty_factor;
        ow->L2_penalty_factor = output_connections_l2_penalty_factor;
        ow->random_gen = random_gen;
        ow->build();
        output_connections = ow;
    }

    if( !(output_connections->random_gen) )
    {
        output_connections->random_gen = random_gen;
        output_connections->forget();
    }
}
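
For reference, the flat layout behind the output_size computed above (with $d$ = n_components and $n$ = inputsize()): entries $[0, dn)$ of all_outputs hold the basis matrix $F$ row by row, entries $[dn, dn+n)$ hold $\mu$, and one trailing entry holds pre_sigma_noise unless do_not_learn_sigma_noise is set, so $\mathrm{output\_size} = dn + n + 1$ (or $dn + n$). computeManifoldParzenParameters() unpacks all_outputs with exactly these offsets.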


string PLearn::DeepNonLocalManifoldParzen::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

Referenced by train().


void PLearn::DeepNonLocalManifoldParzen::computeCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]

Computes the costs from already computed output.

Implements PLearn::PLearner.

Definition at line 1303 of file DeepNonLocalManifoldParzen.cc.

References all_connections, all_layers, all_output_connections, all_reconstruction_connections, c, class_datasets, connections, currently_trained_layer, expectations, PLearn::TVec< T >::fill(), getTestCostNames(), layers, PLearn::TVec< T >::length(), MISSING_VALUE, n_classes, n_layers, output_connections, pl_log, reconstruction_activations, reconstruction_connections, PLearn::TVec< T >::resize(), test_votes, train_one_network_per_class, and use_test_centric_nlmp.

{

    //Assumes that computeOutput has been called

    costs.resize( getTestCostNames().length() );
    costs.fill( MISSING_VALUE );

    if( train_one_network_per_class )
    {
        int c = (int) target[0];
        layers = all_layers[c];
        connections = all_connections[c];
        reconstruction_connections = all_reconstruction_connections[c];
        output_connections = all_output_connections[c];
    }

    if( currently_trained_layer<n_layers 
        && reconstruction_connections.length() != 0 )
    {
        reconstruction_connections[ currently_trained_layer-1 ]->fprop( 
            expectations[currently_trained_layer],
            reconstruction_activations);
        layers[ currently_trained_layer-1 ]->fprop( 
            reconstruction_activations,
            layers[ currently_trained_layer-1 ]->expectation);
        
        layers[ currently_trained_layer-1 ]->activation << 
            reconstruction_activations;
        layers[ currently_trained_layer-1 ]->setExpectationByRef( 
            layers[ currently_trained_layer-1 ]->expectation);
        costs[ currently_trained_layer-1 ]  = 
            layers[ currently_trained_layer-1 ]->fpropNLL(
                expectations[currently_trained_layer-1]);
    }
    else
    {
        if( n_classes > 1 )
        {
            int target_class = ((int)round(target[0]));
            if( ((int)round(output[0])) == target_class )
                costs[n_layers-1] = 0;
            else
                costs[n_layers-1] = 1;
            if( !use_test_centric_nlmp )
                costs[n_layers] = - test_votes[target_class]
                    +pl_log(class_datasets[target_class]->length()); // Must take into account the 1/n normalization
        }
        else
        {
            costs[n_layers] = - output[0]; // 1/n normalization already accounted for
        }
    }
}
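
The comment on the $1/n$ normalization corresponds to the following (a reading of the code, with $N_c$ = class_datasets[target_class]->length()): test_votes[c] accumulates $\log \sum_j \mathcal{N}(x;\, x_j + \mu_j, \Sigma_j)$ over the class's training examples (see computeOutput() below), so the reported cost is

$$ \mathrm{NLL} = -\big( \mathrm{test\_votes}[c] - \log N_c \big) = -\log \frac{1}{N_c} \sum_{j=1}^{N_c} \mathcal{N}(x;\, x_j + \mu_j,\, \Sigma_j), $$

the negative log likelihood under an equally weighted Parzen mixture for that class.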


void PLearn::DeepNonLocalManifoldParzen::computeManifoldParzenParameters ( const Vec & input, Mat & F, Vec & mu, Vec & pre_sigma_noise, Mat & U, Vec & sm_svd, int target_class = -1 ) const

Definition at line 914 of file DeepNonLocalManifoldParzen.cc.

References all_connections, all_layers, all_output_connections, all_outputs, all_reconstruction_connections, PLearn::TVec< T >::clear(), computeRepresentation(), connections, do_not_learn_sigma_noise, F, F_copy, input_representation, PLearn::PLearner::inputsize(), PLearn::lapackSVD(), layers, PLearn::TMat< T >::length(), PLearn::mypow(), n_components, n_layers, output_connections, PLASSERT, reconstruction_connections, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), S, PLearn::TVec< T >::subVec(), PLearn::TVec< T >::toMat(), train_one_network_per_class, U, Ut, V, and PLearn::TMat< T >::width().

Referenced by computeOutput(), fineTuningStep(), and updateManifoldParzenParameters().

{
    if( train_one_network_per_class )
    {
        PLASSERT( target_class >= 0 );
        layers = all_layers[target_class];
        connections = all_connections[target_class];
        reconstruction_connections = all_reconstruction_connections[target_class];
        output_connections = all_output_connections[target_class];
    }

    // Get example representation
    computeRepresentation(input, input_representation, 
                          n_layers-1);

    // Get parameters
    output_connections->fprop( input_representation, all_outputs );

    F.resize(n_components, inputsize());
    mu.resize(inputsize());
    pre_sigma_noise.resize(1);

    F << all_outputs.subVec(0,n_components * inputsize()).toMat(
        n_components, inputsize());
    mu << all_outputs.subVec(n_components * inputsize(),inputsize());
    if( do_not_learn_sigma_noise )
        pre_sigma_noise.clear();
    else
        pre_sigma_noise << all_outputs.subVec( (n_components+1) * inputsize(), 1 );

    F_copy.resize(F.length(),F.width());
    sm_svd.resize(n_components);
    // N.B. this is the SVD of F'
    F_copy << F;
    lapackSVD(F_copy, Ut, S, V,'A',1.5);
    U.resize(n_components,inputsize());
    for (int k=0;k<n_components;k++)
    {
        sm_svd[k] = mypow(S[k],2);
        U(k) << Ut(k);
    }
}
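
A summary of the model implied by the SVD manipulation above (inferred from this code, with $d$ = n_components, $\sigma^2$ = pre_sigma_noise$^2$ + min_sigma_noise, $\lambda_k$ = sm_svd[k] = $S_k^2$ and $u_k$ = U(k)): each example gets a Gaussian whose covariance is a rank-$d$ update of an isotropic matrix,

$$ \Sigma(x) = \sigma^2 I + F(x)^\top F(x) = \sigma^2 I + \sum_{k=1}^{d} \lambda_k\, u_k u_k^\top, $$

so the $u_k$ (right singular vectors of $F$) carry eigenvalues $\lambda_k + \sigma^2$ along the manifold directions, and the orthogonal complement keeps $\sigma^2$.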


void PLearn::DeepNonLocalManifoldParzen::computeOutput ( const Vec & input, Vec & output ) const [virtual]

Computes the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 1131 of file DeepNonLocalManifoldParzen.cc.

References PLearn::argmax(), class_datasets, PLearn::TVec< T >::clear(), computeManifoldParzenParameters(), computeRepresentation(), currently_trained_layer, diff, diff_neighbor_input, PLearn::dot(), eigenvalues, eigenvectors, F, PLearn::VMat::getExample(), i, input_representation, PLearn::PLearner::inputsize(), j, PLearn::VMat::length(), PLearn::TVec< T >::length(), Log2Pi, PLearn::logadd(), min_sigma_noise, mu, mus, n, n_classes, n_components, n_layers, pl_log, PLearn::pownorm(), pre_sigma_noise, reconstruction_connections, PLearn::TVec< T >::resize(), save_manifold_parzen_parameters, sigma_noises, sm_svd, PLearn::substract(), PLearn::PLearner::targetsize(), test_votes, PLearn::PLearner::train_set, U, uk, updateManifoldParzenParameters(), and use_test_centric_nlmp.

{

    if( currently_trained_layer<n_layers
        && reconstruction_connections.length() != 0 )
    {
        computeRepresentation(input, input_representation, 
                              currently_trained_layer);
        return;
    }

    test_votes.resize(n_classes);
    test_votes.clear();

    // Variables for probability computations
    real log_p_x_g_y = 0;
    real mahal = 0;
    real norm_term = 0;
    real n = inputsize();
    real dotp = 0;
    real coef = 0;
    real sigma_noise = 0;
    
    Vec input_j(inputsize());
    Vec target(targetsize());
    real weight;

    if( use_test_centric_nlmp )
    {
        for( int i=0; i<n_classes; i++ )
        {
            computeManifoldParzenParameters( input, F, mu, 
                                             pre_sigma_noise, U, sm_svd,
                                             i);
                    
            sigma_noise = pre_sigma_noise[0]*pre_sigma_noise[0] 
                + min_sigma_noise;
                    
            mahal = -0.5*pownorm(mu)/sigma_noise;      
            norm_term = - n/2.0 * Log2Pi - 0.5*(n-n_components)*
                pl_log(sigma_noise);
        
            for(int k=0; k<n_components; k++)
            { 
                uk = U(k);
                dotp = dot(mu,uk);
                coef = (1.0/(sm_svd[k]+sigma_noise) - 1.0/sigma_noise);
                mahal -= dotp*dotp*0.5*coef;
                norm_term -= 0.5*pl_log(sm_svd[k]+sigma_noise);
            }
            
            log_p_x_g_y = norm_term + mahal;
            test_votes[i] = log_p_x_g_y ;
        }        
    }
    else
    {
        if( save_manifold_parzen_parameters )
        {
            updateManifoldParzenParameters();
        
            int input_j_index;
            for( int i=0; i<n_classes; i++ )
            {
                for( int j=0; 
                     j<(n_classes > 1 ? 
                        class_datasets[i]->length() 
                        : train_set->length()); 
                     j++ )
                {
                    if( n_classes > 1 )
                    {
                        class_datasets[i]->getExample(j,input_j,target,weight);
                        input_j_index = class_datasets[i]->indices[j];
                    }
                    else
                    {
                        train_set->getExample(j,input_j,target,weight);
                        input_j_index = j;
                    }
        
                    U << eigenvectors[input_j_index];
                    sm_svd << eigenvalues(input_j_index);
                    sigma_noise = sigma_noises[input_j_index];
                    mu << mus(input_j_index);
        
                    substract(input,input_j,diff_neighbor_input); 
                    substract(diff_neighbor_input,mu,diff); 
                        
                    mahal = -0.5*pownorm(diff)/sigma_noise;      
                    norm_term = - n/2.0 * Log2Pi - 0.5*(n-n_components)*
                        pl_log(sigma_noise);
        
                    for(int k=0; k<n_components; k++)
                    { 
                        uk = U(k);
                        dotp = dot(diff,uk);
                        coef = (1.0/(sm_svd[k]+sigma_noise) - 1.0/sigma_noise);
                        mahal -= dotp*dotp*0.5*coef;
                        norm_term -= 0.5*pl_log(sm_svd[k]+sigma_noise);
                    }
                    
                    if( j==0 )
                        log_p_x_g_y = norm_term + mahal;
                    else
                        log_p_x_g_y = logadd(norm_term + mahal, log_p_x_g_y);
                }
        
                test_votes[i] = log_p_x_g_y;
            }
        }
        else
        {
        
            for( int i=0; i<n_classes; i++ )
            {
                for( int j=0; 
                     j<(n_classes > 1 ? 
                        class_datasets[i]->length() 
                        : train_set->length()); 
                     j++ )
                {
                    if( n_classes > 1 )
                    {
                        class_datasets[i]->getExample(j,input_j,target,weight);
                        computeManifoldParzenParameters( input_j, F, mu, 
                                                         pre_sigma_noise, U, sm_svd,
                                                         (int) target[0]);
                    }
                    else
                    {
                        train_set->getExample(j,input_j,target,weight);
                        computeManifoldParzenParameters( input_j, F, mu, 
                                                         pre_sigma_noise, U, sm_svd );
                    }
        
                    
                    sigma_noise = pre_sigma_noise[0]*pre_sigma_noise[0] 
                        + min_sigma_noise;
                    
                    substract(input,input_j,diff_neighbor_input); 
                    substract(diff_neighbor_input,mu,diff); 
                        
                    mahal = -0.5*pownorm(diff)/sigma_noise;      
                    norm_term = - n/2.0 * Log2Pi - 0.5*(n-n_components)*
                        pl_log(sigma_noise);
        
                    for(int k=0; k<n_components; k++)
                    { 
                        uk = U(k);
                        dotp = dot(diff,uk);
                        coef = (1.0/(sm_svd[k]+sigma_noise) - 1.0/sigma_noise);
                        mahal -= dotp*dotp*0.5*coef;
                        norm_term -= 0.5*pl_log(sm_svd[k]+sigma_noise);
                    }
                    
                    if( j==0 )
                        log_p_x_g_y = norm_term + mahal;
                    else
                        log_p_x_g_y = logadd(norm_term + mahal, log_p_x_g_y);
                }
        
                test_votes[i] = log_p_x_g_y;
            }
        }
    }
    if( n_classes > 1 )
        output[0] = argmax(test_votes);
    else
        output[0] = test_votes[0]-pl_log(train_set->length());
}
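
Putting the mahal and norm_term accumulations together, each training example $x_j$ contributes the exact log density of the Gaussian model above, evaluated without ever forming $\Sigma$ (a reading of the code, with $z = x - x_j - \mu$, $n$ = inputsize() and $d$ = n_components):

$$ \log \mathcal{N}(x;\, x_j + \mu,\, \Sigma) = -\frac{n}{2}\log 2\pi - \frac{n-d}{2}\log \sigma^2 - \frac{1}{2}\sum_{k=1}^{d} \log(\lambda_k + \sigma^2) - \frac{1}{2} z^\top \Sigma^{-1} z, $$

$$ \Sigma^{-1} = \frac{1}{\sigma^2} I + \sum_{k=1}^{d} \left( \frac{1}{\lambda_k + \sigma^2} - \frac{1}{\sigma^2} \right) u_k u_k^\top, $$

and the per-class vote is the logadd (log-sum-exp) of these terms over the class's examples.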


void PLearn::DeepNonLocalManifoldParzen::computeRepresentation ( const Vec & input, Vec & representation, int layer ) const

Definition at line 1109 of file DeepNonLocalManifoldParzen.cc.

References activations, connections, expectations, i, layers, PLearn::TVec< T >::length(), and PLearn::TVec< T >::resize().

Referenced by computeManifoldParzenParameters(), computeOutput(), and greedyStep().

{
    if(layer == 0)
    {
        representation.resize(input.length());
        expectations[0] << input;
        representation << input;
        return;
    }

    expectations[0] << input;
    for( int i=0 ; i<layer; i++ )
    {
        connections[i]->fprop( expectations[i], activations[i+1] );
        layers[i+1]->fprop(activations[i+1],expectations[i+1]);
    }
    representation.resize(expectations[layer].length());
    representation << expectations[layer];
}
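
In symbols, the loop above is a plain forward pass: $h_0 = x$ and, for $i = 0, \dots, \mathrm{layer}-1$, $a_{i+1} = \mathrm{connections}_i(h_i)$ and $h_{i+1} = \mathrm{layers}_{i+1}(a_{i+1})$; the returned representation is $h_{\mathrm{layer}}$.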


void PLearn::DeepNonLocalManifoldParzen::declareOptions ( OptionList & ol ) [static, protected]

Declares the class options.

Reimplemented from PLearn::PLearner.

Definition at line 82 of file DeepNonLocalManifoldParzen.cc.

References PLearn::OptionBase::buildoption, cd_decrease_ct, cd_learning_rate, connections, PLearn::declareOption(), PLearn::PLearner::declareOptions(), do_not_learn_sigma_noise, fine_tuning_decrease_ct, fine_tuning_learning_rate, greedy_decrease_ct, greedy_learning_rate, greedy_stages, k_neighbors, layers, PLearn::OptionBase::learntoption, min_sigma_noise, n_classes, n_components, n_layers, output_connections, output_connections_l1_penalty_factor, output_connections_l2_penalty_factor, reconstruction_connections, save_manifold_parzen_parameters, train_one_network_per_class, PLearn::PLearner::train_set, training_schedule, and use_test_centric_nlmp.

{
    declareOption(ol, "cd_learning_rate", 
                  &DeepNonLocalManifoldParzen::cd_learning_rate,
                  OptionBase::buildoption,
                  "The learning rate used during the RBM "
                  "contrastive divergence training.\n");

    declareOption(ol, "cd_decrease_ct", 
                  &DeepNonLocalManifoldParzen::cd_decrease_ct,
                  OptionBase::buildoption,
                  "The decrease constant of the learning rate used during "
                  "the RBMs contrastive\n"
                  "divergence training. When a hidden layer has finished "
                  "its training,\n"
                  "the learning rate is reset to it's initial value.\n");

    declareOption(ol, "greedy_learning_rate", 
                  &DeepNonLocalManifoldParzen::greedy_learning_rate,
                  OptionBase::buildoption,
                  "The learning rate used during the autoassociator "
                  "gradient descent training.\n");

    declareOption(ol, "greedy_decrease_ct", 
                  &DeepNonLocalManifoldParzen::greedy_decrease_ct,
                  OptionBase::buildoption,
                  "The decrease constant of the learning rate used during "
                  "the autoassociator\n"
                  "gradient descent training. When a hidden layer has finished "
                  "its training,\n"
                  "the learning rate is reset to it's initial value.\n");

    declareOption(ol, "fine_tuning_learning_rate", 
                  &DeepNonLocalManifoldParzen::fine_tuning_learning_rate,
                  OptionBase::buildoption,
                  "The learning rate used during the fine tuning gradient descent.\n");

    declareOption(ol, "fine_tuning_decrease_ct", 
                  &DeepNonLocalManifoldParzen::fine_tuning_decrease_ct,
                  OptionBase::buildoption,
                  "The decrease constant of the learning rate used during "
                  "fine tuning\n"
                  "gradient descent.\n");

    declareOption(ol, "training_schedule", 
                  &DeepNonLocalManifoldParzen::training_schedule,
                  OptionBase::buildoption,
                  "Number of examples to use during each phase of greedy pre-training.\n"
                  "The number of fine-tunig steps is defined by nstages.\n"
        );

    declareOption(ol, "layers", &DeepNonLocalManifoldParzen::layers,
                  OptionBase::buildoption,
                  "The layers of units in the network. The first element\n"
                  "of this vector should be the input layer and the\n"
                  "subsequent elements should be the hidden layers. The\n"
                  "output layer should not be included in layers.\n");

    declareOption(ol, "connections", &DeepNonLocalManifoldParzen::connections,
                  OptionBase::buildoption,
                  "The weights of the connections between the layers.\n");

    declareOption(ol, "reconstruction_connections", 
                  &DeepNonLocalManifoldParzen::reconstruction_connections,
                  OptionBase::buildoption,
                  "The reconstruction weights of the autoassociators.\n");

    declareOption(ol, "k_neighbors", 
                  &DeepNonLocalManifoldParzen::k_neighbors,
                  OptionBase::buildoption,
                  "Number of nearest neighbors to use to learn "
                  "the manifold structure..\n");

    declareOption(ol, "n_components", 
                  &DeepNonLocalManifoldParzen::n_components,
                  OptionBase::buildoption,
                  "Dimensionality of the manifold.\n");

    declareOption(ol, "min_sigma_noise", 
                  &DeepNonLocalManifoldParzen::min_sigma_noise,
                  OptionBase::buildoption,
                  "Minimum value for the noise variance.\n");

    declareOption(ol, "n_classes", 
                  &DeepNonLocalManifoldParzen::n_classes,
                  OptionBase::buildoption,
                  "Number of classes. If n_classes = 1, learner will output\n"
                  "log likelihood of a given input. If n_classes > 1,\n"
                  "classification will be performed.\n");

    declareOption(ol, "train_one_network_per_class", 
                  &DeepNonLocalManifoldParzen::train_one_network_per_class,
                  OptionBase::buildoption,
                  "Indication that one network per class should be trained.\n");

    declareOption(ol, "output_connections_l1_penalty_factor", 
                  &DeepNonLocalManifoldParzen::output_connections_l1_penalty_factor,
                  OptionBase::buildoption,
                  "Output weights L1 penalty factor.\n");

    declareOption(ol, "output_connections_l2_penalty_factor", 
                  &DeepNonLocalManifoldParzen::output_connections_l2_penalty_factor,
                  OptionBase::buildoption,
                  "Output weights L2 penalty factor.\n");

    declareOption(ol, "save_manifold_parzen_parameters", 
                  &DeepNonLocalManifoldParzen::save_manifold_parzen_parameters,
                  OptionBase::buildoption,
                  "Indication that the parameters for the manifold parzen\n"
                  "windows estimator should be saved during test, to speed up "
                  "testing.\n");

    declareOption(ol, "do_not_learn_sigma_noise", 
                  &DeepNonLocalManifoldParzen::do_not_learn_sigma_noise,
                  OptionBase::buildoption,
                  "Indication that the value of sigma noise should not be learned.\n");

    declareOption(ol, "use_test_centric_nlmp", 
                  &DeepNonLocalManifoldParzen::use_test_centric_nlmp,
                  OptionBase::buildoption,
                  "Indication that the Test-Centric NLMP variant should "
                  "be used.\n"
                  "In this case, train_one_network_per_class must be true.\n");

    declareOption(ol, "greedy_stages", 
                  &DeepNonLocalManifoldParzen::greedy_stages,
                  OptionBase::learntoption,
                  "Number of training samples seen in the different greedy "
                  "phases.\n"
        );

    declareOption(ol, "n_layers", &DeepNonLocalManifoldParzen::n_layers,
                  OptionBase::learntoption,
                  "Number of layers.\n"
        );

    declareOption(ol, "output_connections", 
                  &DeepNonLocalManifoldParzen::output_connections,
                  OptionBase::learntoption,
                  "Output weights.\n"
        );

    declareOption(ol, "train_set", 
                  &DeepNonLocalManifoldParzen::train_set,
                  OptionBase::learntoption,
                  "Training set.\n"
        );

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}


static const PPath& PLearn::DeepNonLocalManifoldParzen::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 211 of file DeepNonLocalManifoldParzen.h.

DeepNonLocalManifoldParzen * PLearn::DeepNonLocalManifoldParzen::deepCopy ( CopiesMap & copies ) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

void PLearn::DeepNonLocalManifoldParzen::fineTuningStep ( const Vec & input, const Vec & target, Vec & train_costs, Mat nearest_neighbors )

Definition at line 960 of file DeepNonLocalManifoldParzen.cc.

References activation_gradients, activations, all_outputs, all_outputs_gradient, bprop_to_bases(), PLearn::TVec< T >::clear(), PLearn::TMat< T >::clear(), computeManifoldParzenParameters(), connections, diff_neighbor_input, do_not_learn_sigma_noise, PLearn::dot(), expectation_gradients, expectations, F, fk, i, input_representation, PLearn::PLearner::inputsize(), inv_Sigma_F, inv_sigma_fk, inv_Sigma_z, inv_sigma_zj, j, k_neighbors, PLearn::TVec< T >::last(), layers, Log2Pi, manifold_parzen_parameters_are_up_to_date, min_sigma_noise, mu, PLearn::multiplyAcc(), n, n_classes, n_components, n_layers, output_connections, pl_log, PLearn::pownorm(), pre_sigma_noise, PLearn::product(), PLearn::TVec< T >::resize(), PLearn::TMat< T >::resize(), sm_svd, PLearn::substract(), PLearn::TVec< T >::subVec(), temp_ncomp, PLearn::TVec< T >::toMat(), U, uk, uk2, z, and zj.

Referenced by train().

{
    manifold_parzen_parameters_are_up_to_date = false;

    if( n_classes > 1 )
        computeManifoldParzenParameters( input, F, mu, pre_sigma_noise, U, sm_svd,
                                         (int)target[0]);
    else
        computeManifoldParzenParameters( input, F, mu, pre_sigma_noise, U, sm_svd);

    real sigma_noise = pre_sigma_noise[0]* pre_sigma_noise[0] + min_sigma_noise;

    real mahal = 0;
    real norm_term = 0;
    real dotp = 0;
    real coef = 0;
    real n = inputsize();
    z.resize(k_neighbors,inputsize());
    temp_ncomp.resize(n_components);
    inv_Sigma_z.resize(k_neighbors,inputsize());
    inv_Sigma_z.clear();
    real tr_inv_Sigma = 0;
    train_costs.last() = 0;
    for(int j=0; j<k_neighbors;j++)
    {
        zj = z(j);
        substract(nearest_neighbors(j),input,diff_neighbor_input); 
        substract(diff_neighbor_input,mu,zj); 
      
        mahal = -0.5*pownorm(zj)/sigma_noise;      
        norm_term = - n/2.0 * Log2Pi - 0.5*(n-n_components)*pl_log(sigma_noise);

        inv_sigma_zj = inv_Sigma_z(j);
        inv_sigma_zj << zj; 
        inv_sigma_zj /= sigma_noise;

        if(j==0)
            tr_inv_Sigma = n/sigma_noise;

        for(int k=0; k<n_components; k++)
        { 
            uk = U(k);
            dotp = dot(zj,uk);
            coef = (1.0/(sm_svd[k]+sigma_noise) - 1.0/sigma_noise);
            multiplyAcc(inv_sigma_zj,uk,dotp*coef);
            mahal -= dotp*dotp*0.5*coef;
            norm_term -= 0.5*pl_log(sm_svd[k]+sigma_noise);
            if(j==0)
                tr_inv_Sigma += coef;
        }

        train_costs.last() += -1*(norm_term + mahal);
    }

    train_costs.last() /= k_neighbors;

    inv_Sigma_F.resize( n_components, inputsize() );
    inv_Sigma_F.clear();
    for(int k=0; k<n_components; k++)
    { 
        fk = F(k);
        inv_sigma_fk = inv_Sigma_F(k);
        inv_sigma_fk << fk;
        inv_sigma_fk /= sigma_noise;
        for(int k2=0; k2<n_components;k2++)
        {
            uk2 = U(k2);
            multiplyAcc(inv_sigma_fk,uk2,
                        (1.0/(sm_svd[k2]+sigma_noise) - 1.0/sigma_noise)*
                        dot(fk,uk2));
        }
    }

    all_outputs_gradient.resize((n_components+1) * inputsize()+ 
                                (do_not_learn_sigma_noise ? 0 : 1));
    all_outputs_gradient.clear();
    //coef = 1.0/train_set->length();
    coef = 1.0/k_neighbors;
    for(int neighbor=0; neighbor<k_neighbors; neighbor++)
    {
        // dNLL/dF
        product(temp_ncomp,F,inv_Sigma_z(neighbor));
        bprop_to_bases(all_outputs_gradient.subVec(0,n_components * inputsize()).toMat(n_components,inputsize()),
                       inv_Sigma_F,
                       temp_ncomp,inv_Sigma_z(neighbor),
                       coef);

        // dNLL/dmu
        multiplyAcc(all_outputs_gradient.subVec(n_components * inputsize(),
                                                inputsize()), 
                    inv_Sigma_z(neighbor),
                    -coef) ;

        if( !do_not_learn_sigma_noise )
        {
            // dNLL/dsn
            all_outputs_gradient[(n_components + 1 )* inputsize()] += coef* 
                0.5*(tr_inv_Sigma - pownorm(inv_Sigma_z(neighbor))) * 
                2 * pre_sigma_noise[0];
        }
    }

    // Propagating supervised gradient
    output_connections->bpropUpdate( input_representation, all_outputs,
                                     expectation_gradients[n_layers-1], 
                                     all_outputs_gradient);

    for( int i=n_layers-1 ; i>0 ; i-- )
    {
        layers[i]->bpropUpdate( activations[i],
                                expectations[i],
                                activation_gradients[i],
                                expectation_gradients[i] );
        
        
        connections[i-1]->bpropUpdate( expectations[i-1],
                                       activations[i],
                                       expectation_gradients[i-1],
                                       activation_gradients[i] );
    }        
}
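
For reference, the gradient buffers filled above match differentiating the per-neighbor negative log likelihood of the Gaussian model (a reading of the code, with $z$ as in the loop, $s$ = pre_sigma_noise[0], and each term averaged with coef = 1/k_neighbors):

$$ \frac{\partial\, \mathrm{NLL}}{\partial \mu} = -\Sigma^{-1} z, \qquad \frac{\partial\, \mathrm{NLL}}{\partial F} = F \Sigma^{-1} - (F \Sigma^{-1} z)(\Sigma^{-1} z)^\top, \qquad \frac{\partial\, \mathrm{NLL}}{\partial s} = s \left( \operatorname{tr}(\Sigma^{-1}) - \lVert \Sigma^{-1} z \rVert^2 \right), $$

the last using $\mathrm{d}\sigma^2/\mathrm{d}s = 2s$; the packed gradient is then backpropagated through output_connections and the stacked layers.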


void PLearn::DeepNonLocalManifoldParzen::forget ( ) [virtual]

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).

(Re-)initialize the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!)

A typical forget() method should do the following:

  • call inherited::forget() to initialize its random number generator with the 'seed' option
  • initialize the learner's parameters, using this random generator
  • stage = 0

Reimplemented from PLearn::PLearner.

Definition at line 545 of file DeepNonLocalManifoldParzen.cc.

References all_connections, all_layers, all_output_connections, all_reconstruction_connections, c, PLearn::TVec< T >::clear(), connections, PLearn::PLearner::forget(), greedy_stages, i, layers, PLearn::TVec< T >::length(), manifold_parzen_parameters_are_up_to_date, n_classes, n_layers, output_connections, reconstruction_connections, PLearn::PLearner::stage, and train_one_network_per_class.

{

    inherited::forget();

    manifold_parzen_parameters_are_up_to_date = false;

    if( train_one_network_per_class )
    {
        for(int c = 0; c<n_classes; c++ )
        {
            for( int i=0 ; i<n_layers-1 ; i++ )
                all_connections[c][i]->forget();
            
            for( int i=0 ; i<n_layers ; i++ )
                all_layers[c][i]->forget();
            
            for( int i=0; i<all_reconstruction_connections[c].length(); i++)
                all_reconstruction_connections[c][i]->forget();
            
            if( all_output_connections[c] )
                all_output_connections[c]->forget();
        }
    }
    else
    {
        for( int i=0 ; i<n_layers-1 ; i++ )
            connections[i]->forget();
        
        for( int i=0 ; i<n_layers ; i++ )
            layers[i]->forget();
        
        for( int i=0; i<reconstruction_connections.length(); i++)
            reconstruction_connections[i]->forget();
        
        if( output_connections )
            output_connections->forget();

    }

    stage = 0;
    greedy_stages.clear();
}


OptionList & PLearn::DeepNonLocalManifoldParzen::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

OptionMap & PLearn::DeepNonLocalManifoldParzen::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

RemoteMethodMap & PLearn::DeepNonLocalManifoldParzen::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 56 of file DeepNonLocalManifoldParzen.cc.

TVec< string > PLearn::DeepNonLocalManifoldParzen::getTestCostNames ( ) const [virtual]

Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).

Implements PLearn::PLearner.

Definition at line 1402 of file DeepNonLocalManifoldParzen.cc.

References PLearn::TVec< T >::append(), i, layers, PLearn::TVec< T >::push_back(), PLearn::TVec< T >::size(), and PLearn::tostring().

Referenced by computeCostsFromOutputs(), and getTrainCostNames().

{
    // Return the names of the costs computed by computeCostsFromOutputs
    // (these may or may not be exactly the same as what's returned by
    // getTrainCostNames).

    TVec<string> cost_names(0);

    for( int i=0; i<layers.size()-1; i++)
        cost_names.push_back("reconstruction_error_" + tostring(i+1));
    
    cost_names.append( "class_error" );
    cost_names.append( "NLL" );

    return cost_names;
}


TVec< string > PLearn::DeepNonLocalManifoldParzen::getTrainCostNames ( ) const [virtual]

Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 1419 of file DeepNonLocalManifoldParzen.cc.

References PLearn::TVec< T >::append(), and getTestCostNames().

Referenced by train().

{
    TVec<string> cost_names = getTestCostNames();
    cost_names.append( "NLL_neighbors" );
    return cost_names ;    
}


void PLearn::DeepNonLocalManifoldParzen::greedyStep ( const Vec & input, const Vec & target, int index, Vec train_costs, int this_stage )

Definition at line 794 of file DeepNonLocalManifoldParzen.cc.

References activations, cd_decrease_ct, cd_learning_rate, computeRepresentation(), connections, expectations, PLearn::fast_exact_is_equal(), greedy_decrease_ct, greedy_learning_rate, layers, manifold_parzen_parameters_are_up_to_date, n_layers, neg_down_val, neg_up_val, PLASSERT, pos_down_val, pos_up_val, previous_input_representation, reconstruction_activation_gradients, reconstruction_activations, reconstruction_connections, reconstruction_expectation_gradients, and PLearn::sample().

Referenced by train().

{
    PLASSERT( index < n_layers );
    real lr;
    manifold_parzen_parameters_are_up_to_date = false;

    // Get example representation

    computeRepresentation(input, previous_input_representation, 
                          index);
    connections[index]->fprop(previous_input_representation,
                              activations[index+1]);
    layers[index+1]->fprop(activations[index+1],
                           expectations[index+1]);

    // Autoassociator learning
    if( !fast_exact_is_equal( greedy_learning_rate, 0 ) )
    {
        if( !fast_exact_is_equal( greedy_decrease_ct , 0 ) )
            lr = greedy_learning_rate/(1 + greedy_decrease_ct 
                                       * this_stage); 
        else
            lr = greedy_learning_rate;

        layers[index]->setLearningRate( lr );
        connections[index]->setLearningRate( lr );
        reconstruction_connections[index]->setLearningRate( lr );
        layers[index+1]->setLearningRate( lr );

        reconstruction_connections[ index ]->fprop( expectations[index+1],
                                                    reconstruction_activations);
        layers[ index ]->fprop( reconstruction_activations,
                                layers[ index ]->expectation);
        
        layers[ index ]->activation << reconstruction_activations;
        layers[ index ]->setExpectationByRef(layers[ index ]->expectation);
        real rec_err = layers[ index ]->fpropNLL(previous_input_representation);
        train_costs[index] = rec_err;
        
        layers[ index ]->bpropNLL(previous_input_representation, rec_err,
                                  reconstruction_activation_gradients);
    }

    // RBM learning
    if( !fast_exact_is_equal( cd_learning_rate, 0 ) )
    {
        layers[index+1]->setExpectation( expectations[index+1] );
        layers[index+1]->generateSample();
        
        // accumulate positive stats using the expectation
        // we deep-copy because the value will change during negative phase
        pos_down_val = expectations[index];
        pos_up_val << layers[index+1]->expectation;
        
        // down propagation, starting from a sample of layers[index+1]
        connections[index]->setAsUpInput( layers[index+1]->sample );
        
        layers[index]->getAllActivations( connections[index] );
        layers[index]->computeExpectation();
        layers[index]->generateSample();
        
        // negative phase
        connections[index]->setAsDownInput( layers[index]->sample );
        layers[index+1]->getAllActivations( connections[index] );
        layers[index+1]->computeExpectation();
        // accumulate negative stats
        // no need to deep-copy because the values won't change before update
        neg_down_val = layers[index]->sample;
        neg_up_val = layers[index+1]->expectation;
    }
    
    // Update hidden layer bias and weights

    if( !fast_exact_is_equal( greedy_learning_rate, 0 ) )
    {
        layers[ index ]->update(reconstruction_activation_gradients);
    
        reconstruction_connections[ index ]->bpropUpdate( 
            expectations[index+1],
            reconstruction_activations, 
            reconstruction_expectation_gradients, 
            reconstruction_activation_gradients);

        layers[ index+1 ]->bpropUpdate( 
            activations[index+1],
            expectations[index+1],
            // reused
            reconstruction_activation_gradients,
            reconstruction_expectation_gradients);
        
        connections[ index ]->bpropUpdate( 
            previous_input_representation,
            activations[index+1],
            reconstruction_expectation_gradients, //reused
            reconstruction_activation_gradients);
    }
     
    // RBM updates

    if( !fast_exact_is_equal( cd_learning_rate, 0 ) )
    {
        if( !fast_exact_is_equal( cd_decrease_ct , 0 ) )
            lr = cd_learning_rate/(1 + cd_decrease_ct 
                                   * this_stage); 
        else
            lr = cd_learning_rate;

        layers[index]->setLearningRate( lr );
        connections[index]->setLearningRate( lr );
        layers[index+1]->setLearningRate( lr );

        layers[index]->update( pos_down_val, neg_down_val );
        connections[index]->update( pos_down_val, pos_up_val,
                                    neg_down_val, neg_up_val );
        layers[index+1]->update( pos_up_val, neg_up_val );
    }
}
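
The update( pos_down_val, pos_up_val, neg_down_val, neg_up_val ) call on the connection applies the usual CD-1 rule. A minimal sketch of that update, assuming a weight matrix W(j,k) connecting down unit k to up unit j (the function name, W and the loop structure are illustrative assumptions, not the actual RBMConnection internals):

    // Illustrative CD-1 weight update; lr is the decayed
    // contrastive-divergence learning rate computed above.
    void cd1_update( Mat& W, real lr,
                     const Vec& pos_down, const Vec& pos_up,
                     const Vec& neg_down, const Vec& neg_up )
    {
        for( int j=0 ; j<W.length() ; j++ )
            for( int k=0 ; k<W.width() ; k++ )
                W(j,k) += lr * ( pos_up[j] * pos_down[k]
                               - neg_up[j] * neg_down[k] );
    }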


void PLearn::DeepNonLocalManifoldParzen::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 472 of file DeepNonLocalManifoldParzen.cc.

References activation_gradients, activations, all_connections, all_layers, all_output_connections, all_outputs, all_outputs_gradient, all_reconstruction_connections, class_datasets, connections, PLearn::deepCopyField(), diff, diff_neighbor_input, eigenvalues, eigenvectors, expectation_gradients, expectations, F, F_copy, fk, greedy_stages, input_representation, inv_Sigma_F, inv_sigma_fk, inv_Sigma_z, inv_sigma_zj, layers, PLearn::PLearner::makeDeepCopyFromShallowCopy(), mu, mus, nearest_neighbors_indices, neg_down_val, neg_up_val, output_connections, pos_down_val, pos_up_val, pre_sigma_noise, previous_input_representation, reconstruction_activation_gradients, reconstruction_activations, reconstruction_connections, reconstruction_expectation_gradients, S, sigma_noises, sm_svd, temp_ncomp, test_votes, training_schedule, U, uk, uk2, Ut, V, z, and zj.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    // deepCopyField(, copies);

    // Public options
    deepCopyField(training_schedule, copies);
    deepCopyField(layers, copies);
    deepCopyField(connections, copies);
    deepCopyField(reconstruction_connections, copies);

    // Protected options
    deepCopyField(activations, copies);
    deepCopyField(expectations, copies);
    deepCopyField(activation_gradients, copies);
    deepCopyField(expectation_gradients, copies);
    deepCopyField(reconstruction_activations, copies);
    deepCopyField(reconstruction_activation_gradients, copies);
    deepCopyField(reconstruction_expectation_gradients, copies);
    deepCopyField(output_connections, copies);
    deepCopyField(all_layers, copies);
    deepCopyField(all_connections, copies);
    deepCopyField(all_reconstruction_connections, copies);
    deepCopyField(all_output_connections, copies);
    deepCopyField(input_representation, copies);
    deepCopyField(previous_input_representation, copies);
    deepCopyField(all_outputs, copies);
    deepCopyField(all_outputs_gradient, copies);
    deepCopyField(F, copies);
    deepCopyField(F_copy, copies);
    deepCopyField(mu, copies);
    deepCopyField(pre_sigma_noise, copies);
    deepCopyField(Ut, copies);
    deepCopyField(U, copies);
    deepCopyField(V, copies);
    deepCopyField(z, copies);
    deepCopyField(inv_Sigma_F, copies);
    deepCopyField(inv_Sigma_z, copies);
    deepCopyField(temp_ncomp, copies);
    deepCopyField(diff_neighbor_input, copies);
    deepCopyField(sm_svd, copies);
    deepCopyField(S, copies);
    deepCopyField(uk, copies);
    deepCopyField(fk, copies);
    deepCopyField(uk2, copies);
    deepCopyField(inv_sigma_zj, copies);
    deepCopyField(zj, copies);
    deepCopyField(inv_sigma_fk, copies);
    deepCopyField(diff, copies);
    deepCopyField(pos_down_val, copies);
    deepCopyField(pos_up_val, copies);
    deepCopyField(neg_down_val, copies);
    deepCopyField(neg_up_val, copies);
    deepCopyField(eigenvectors, copies);
    deepCopyField(eigenvalues, copies);
    deepCopyField(sigma_noises, copies);
    deepCopyField(mus, copies);
    deepCopyField(class_datasets, copies);
    deepCopyField(nearest_neighbors_indices, copies);
    deepCopyField(test_votes, copies);
    deepCopyField(greedy_stages, copies);
}
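
In PLearn's convention, makeDeepCopyFromShallowCopy() is rarely called directly: deepCopy() first makes a shallow copy of the object and then invokes this method with a shared CopiesMap, so that fields referring to the same object are duplicated only once. A minimal usage sketch ('learner' is an assumed, already built instance):

    CopiesMap copies;
    PP<DeepNonLocalManifoldParzen> clone = learner->deepCopy( copies );
    // 'clone' now owns its own layers, connections, etc.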


int PLearn::DeepNonLocalManifoldParzen::outputsize ( ) const [virtual]

Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).

Implements PLearn::PLearner.

Definition at line 537 of file DeepNonLocalManifoldParzen.cc.

{
    //if(currently_trained_layer < n_layers)
    //    return layers[currently_trained_layer]->size;
    //return layers[n_layers-1]->size;
    return 1;
}
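
Since the output has size 1, a caller would typically allocate the output vector from outputsize() before calling computeOutput(); a hypothetical sketch ('learner' and 'input' are assumed names):

    Vec output( learner->outputsize() );     // here, a single value
    learner->computeOutput( input, output );
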
void PLearn::DeepNonLocalManifoldParzen::setLearningRate ( real the_learning_rate ) [private]

Definition at line 1461 of file DeepNonLocalManifoldParzen.cc.

References connections, i, layers, n_layers, and output_connections.

Referenced by train().

{
    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        layers[i]->setLearningRate( the_learning_rate );
        connections[i]->setLearningRate( the_learning_rate );
    }
    layers[n_layers-1]->setLearningRate( the_learning_rate );
    output_connections->setLearningRate( the_learning_rate );
}


void PLearn::DeepNonLocalManifoldParzen::setTrainingSet ( VMat training_set, bool call_forget = true ) [virtual]

Declares the training set.

Then calls build() and forget() if necessary. Also sets this learner's inputsize_, targetsize_ and weightsize_ from those of the training_set. Note: you shouldn't have to override this in subclasses, except maybe to forward the call to an underlying learner.

Reimplemented from PLearn::PLearner.

Definition at line 1426 of file DeepNonLocalManifoldParzen.cc.

References class_datasets, manifold_parzen_parameters_are_up_to_date, n_classes, PLearn::TVec< T >::resize(), and PLearn::PLearner::setTrainingSet().

Referenced by build_().

{
    inherited::setTrainingSet(training_set,call_forget);
    
    manifold_parzen_parameters_are_up_to_date = false;

    // Separate classes
    if( n_classes > 1 )
    {
        class_datasets.resize(n_classes);
        for(int k=0; k<n_classes; k++)
        {
            class_datasets[k] = new ClassSubsetVMatrix();
            class_datasets[k]->classes.resize(1);
            class_datasets[k]->classes[0] = k;
            class_datasets[k]->source = training_set;
            class_datasets[k]->build();
        }
    }

    //class_proportions.resize(n_classes);
    //class_proportions.fill(0);
    //real sum = 0;
    //for(int k=0; k<n_classes; k++)
    //{
    //    class_proportions[k] = class_datasets[k]->length();
    //    sum += class_datasets[k]->length();
    //}
    //class_proportions /= sum;
}
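
Each class_datasets[k] thus acts as a filtered view of train_set containing only the examples of class k, with class_datasets[k]->indices[i] mapping row i of the subset back to a row of the full training set (as used by train() when filling nearest_neighbors_indices). An illustrative sketch, assuming valid indices k and i:

    Vec input( inputsize() ), target( targetsize() );
    real weight;
    class_datasets[k]->getExample( i, input, target, weight );
    PLASSERT( int(round(target[0])) == k );  // subset only holds class k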


void PLearn::DeepNonLocalManifoldParzen::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.

Implements PLearn::PLearner.

Definition at line 598 of file DeepNonLocalManifoldParzen.cc.

References all_connections, all_layers, all_output_connections, all_reconstruction_connections, c, class_datasets, classname(), PLearn::computeNearestNeighbors(), connections, currently_trained_layer, PLearn::TVec< T >::data(), PLearn::endl(), PLearn::fast_exact_is_equal(), PLearn::TVec< T >::fill(), fine_tuning_decrease_ct, fine_tuning_learning_rate, fineTuningStep(), PLearn::VMat::getExample(), getTrainCostNames(), greedy_learning_rate, greedy_stages, greedyStep(), i, PLearn::PLearner::inputsize(), k_neighbors, layers, PLearn::VMat::length(), PLearn::TVec< T >::length(), MISSING_VALUE, n_classes, n_layers, nearest_neighbors_indices, neg_down_val, neg_up_val, PLearn::PLearner::nstages, output_connections, PLERROR, pos_down_val, pos_up_val, reconstruction_activation_gradients, reconstruction_activations, reconstruction_connections, reconstruction_expectation_gradients, PLearn::PLearner::report_progress, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::sample(), setLearningRate(), PLearn::PLearner::stage, PLearn::TVec< T >::subVec(), PLearn::PLearner::targetsize(), PLearn::tostring(), train_one_network_per_class, PLearn::PLearner::train_set, PLearn::PLearner::train_stats, and training_schedule.

{
    MODULE_LOG << "train() called " << endl;
    MODULE_LOG << "  training_schedule = " << training_schedule << endl;

    Vec input( inputsize() );
    Vec nearest_neighbor( inputsize() );
    Mat nearest_neighbors( k_neighbors, inputsize() );
    Vec target( targetsize() );
    Vec target2( targetsize() );
    real weight; // unused
    real weight2; // unused

    TVec<string> train_cost_names = getTrainCostNames() ;
    Vec train_costs( train_cost_names.length() );
    train_costs.fill(MISSING_VALUE) ;

    int nsamples = train_set->length();
    int sample;

    PP<ProgressBar> pb;

    // clear stats of previous epoch
    train_stats->forget();

    int init_stage;

    /***** initial greedy training *****/
    for( int i=0 ; i<n_layers-1 ; i++ )
    {
        MODULE_LOG << "Training connection weights between layers " << i
                   << " and " << i+1 << endl;

        int end_stage = training_schedule[i];
        int* this_stage = greedy_stages.subVec(i,1).data();
        init_stage = *this_stage;

        MODULE_LOG << "  stage = " << *this_stage << endl;
        MODULE_LOG << "  end_stage = " << end_stage << endl;
        MODULE_LOG << "  greedy_learning_rate = " << greedy_learning_rate << endl;

        if( report_progress && *this_stage < end_stage )
            pb = new ProgressBar( "Training layer "+tostring(i)
                                  +" of "+classname(),
                                  end_stage - init_stage );

        train_costs.fill(MISSING_VALUE);
        reconstruction_activations.resize(layers[i]->size);
        reconstruction_activation_gradients.resize(layers[i]->size);
        reconstruction_expectation_gradients.resize(layers[i]->size);

        pos_down_val.resize(layers[i]->size);
        pos_up_val.resize(layers[i+1]->size);
        neg_down_val.resize(layers[i]->size);
        neg_up_val.resize(layers[i+1]->size);

        for( ; *this_stage<end_stage ; (*this_stage)++ )
        {
            sample = *this_stage % nsamples;
            train_set->getExample(sample, input, target, weight);

            if( train_one_network_per_class )
            {
                int c = (int) target[0];
                layers = all_layers[c];
                connections = all_connections[c];
                reconstruction_connections = all_reconstruction_connections[c];
                output_connections = all_output_connections[c];
            }
            greedyStep( input, target, i, train_costs, *this_stage);
            train_stats->update( train_costs );

            if( pb )
                pb->update( *this_stage - init_stage + 1 );
        }
    }

    /***** fine-tuning by gradient descent *****/
    if( stage < nstages )
    {

        if( stage == 0 )
        {
            MODULE_LOG << "Finding the nearest neighbors" << endl;
            // Find training nearest neighbors
            TVec<int> nearest_neighbors_indices_row;
            nearest_neighbors_indices.resize(train_set->length(), k_neighbors);
            if( n_classes > 1 )
                for(int k=0; k<n_classes; k++)
                {
                    for(int i=0; i<class_datasets[k]->length(); i++)
                    {
                        class_datasets[k]->getExample(i,input,target,weight);
                        nearest_neighbors_indices_row = nearest_neighbors_indices(
                            class_datasets[k]->indices[i]);
                        
                        computeNearestNeighbors(
                            new GetInputVMatrix((VMatrix *)class_datasets[k]),input,
                            nearest_neighbors_indices_row,
                            i);
                    }
                }
            else
                for(int i=0; i<train_set->length(); i++)
                {
                    train_set->getExample(i,input,target,weight);
                    nearest_neighbors_indices_row = nearest_neighbors_indices(i);
                    computeNearestNeighbors(
                        train_set,input,
                        nearest_neighbors_indices_row,
                        i);
                }
                
        }

        MODULE_LOG << "Fine-tuning all parameters, by gradient descent" << endl;
        MODULE_LOG << "  stage = " << stage << endl;
        MODULE_LOG << "  nstages = " << nstages << endl;
        MODULE_LOG << "  fine_tuning_learning_rate = " << 
            fine_tuning_learning_rate << endl;

        init_stage = stage;
        if( report_progress && stage < nstages )
            pb = new ProgressBar( "Fine-tuning parameters of all layers of "
                                  + classname(),
                                  nstages - init_stage );

        train_costs.fill(MISSING_VALUE);

        for( ; stage<nstages ; stage++ )
        {
            sample = stage % nsamples;
            train_set->getExample( sample, input, target, weight );

            // Find nearest neighbors
            if( n_classes > 1 )
                for( int k=0; k<k_neighbors; k++ )
                {
                    class_datasets[(int)round(target[0])]->getExample(
                        nearest_neighbors_indices(sample,k),
                        nearest_neighbor, target2, weight2);
                    
                    if(round(target[0]) != round(target2[0]))
                        PLERROR("DeepNonLocalManifoldParzen::train(): similar"
                                " example is not from same class!");
                    nearest_neighbors(k) << nearest_neighbor;
                }
            else
                for( int k=0; k<k_neighbors; k++ )
                {
                    train_set->getExample(
                        nearest_neighbors_indices(sample,k),
                        nearest_neighbor, target2, weight2);
                    nearest_neighbors(k) << nearest_neighbor;
                }
                

            if( train_one_network_per_class )
            {
                int c = (int) target[0];
                layers = all_layers[c];
                connections = all_connections[c];
                reconstruction_connections = all_reconstruction_connections[c];
                output_connections = all_output_connections[c];
            }

            if( !fast_exact_is_equal( fine_tuning_decrease_ct, 0. ) )
                setLearningRate( fine_tuning_learning_rate
                                 / (1. + fine_tuning_decrease_ct * stage ) );
            else
                setLearningRate( fine_tuning_learning_rate );

            fineTuningStep( input, target, train_costs, 
                            nearest_neighbors);
            train_stats->update( train_costs );

            if( pb )
                pb->update( stage - init_stage + 1 );
        }
    }
    
    train_stats->finalize();
    MODULE_LOG << "  train costs = " << train_stats->getMean() << endl;

    // Update currently_trained_layer
    if(stage > 0)
        currently_trained_layer = n_layers;
    else
    {            
        currently_trained_layer = n_layers-1;
        while(currently_trained_layer>1 
              && greedy_stages[currently_trained_layer-1] <= 0)
            currently_trained_layer--;
    }
}
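
Both the greedy phase (see greedyStep()) and the fine-tuning loop above decay the learning rate as a function of the stage number. Written out as a hypothetical helper (the actual code inlines this expression):

    // lr0 at t = 0, shrinking as 1/(1 + decrease_ct * t);
    // a zero decrease constant means a constant rate.
    real decayed_lr( real lr0, real decrease_ct, int t )
    {
        if( fast_exact_is_equal( decrease_ct, 0 ) )
            return lr0;
        return lr0 / ( 1 + decrease_ct * t );
    }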


void PLearn::DeepNonLocalManifoldParzen::updateManifoldParzenParameters ( ) const [virtual]

Precomputes the representations of the training set examples, to speed up nearest neighbors searches in that space.

Definition at line 1362 of file DeepNonLocalManifoldParzen.cc.

References computeManifoldParzenParameters(), eigenvalues, eigenvectors, F, PLearn::VMat::getExample(), i, PLearn::PLearner::inputsize(), PLearn::VMat::length(), manifold_parzen_parameters_are_up_to_date, min_sigma_noise, mu, mus, n_classes, n_components, pre_sigma_noise, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), sigma_noises, sm_svd, PLearn::PLearner::targetsize(), PLearn::PLearner::train_set, and U.

Referenced by computeOutput().

{
    if(!manifold_parzen_parameters_are_up_to_date)
    {
        // Precompute manifold parzen parameters
        Vec input( inputsize() );
        Vec target( targetsize() );
        real weight;
        real sigma_noise;

        eigenvectors.resize(train_set->length());
        eigenvalues.resize(train_set->length(),n_components);
        sigma_noises.resize(train_set->length());
        mus.resize(train_set->length(), inputsize());

        for( int i=0; i<train_set->length(); i++ )
        {
            train_set->getExample(i,input,target,weight);

            if( n_classes > 1 )
                computeManifoldParzenParameters( input, F, mu, 
                                                 pre_sigma_noise, U, sm_svd,
                                                 (int) target[0]);
            else
                computeManifoldParzenParameters( input, F, mu, 
                                                 pre_sigma_noise, U, sm_svd);
            
            sigma_noise = pre_sigma_noise[0]*pre_sigma_noise[0] + min_sigma_noise;

            eigenvectors[i].resize(n_components,inputsize());
            eigenvectors[i] << U;
            eigenvalues(i) << sm_svd;
            sigma_noises[i] = sigma_noise;
            mus(i) << mu;
        }
        
        manifold_parzen_parameters_are_up_to_date = true;
    }
}
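
For reference, the cached quantities correspond, under the usual non-local manifold Parzen formulation (an interpretation, not spelled out in this file), to one Gaussian component per training example:

    p(x) \approx \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}\!\left( x ;\; x_i + \mu_i ,\; \Sigma_i \right),
    \qquad \Sigma_i = \sigma_i^2 I + \sum_{k=1}^{d} \lambda_{ik}\, u_{ik} u_{ik}^{\top}

where, in the code above, u_{ik} is the k-th row of eigenvectors[i], \lambda_{ik} is the entry eigenvalues(i,k), \sigma_i^2 = sigma_noises[i], \mu_i = mus(i), and d = n_components.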



Member Data Documentation


TVec<Vec> PLearn::DeepNonLocalManifoldParzen::activation_gradients [protected]

Stores the gradient of the cost wrt the activations of the input and hidden layers (at the input of the layers)

Definition at line 234 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), fineTuningStep(), and makeDeepCopyFromShallowCopy().

TVec<Vec> PLearn::DeepNonLocalManifoldParzen::activations [protected]

Stores the activations of the input and hidden layers (at the input of the layers)

Definition at line 225 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), computeRepresentation(), fineTuningStep(), greedyStep(), and makeDeepCopyFromShallowCopy().

Parameters for all networks, when training one network per class.

Definition at line 254 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), computeCostsFromOutputs(), computeManifoldParzenParameters(), forget(), makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::all_outputs [protected]

All outputs that give the components and sigma_noise values.

Definition at line 266 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), computeManifoldParzenParameters(), fineTuningStep(), and makeDeepCopyFromShallowCopy().

Vec PLearn::DeepNonLocalManifoldParzen::all_outputs_gradient [protected]

All outputs' gradients.

Definition at line 269 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

real PLearn::DeepNonLocalManifoldParzen::cd_decrease_ct

Contrastive divergence decrease constant.

Definition at line 71 of file DeepNonLocalManifoldParzen.h.

Referenced by declareOptions(), and greedyStep().

real PLearn::DeepNonLocalManifoldParzen::cd_learning_rate

Contrastive divergence learning rate.

Definition at line 68 of file DeepNonLocalManifoldParzen.h.

Referenced by declareOptions(), and greedyStep().

TVec< PP<ClassSubsetVMatrix> > PLearn::DeepNonLocalManifoldParzen::class_datasets [protected]

Datasets for each class.

Definition at line 302 of file DeepNonLocalManifoldParzen.h.

Referenced by computeCostsFromOutputs(), computeOutput(), makeDeepCopyFromShallowCopy(), setTrainingSet(), and train().

int PLearn::DeepNonLocalManifoldParzen::currently_trained_layer [protected]

Currently trained layer (1 means the first hidden layer, n_layers means the output layer)

Definition at line 319 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), computeCostsFromOutputs(), computeOutput(), and train().

Definition at line 279 of file DeepNonLocalManifoldParzen.h.

Referenced by computeOutput(), and makeDeepCopyFromShallowCopy().

bool PLearn::DeepNonLocalManifoldParzen::do_not_learn_sigma_noise

Indication that the value of sigma noise should not be learned.

Definition at line 128 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), computeManifoldParzenParameters(), declareOptions(), and fineTuningStep().

TVec<Vec> PLearn::DeepNonLocalManifoldParzen::expectation_gradients [protected]

Stores the gradient of the cost wrt the expectations of the input and hidden layers (at the output of the layers)

Definition at line 239 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), fineTuningStep(), and makeDeepCopyFromShallowCopy().

TVec<Vec> PLearn::DeepNonLocalManifoldParzen::expectations [protected]

Stores the expectations of the input and hidden layers (at the output of the layers)

Definition at line 229 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), computeCostsFromOutputs(), computeRepresentation(), fineTuningStep(), greedyStep(), and makeDeepCopyFromShallowCopy().

real PLearn::DeepNonLocalManifoldParzen::fine_tuning_decrease_ct

The decrease constant of the learning rate used during fine-tuning gradient descent.

Definition at line 86 of file DeepNonLocalManifoldParzen.h.

Referenced by declareOptions(), and train().

real PLearn::DeepNonLocalManifoldParzen::fine_tuning_learning_rate

The learning rate used during the fine-tuning gradient descent.

Definition at line 82 of file DeepNonLocalManifoldParzen.h.

Referenced by declareOptions(), and train().

Definition at line 279 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

real PLearn::DeepNonLocalManifoldParzen::greedy_decrease_ct

The decrease constant of the learning rate used during the autoassociator gradient descent training.

When a hidden layer has finished its training, the learning rate is reset to its initial value.

Definition at line 79 of file DeepNonLocalManifoldParzen.h.

Referenced by declareOptions(), and greedyStep().

real PLearn::DeepNonLocalManifoldParzen::greedy_learning_rate

The learning rate used during the autoassociator gradient descent training.

Definition at line 74 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), declareOptions(), greedyStep(), and train().

TVec<int> PLearn::DeepNonLocalManifoldParzen::greedy_stages [protected]

Stages of the different greedy phases.

Definition at line 315 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 277 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

Definition at line 279 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

Definition at line 277 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

Definition at line 279 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

int PLearn::DeepNonLocalManifoldParzen::k_neighbors

Number of nearest neighbors to use to learn the manifold structure.

Definition at line 103 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), declareOptions(), fineTuningStep(), and train().

bool PLearn::DeepNonLocalManifoldParzen::manifold_parzen_parameters_are_up_to_date [protected]

Indication that the saved manifold parzen parameters are up to date.

Definition at line 322 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), forget(), greedyStep(), setTrainingSet(), and updateManifoldParzenParameters().

real PLearn::DeepNonLocalManifoldParzen::min_sigma_noise

Minimum value for the noise variance.

Definition at line 109 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), computeOutput(), declareOptions(), fineTuningStep(), and updateManifoldParzenParameters().


TMat<int> PLearn::DeepNonLocalManifoldParzen::nearest_neighbors_indices [protected]

Nearest neighbors for each training example.

Definition at line 309 of file DeepNonLocalManifoldParzen.h.

Referenced by makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::neg_down_val [protected]

Negative down statistic.

Definition at line 286 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::neg_up_val [protected]

Negative up statistic.

Definition at line 288 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().

real PLearn::DeepNonLocalManifoldParzen::output_connections_l1_penalty_factor

Output weights L1 penalty factor.

Definition at line 118 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), and declareOptions().

real PLearn::DeepNonLocalManifoldParzen::output_connections_l2_penalty_factor

Output weights L2 penalty factor.

Definition at line 121 of file DeepNonLocalManifoldParzen.h.

Referenced by build_layers_and_connections(), and declareOptions().

Vec PLearn::DeepNonLocalManifoldParzen::pos_down_val [protected]

Positive down statistic.

Definition at line 282 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::pos_up_val [protected]

Positive up statistic.

Definition at line 284 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::previous_input_representation [protected]

Example representation at the previous layer, in a greedy step.

Definition at line 263 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), and makeDeepCopyFromShallowCopy().

Vec PLearn::DeepNonLocalManifoldParzen::reconstruction_activation_gradients [protected]

Reconstruction activation gradients.

Definition at line 245 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::reconstruction_activations [protected]

Reconstruction activations.

Definition at line 242 of file DeepNonLocalManifoldParzen.h.

Referenced by computeCostsFromOutputs(), greedyStep(), makeDeepCopyFromShallowCopy(), and train().

Vec PLearn::DeepNonLocalManifoldParzen::reconstruction_expectation_gradients [protected]

Reconstruction expectation gradients.

Definition at line 248 of file DeepNonLocalManifoldParzen.h.

Referenced by greedyStep(), makeDeepCopyFromShallowCopy(), and train().

bool PLearn::DeepNonLocalManifoldParzen::save_manifold_parzen_parameters

Indication that the parameters for the manifold parzen windows estimator should be saved during test, to speed up testing.

Definition at line 125 of file DeepNonLocalManifoldParzen.h.

Referenced by computeOutput(), and declareOptions().

Definition at line 278 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

Vec PLearn::DeepNonLocalManifoldParzen::test_votes [protected]

Nearest neighbor votes for test example.

Definition at line 312 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), computeCostsFromOutputs(), computeOutput(), and makeDeepCopyFromShallowCopy().

bool PLearn::DeepNonLocalManifoldParzen::train_one_network_per_class

Indication that one network per class should be trained.

Definition at line 115 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), computeCostsFromOutputs(), computeManifoldParzenParameters(), declareOptions(), forget(), and train().

TVec<int> PLearn::DeepNonLocalManifoldParzen::training_schedule

Number of examples to use during each phase of greedy pre-training.

The number of fine-tuning steps is defined by nstages.

Definition at line 90 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), declareOptions(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 279 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

bool PLearn::DeepNonLocalManifoldParzen::use_test_centric_nlmp

Indication that the Test-Centric NLMP variant should be used.

In this case, train_one_network_per_class must be true.

Definition at line 132 of file DeepNonLocalManifoldParzen.h.

Referenced by build_(), computeCostsFromOutputs(), computeOutput(), and declareOptions().

Variables for the SVD and gradient computation.

Definition at line 277 of file DeepNonLocalManifoldParzen.h.

Referenced by computeManifoldParzenParameters(), and makeDeepCopyFromShallowCopy().

Definition at line 277 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().

Definition at line 279 of file DeepNonLocalManifoldParzen.h.

Referenced by fineTuningStep(), and makeDeepCopyFromShallowCopy().


The documentation for this class was generated from the following files:

    DeepNonLocalManifoldParzen.h
    DeepNonLocalManifoldParzen.cc