PLearn 0.1
PLearn::NnlmOnlineLearner Class Reference

Trains a Neural Network Language Model (NNLM).

#include <NnlmOnlineLearner.h>

Inheritance diagram for PLearn::NnlmOnlineLearner (graph omitted).
Collaboration diagram for PLearn::NnlmOnlineLearner (graph omitted).


Public Member Functions

 NnlmOnlineLearner ()
 Default constructor.
void buildLayers ()
 Builds the layers, i.e. the modules and output_modules.
void buildCandidates ()
 Specific to the gaussian model.
void reevaluateGaussianParameters () const
 Reevaluates "fresh" gaussian mus and sigmas - make sure you want to do this.
void myGetExample (const VMat &example_set, int &sample, Vec &input, Vec &target, real &weight) const
 Interfaces with the ProcessSymbolicSequenceVMatrix's getRow().
virtual int outputsize () const
 Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).
virtual void forget ()
 (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
void test (VMat testset, PP< VecStatsCollector > test_stats, VMat testoutputs, VMat testcosts) const
 Performs test on testset, updating test cost statistics, and optionally filling testoutputs and testcosts.
virtual void computeOutput (const Vec &input, Vec &output) const
 Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Computes the costs from already computed output.
virtual void computeTrainCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
virtual TVec< std::string > getTestCostNames () const
 Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
virtual TVec< std::string > getTrainCostNames () const
 Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual NnlmOnlineLearner * deepCopy (CopiesMap &copies) const
virtual void build ()
 Finish building the object; just call inherited::build followed by build_()
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

string str_input_model
 Defines which model is used.
string str_output_model
int word_representation_size
 Size of the real distributed word representations.
int semantic_layer_size
 Size of the semantic layer.
real wrl_slr
 Neural part parameters.
real wrl_dc
real wrl_wd_l1
real wrl_wd_l2
real sl_slr
real sl_dc
real sl_wd_l1
real sl_wd_l2
string str_gaussian_model_train_cost
 Define behavior.
string str_gaussian_model_learning
real gaussian_model_sigma2_min
real gaussian_model_dl_slr
real gaussian_model_dl_dc
int shared_candidates_size
 Number of candidates to use from different sources in the gaussian model when we use the approx_discriminant cost.
int ngram_candidates_size
int self_candidates_size
VMat ngram_train_set
 Used in determining the C sets of candidate words for normalization in the evaluated discriminant cost.
real sm_slr
real sm_dc
real sm_wd_l1
real sm_wd_l2
TVec< PP< OnlineLearningModule > > modules
 Layers of the learner, separated between the fixed part, which computes up to the "semantic layer", and the variable part (gaussian or softmax).
TVec< PP< OnlineLearningModule > > output_modules
int vocabulary_size
 NNLM related - determined from train_set.
int context_size
PP< NGramDistribution > theNGram
 Used in determining the C sets of candidate words for normalization in the evaluated discriminant cost.
TVec< int > shared_candidates
 Holds candidates.
TVec< TVec< int > > candidates

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

TVec< Vec > values
 Stores the input and output values of the functions.
TVec< Vec > gradients
 Stores the gradients.
TVec< Vec > output_values
 For the second, variable part of the model (starting from 'r', the semantic layer, on).
TVec< Vec > output_gradients

Private Types

enum  { MODEL_TYPE_GAUSSIAN = 0, MODEL_TYPE_SOFTMAX = 1 }
enum  { GAUSSIAN_COST_DISCR = 0, GAUSSIAN_COST_APPROX_DISCR = 1, GAUSSIAN_COST_NON_DISCR = 2 }
enum  { GAUSSIAN_LEARNING_DISCR = 0, GAUSSIAN_LEARNING_EMPIRICAL = 1 }
typedef PLearner inherited

Private Member Functions

void build_ ()
 This does the actual building.

Private Attributes

int nmodules
 Used for loops.
int output_nmodules
int model_type
 Holds model type.
int gaussian_model_cost
int gaussian_model_learning

Detailed Description

Trains a Neural Network Language Model (NNLM).

This learner is based upon the online module architecture.


Definition at line 61 of file NnlmOnlineLearner.h.
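
A minimal usage sketch in C++ (illustrative only: the option values are arbitrary, 'train_vmat' and 'bigram_vmat' are assumed to be pre-built ProcessSymbolicSequenceVMatrix instances, and PLearn learners are more commonly configured through a plearn script):

    // Hypothetical sketch; variable names and values are placeholders.
    PP<NnlmOnlineLearner> learner = new NnlmOnlineLearner();
    learner->str_output_model         = "gaussian";   // or "softmax"
    learner->word_representation_size = 30;
    learner->semantic_layer_size      = 100;
    learner->ngram_train_set = bigram_vmat;           // used by buildCandidates()
    learner->setTrainingSet( train_vmat );            // typically triggers build()
    learner->nstages = 5;
    learner->train();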


Member Typedef Documentation

typedef PLearner PLearn::NnlmOnlineLearner::inherited [private]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.h.


Member Enumeration Documentation

anonymous enum [private]
Enumerator:
MODEL_TYPE_GAUSSIAN 
MODEL_TYPE_SOFTMAX 

Definition at line 274 of file NnlmOnlineLearner.h.

anonymous enum [private]
Enumerator:
GAUSSIAN_COST_DISCR 
GAUSSIAN_COST_APPROX_DISCR 
GAUSSIAN_COST_NON_DISCR 

Definition at line 275 of file NnlmOnlineLearner.h.

anonymous enum [private]
Enumerator:
GAUSSIAN_LEARNING_DISCR 
GAUSSIAN_LEARNING_EMPIRICAL 

Definition at line 276 of file NnlmOnlineLearner.h.


Constructor & Destructor Documentation

PLearn::NnlmOnlineLearner::NnlmOnlineLearner ( )

Default constructor.

Definition at line 84 of file NnlmOnlineLearner.cc.

References PLearn::PLearner::random_gen.

    :   PLearner(),
        str_input_model( "wrl" ),
        str_output_model( "gaussian" ),
        word_representation_size( 30 ),
        semantic_layer_size( 100 ),
        wrl_slr( 0.001 ),
        wrl_dc( 0.0 ),
        wrl_wd_l1( 0.0 ),
        wrl_wd_l2( 0.0 ),
        sl_slr( 0.001 ),
        sl_dc( 0.0 ),
        sl_wd_l1( 0.0 ),
        sl_wd_l2( 0.0 ),
        str_gaussian_model_train_cost( "approx_discriminant" ),
        str_gaussian_model_learning( "non_discriminant" ),
        gaussian_model_sigma2_min(0.000001),
        gaussian_model_dl_slr(0.001),
        shared_candidates_size( 0 ),
        ngram_candidates_size( 50 ),
        self_candidates_size( 0 ),
        sm_slr( 0.001 ),
        sm_dc( 0.0 ),
        sm_wd_l1( 0.0 ),
        sm_wd_l2( 0.0 ),
        vocabulary_size( -1 ),
        context_size( -1 ),
        nmodules( -1 ),
        output_nmodules( -1 ),
        model_type( -1 ),
        gaussian_model_cost( -1 ),
        gaussian_model_learning( -1 )
{
    // ### You may (or not) want to call build_() to finish building the object
    // ### (doing so assumes the parent classes' build_() have been called too
    // ### in the parent classes' constructors, something that you must ensure)

    random_gen = new PRandom();
}

Member Function Documentation

string PLearn::NnlmOnlineLearner::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.cc.

OptionList & PLearn::NnlmOnlineLearner::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.cc.

RemoteMethodMap & PLearn::NnlmOnlineLearner::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.cc.

bool PLearn::NnlmOnlineLearner::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.cc.

Object * PLearn::NnlmOnlineLearner::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 63 of file NnlmOnlineLearner.cc.

void PLearn::NnlmOnlineLearner::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.cc.

void PLearn::NnlmOnlineLearner::build ( ) [virtual]

Finish building the object; just call inherited::build followed by build_()

Reimplemented from PLearn::PLearner.

Definition at line 278 of file NnlmOnlineLearner.cc.

References PLearn::PLearner::build(), and build_().


void PLearn::NnlmOnlineLearner::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PLearner.

Definition at line 288 of file NnlmOnlineLearner.cc.

References buildLayers(), context_size, PLearn::endl(), GAUSSIAN_COST_APPROX_DISCR, GAUSSIAN_COST_DISCR, GAUSSIAN_COST_NON_DISCR, GAUSSIAN_LEARNING_DISCR, GAUSSIAN_LEARNING_EMPIRICAL, gaussian_model_cost, gaussian_model_learning, PLearn::PLearner::inputsize(), PLearn::lowerstring(), model_type, MODEL_TYPE_GAUSSIAN, MODEL_TYPE_SOFTMAX, ngram_train_set, PLERROR, str_gaussian_model_learning, str_gaussian_model_train_cost, str_output_model, PLearn::PLearner::train_set, PLearn::PLearner::verbosity, and vocabulary_size.

Referenced by build().

{
    cout << "NnlmOnlineLearner::build_()" << endl;

    if( !train_set )  {
        return;
    }

    // *** Sanity Checks ***
    // *** Sanity Checks ***
    /*int word_representation_size
    int semantic_layer_size
    real wrl_slr;
    real wrl_dc;
    real wrl_wd_l1;
    real wrl_wd_l2;
    real sl_slr;
    real sl_dc;
    real sl_wd_l1;
    real sl_wd_l2;
    real gaussian_model_sigma2_min
    int shared_candidates_size;
    int ngram_candidates_size;
    int self_candidates_size;
    real sm_slr;
    real sm_dc;
    real sm_wd_l1;
    real sm_wd_l2;*/


    // *** Determine Model ***
    // *** Determine Model  ***

    // * Model type *
    string mt = lowerstring( str_output_model );
    if(  mt == "gaussian" || mt == "" )  {
        model_type = MODEL_TYPE_GAUSSIAN;
    } else if( mt == "softmax" )  {
        model_type = MODEL_TYPE_SOFTMAX;
    } else  {
        PLERROR( "'%s' model type is unknown.\n", mt.c_str() );
    }


    if( model_type == MODEL_TYPE_GAUSSIAN ) {

        // * Gaussian model cost *
        string gmc = lowerstring( str_gaussian_model_train_cost );
        if( gmc == "approx_discriminant" || gmc == "" )  {
            gaussian_model_cost = GAUSSIAN_COST_APPROX_DISCR;
        } else if( gmc == "non_discriminant" )  {
            gaussian_model_cost = GAUSSIAN_COST_NON_DISCR;
        } else if( gmc == "discriminant" )  {
            gaussian_model_cost = GAUSSIAN_COST_DISCR;
        } else  {
            PLERROR( "'%s' gaussian model train cost is unknown.\n", gmc.c_str() );
        }

        // * Gaussian model learning *
        string gml = lowerstring( str_gaussian_model_learning );
        if( gml == "non_discriminant" || gml == "" )  {
            gaussian_model_learning = GAUSSIAN_LEARNING_EMPIRICAL;
        } else if( gml == "discriminant" )  {
            gaussian_model_learning = GAUSSIAN_LEARNING_DISCR;
        } else  {
            PLERROR( "'%s' gaussian model learning is unknown.\n", gml.c_str() );
        }
    }


    // *** Vocabulary size ***
    // *** Vocabulary size ***

    // the train set's dictionary_size +1 for the 'OOV' tag (tag 0) +1 for the 'missing' tag (tag 'dict_size+1')
    vocabulary_size = (train_set->getDictionary(0))->size()+2;

    if( verbosity > 0 ) {
        cout << "\tvocabulary_size = " << vocabulary_size << endl;
    }

    // Ensure MINIMAL dictionary coherence, ie size, with ngram set
    if( model_type == MODEL_TYPE_GAUSSIAN ) {
        if( vocabulary_size != (ngram_train_set->getDictionary(0))->size()+2 )  {
            PLERROR("train_set and ngram_train_set have dictionaries of different sizes.\n");
        }
    }


    // *** Context size ***
    // *** Context size ***

    // The ProcessSymbolicSequenceVMatrix has only input. Last input is used as target.
    context_size = inputsize()-1;

    if( verbosity > 0 ) {
        cout << "\tcontext_size = " << context_size << endl;
    }


    // *** Build modules and output_module ***
    // *** Build modules and output_module ***
    buildLayers();

    cout << "NnlmOnlineLearner::build_() - DONE!" << endl;
}
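
As a concrete example of the vocabulary size computation above: with a train-set dictionary of size 10000, vocabulary_size becomes 10002, where tag 0 is the 'OOV' tag and tag 10001 (dict_size+1) is the 'missing' tag.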


void PLearn::NnlmOnlineLearner::buildCandidates ( )

Specific to the gaussian model.

Definition at line 646 of file NnlmOnlineLearner.cc.

References candidates, PLearn::endl(), PLearn::find(), i, model_type, MODEL_TYPE_GAUSSIAN, ngram_candidates_size, ngram_train_set, PLWARNING, PLearn::TVec< T >::push_back(), PLearn::TVec< T >::resize(), shared_candidates, shared_candidates_size, theNGram, PLearn::PLearner::train_set, vocabulary_size, and PLearn::wordAndFreqGT().

Referenced by buildLayers().

{
    if( model_type != MODEL_TYPE_GAUSSIAN )  {
        PLWARNING("NnlmOnlineLearner::buildCandidates() - model is not of gaussian type. Ignoring call.\n");
        return;
    }

    // *** Train ngram ***
    // *** Train ngram ***

    cout << "NnlmOnlineLearner::buildCandidates()" << endl;
    cout << "\ttraining ngram..." << endl;
    theNGram = new NGramDistribution();

    theNGram->n = ngram_train_set->inputsize();
    theNGram->smoothing = "no_smoothing";
    theNGram->nan_replace = true;
    theNGram->setTrainingSet( ngram_train_set );
    //theNGram->build(); Done in setTrainingSet

    theNGram->train();


    // *** Effective building ***
    // *** Effective building ***

    cout << "\tbuilding candidates..." << endl;

    shared_candidates.resize( shared_candidates_size );
    candidates.resize( vocabulary_size );

    std::vector< wordAndFreq > tmp;
    // temporary list containing the shared candidates
    list<int> l_tmp_shared_candidates;
    list<int>::iterator itr_tmp_shared_candidates;


    // * Determine most frequent words and so the shared_candidates
    TVec<int> unigram( 1 );
    TVec<int> unifreq( 1 );

    // wt means "word tag"
    // Note -> wt=vocabulary_size-1 corresponds to the (-1) tag in the NGramDistribution
    // we skip this tag, the 'missing' tag
    // NOTE Is this appropriate treatment?
    // I don't see how the missing values could occur anywhere except at the beginning so yes.
    for(int wt=0; wt<vocabulary_size-1; wt++)  {
        unigram[0] = wt;
        unifreq = (theNGram->tree)->freq(unigram);
        tmp.push_back( wordAndFreq(wt, unifreq[0]) );
    }

    std::sort(tmp.begin(), tmp.end(), wordAndFreqGT);

    //cout << "These are the shared candidates:" << endl;

    // HACK we don't check if itr has hit the end... unlikely vocabulary_size is smaller
    // than shared_candidates_size
    std::vector< wordAndFreq >::iterator itr_vec;
    itr_vec=tmp.begin();
    for(int i=0; i< shared_candidates_size; i++) {

        cout << (train_set->getDictionary(0))->getSymbol( itr_vec->wordtag ) << "\t";

        shared_candidates[i] = itr_vec->wordtag;
        l_tmp_shared_candidates.push_back(itr_vec->wordtag);
        itr_vec++;
    }

    tmp.clear();

    cout << endl;



    // * Add best candidates according to a bigram
    // wt means "word tag"
    // Note -> wt=vocabulary_size-1 corresponds to the (-1) tag in the NGramDistribution
    // we skip this tag, the 'missing' tag
    // NOTE Is this appropriate treatment?
    map<int, int> frequenciesCopy;
    map<int,int>::iterator itr;
    int n_candidates;

    for(int wt=-1; wt<vocabulary_size-1; wt++)  {

        // - fill list of candidates, then sort
        PP<SymbolNode> node = ((theNGram->tree)->getRoot())->child(wt);
        if(node)  {
            frequenciesCopy = node->getFrequencies();

            itr = frequenciesCopy.begin();
            while( itr != frequenciesCopy.end() ) {
                // -1 is the NGram's missing tag, our vocabulary_size-1 tag
                // Actually, we should not see it as a follower to anything except itself...
                if( itr->first != -1) {
                    tmp.push_back( wordAndFreq( itr->first, itr->second ) );
                } else  {
                    tmp.push_back( wordAndFreq( vocabulary_size-1, itr->second ) );
                }
                itr++;
            }
            std::sort(tmp.begin(), tmp.end(), wordAndFreqGT);

            // - resize candidates entry
            if( ngram_candidates_size < (int) tmp.size() )  {
                n_candidates = ngram_candidates_size; 
            } else  {
                n_candidates = tmp.size();
            }

            if(wt!=-1)  {
                candidates[wt].resize( n_candidates );
            } else  {
                candidates[ vocabulary_size-1 ].resize( n_candidates );
            }

            // - fill candidates entry

            itr_vec=tmp.begin();
            for(int i=0; i< n_candidates; i++) {
                //cout << (train_set->getDictionary(0))->getSymbol( itr_vec->wordtag ) << "\t";

                // ONLY ADD IF NOT IN THE SHARED CANDIDATES
                // Search the list.
                itr_tmp_shared_candidates = find( l_tmp_shared_candidates.begin(), l_tmp_shared_candidates.end(), itr_vec->wordtag);

                // if not found -> add it
                if (itr_tmp_shared_candidates == l_tmp_shared_candidates.end())
                {
                    if( itr_vec->wordtag > vocabulary_size-1 )
                        cout << "NnlmOnlineLearner::buildCandidates() - problem " << itr_vec->wordtag << endl;

                    if(wt!=-1)  {
                        candidates[wt][i] = itr_vec->wordtag;
                    } else  {
                        candidates[ vocabulary_size-1 ][i] = itr_vec->wordtag;
                    }
                // compensate for not adding this word
                } else  {
                    i--;
                    n_candidates--;
                }
                itr_vec++;
            }
            // compensate for not adding words
            if(wt!=-1)  {
                candidates[wt].resize( n_candidates );
            } else  {
                candidates[ vocabulary_size-1 ].resize( n_candidates );
            }

            tmp.clear();
        }
    }
    l_tmp_shared_candidates.clear();

}
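
To make the role of these candidate sets explicit (a sketch of the intent, inferred from the options above rather than spelled out in this file): in the approx_discriminant cost the normalizer of p(i|r) is summed only over a restricted set instead of the whole vocabulary, roughly

    p(i|r) ~ p(r|i) p(i) / sum_{j in shared_candidates U candidates[w_prev]} p(r|j) p(j)

where w_prev is the previous context word used to index the bigram-derived candidates.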


void PLearn::NnlmOnlineLearner::buildLayers ( )

builds the layers, ie modules and output_modules

Definition at line 400 of file NnlmOnlineLearner.cc.

References buildCandidates(), candidates, context_size, GAUSSIAN_COST_APPROX_DISCR, GAUSSIAN_COST_DISCR, GAUSSIAN_COST_NON_DISCR, gaussian_model_cost, gaussian_model_learning, gaussian_model_sigma2_min, gradients, i, PLearn::PLearner::inputsize(), PLearn::lowerstring(), model_type, MODEL_TYPE_GAUSSIAN, modules, nmodules, output_gradients, output_modules, output_nmodules, output_values, PLERROR, PLWARNING, PLearn::PLearner::random_gen, PLearn::TVec< T >::resize(), semantic_layer_size, shared_candidates, sl_dc, sl_slr, sl_wd_l1, sl_wd_l2, sm_dc, sm_slr, sm_wd_l1, sm_wd_l2, PLearn::sqrt(), str_input_model, values, vocabulary_size, word_representation_size, wrl_dc, wrl_slr, wrl_wd_l1, and wrl_wd_l2.

Referenced by build_().

{

    // *** Do we have to build the layers, or did we load them? ***

    if( nmodules <= 0 ) {

        //------------------------------------------
        // 1) Fixed part - up to the semantic layer
        //------------------------------------------
        nmodules = 3;
        modules.resize( nmodules );

        // *** First layer ***
        string ilm = lowerstring( str_input_model );
        if( ilm == "wrl" || ilm == "" )  {
            // *** Word representation layer ***
            // *** Word representation layer ***
            PP< NnlmWordRepresentationLayer > p_wrl = new NnlmWordRepresentationLayer();

            p_wrl->input_size = context_size;
            p_wrl->output_size = context_size * word_representation_size;

            p_wrl->start_learning_rate = wrl_slr;
            p_wrl->decrease_constant = wrl_dc;
            //TODO
            //p_wrl->L1_penalty_factor = wrl_wd_l1;
            //p_wrl->L2_penalty_factor = wrl_wd_l2;
            p_wrl->vocabulary_size = vocabulary_size;
            p_wrl->word_representation_size = word_representation_size;
            p_wrl->context_size = context_size;
            p_wrl->random_gen = random_gen;

            modules[0] = p_wrl;

        } else if( ilm == "gnnl" )  {
            PP< GradNNetLayerModule > p_nnl = new GradNNetLayerModule();

            p_nnl->input_size = inputsize();
            p_nnl->output_size = inputsize() * word_representation_size;

            p_nnl->start_learning_rate = wrl_slr;
            p_nnl->decrease_constant = wrl_dc;
            p_nnl->L1_penalty_factor = wrl_wd_l1;
            p_nnl->L2_penalty_factor = wrl_wd_l2;

            p_nnl->init_weights_random_scale=sqrt(p_nnl->input_size);
            p_nnl->random_gen = random_gen;

            modules[0] = p_nnl;

        } else  {
            PLERROR( "'%s' input layer model is unknown.\n", ilm.c_str() );
        }



        // *** GradNNetLayer ***
        // *** GradNNetLayer ***
        PP< GradNNetLayerModule > p_nnl = new GradNNetLayerModule();

        p_nnl->input_size = context_size * word_representation_size;
        p_nnl->output_size = semantic_layer_size;

        p_nnl->start_learning_rate = sl_slr;
        p_nnl->decrease_constant = sl_dc;
        p_nnl->L1_penalty_factor = sl_wd_l1;
        p_nnl->L2_penalty_factor = sl_wd_l2;
        p_nnl->init_weights_random_scale=3.0*sqrt(p_nnl->input_size);
        p_nnl->random_gen = random_gen;

        modules[1] = p_nnl;


        // *** Tanh layer ***
        // *** Tanh layer ***
        PP< TanhModule > p_thm = new TanhModule();

        p_thm->input_size = semantic_layer_size;
        p_thm->output_size = semantic_layer_size;

        modules[2] = p_thm;


        //------------------------------------------
        // 2) Variable part - over semantic layer
        //------------------------------------------

        if( model_type == MODEL_TYPE_GAUSSIAN )  {

            output_nmodules = 1;
            output_modules.resize( output_nmodules );


            // *** NnlmOutputLayer ***
            PP< NnlmOutputLayer > p_nol = new NnlmOutputLayer();

            p_nol->input_size = semantic_layer_size;
            p_nol->output_size = 1;
            // the missing tag does NOT get an output (never is the target)
            p_nol->target_cardinality = vocabulary_size-1;
            p_nol->sigma2min = gaussian_model_sigma2_min;
            p_nol->context_cardinality = vocabulary_size;
            p_nol->dl_start_learning_rate = 0.0001;
            //TODO Set cost and learning 
            //int gaussian_model_cost;
            //int gaussian_model_learning;

            output_modules[0] = p_nol;
            output_modules[0]->build();

        } else {

            output_nmodules = 2;
            output_modules.resize( output_nmodules );

            // *** GradNNetLayer ***
            // *** GradNNetLayer ***
            PP< GradNNetLayerModule > p_sm_nnl = new GradNNetLayerModule();

            p_sm_nnl->input_size = semantic_layer_size;  
            // the missing tag does NOT get an output (never is the target)
            p_sm_nnl->output_size = vocabulary_size-1;

            p_sm_nnl->start_learning_rate = sm_slr;
            p_sm_nnl->decrease_constant = sm_dc;
            p_sm_nnl->L1_penalty_factor = sm_wd_l1;
            p_sm_nnl->L2_penalty_factor = sm_wd_l2;
            p_sm_nnl->init_weights_random_scale=3.0*sqrt(p_sm_nnl->input_size);
            p_sm_nnl->random_gen = random_gen;

            output_modules[0] = p_sm_nnl;
            output_modules[0]->build();


            // *** Softmax ***
            output_modules[1] = new NLLErrModule();
            // the missing tag does NOT get an output (never is the target)
            output_modules[1]->input_size = vocabulary_size-1;
            output_modules[1]->output_size = 1;

            output_modules[1]->build();

            //
            output_values.resize( 1 );
            output_gradients.resize( 1 );
            // TODO should improve this
            // +1 so we can add the target in the last spot
            output_values[0].resize( vocabulary_size );
            output_gradients[0].resize( vocabulary_size-1 );
        }
    } 

    // ***  Check on layer size compatibilities, resize values and gradients, and build ***
    // ***  Check on layer size compatibilities, resize values and gradients, and build ***
    // TODO Right now we simply check up to the semantic layer. And we don't check compatibility
    // with context_size and word_representation_size and semantic_layer_size.

    // variables
    values.resize( nmodules+1 );
    gradients.resize( nmodules+1 );

    // first values will be "input" values
    int size = context_size;
    values[0].resize( size );
    gradients[0].resize( size );

    for( int i=0 ; i<nmodules ; i++ )
    {
        PP<OnlineLearningModule> p_module = modules[i];

        if( p_module->input_size != size )
        {
            PLWARNING( "NnlmOnlineLearner::buildLayers(): module '%d'\n"
                       "has an input size of '%d', but previous layer's output"
                       " size\n"
                       "is '%d'. Resizing module '%d'.\n",
                       i, p_module->input_size, size, i);
            p_module->input_size = size;
        }

        p_module->estimate_simpler_diag_hessian = true;

        p_module->build();

        size = p_module->output_size;
        values[i+1].resize( size );
        gradients[i+1].resize( size );
    }

    // *** Gaussian Model ***
    // *** Gaussian Model ***

    if( model_type == MODEL_TYPE_GAUSSIAN )  {

        // * Build candidates
        if( gaussian_model_cost == GAUSSIAN_COST_APPROX_DISCR )  {
            buildCandidates();
        }

        // * Set 
        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::build_() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        // TODO clean this
        // point to the same place
        p_nol->shared_candidates = shared_candidates;
        p_nol->candidates = candidates;

        // TODO Set learning method - discriminant or non-discriminant
        p_nol->setLearning(gaussian_model_learning);

        // Set Cost 
        if( gaussian_model_cost == GAUSSIAN_COST_APPROX_DISCR ) {
            p_nol->setCost(GAUSSIAN_COST_APPROX_DISCR);
        } else if( gaussian_model_cost == GAUSSIAN_COST_NON_DISCR ) {
            p_nol->setCost(GAUSSIAN_COST_NON_DISCR);
        } else { //GAUSSIAN_COST_DISCR
            p_nol->setCost(GAUSSIAN_COST_DISCR);
        }

        //evaluateGaussianCounts();
        //reevaluateGaussianParameters();
        // * 

        // Not here, because forget will be called after and it resets mus and sigmas
        // Initialize mus and sigmas using 1 pass
        //reevaluateGaussianParameters();


        // ### Should only be evaluated once
        //p_nol->sumI << p_nol->test_sumI;
        //p_nol->s_sumI = p_nol->test_s_sumI;

    }


}
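
In summary, the layers assembled above implement the following pipeline (sizes taken from the module settings in the code):

    input: context (context_size word tags)
      -> modules[0]  word representation layer  output: context_size * word_representation_size
      -> modules[1]  GradNNetLayerModule        output: semantic_layer_size
      -> modules[2]  TanhModule                 output: semantic_layer_size  ('r', the semantic layer)
      -> output_modules: NnlmOutputLayer (gaussian) or GradNNetLayerModule + NLLErrModule (softmax)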


string PLearn::NnlmOnlineLearner::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 63 of file NnlmOnlineLearner.cc.

void PLearn::NnlmOnlineLearner::computeCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]

Computes the costs from already computed output.

Computes the test costs, i.e. the full discriminant cost (NLL).

TODO: See about how to include/print the perplexity.

Implements PLearn::PLearner.

Definition at line 1377 of file NnlmOnlineLearner.cc.

References GAUSSIAN_COST_APPROX_DISCR, GAUSSIAN_COST_DISCR, GAUSSIAN_COST_NON_DISCR, gaussian_model_cost, model_type, MODEL_TYPE_GAUSSIAN, output_modules, output_values, PLERROR, PLearn::TVec< T >::subVec(), and vocabulary_size.

{
    if( model_type == MODEL_TYPE_GAUSSIAN )  {

        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::computeCostsFromOutputs - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        p_nol->setCost(GAUSSIAN_COST_DISCR);
        p_nol->setTarget( (int)target[0] );
        p_nol->fprop( output, costs);

        // Re-Set Cost 
        if( gaussian_model_cost == GAUSSIAN_COST_APPROX_DISCR ) {
            p_nol->setCost(GAUSSIAN_COST_APPROX_DISCR);
        } else  { //GAUSSIAN_COST_NON_DISCR
            p_nol->setCost(GAUSSIAN_COST_NON_DISCR);
        }

    } else  {
        Vec example_cost(1);

        Vec bob(vocabulary_size-1);

        output_modules[0]->fprop( output, bob );
        output_values[0].subVec( 0, vocabulary_size-1 ) << bob;
        // output_values[0][vocabulary_size-1] contains the target index from myGetExample
        output_modules[1]->fprop( output_values[0], example_cost);

        costs[0] = example_cost[0];
    }

}
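
Note that in the gaussian case the output module's cost is temporarily switched to GAUSSIAN_COST_DISCR, so the test cost is always the exact discriminant NLL; the training cost (approx_discriminant or non_discriminant) is restored afterwards.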


void PLearn::NnlmOnlineLearner::computeOutput ( const Vec & input, Vec & output ) const [virtual]

Computes the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 1301 of file NnlmOnlineLearner.cc.

References i, modules, nmodules, outputsize(), PLearn::TVec< T >::resize(), and values.

Referenced by reevaluateGaussianParameters(), and train().

{
//cout << "************************************" << endl;

    // fprop
    values[0] << input;
    for( int i=0 ; i<nmodules ; i++ ) {
        modules[i]->fprop( values[i], values[i+1] );

//cout << "-= " << i << " =-" << endl;
//cout << values[i] << endl;        
    }
//cout << "-= " << nmodules << " =-" << endl;
//cout <<values[ nmodules ] << endl;

    // 
    output.resize( outputsize() );
    output << values[ nmodules ];

}


void PLearn::NnlmOnlineLearner::computeTrainCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]

Definition at line 1327 of file NnlmOnlineLearner.cc.

References PLearn::PLearner::inputsize(), model_type, MODEL_TYPE_GAUSSIAN, output_modules, output_values, PLERROR, PLearn::TVec< T >::subVec(), and vocabulary_size.

Referenced by train().

{

    if( model_type == MODEL_TYPE_GAUSSIAN )  {

        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::computeTrainCostsAndGradientsFromOutputs - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        p_nol->setTarget( (int) target[0] );
        p_nol->setContext( (int) input[ (int) (inputsize()-2) ] );

        p_nol->fprop( output, costs );

    } else  {
        Vec example_cost(1);
        // don't give the target to the gradnnetlayermodule
        Vec bob(vocabulary_size-1);
/*
    Vec out_tgt = output.copy();
    out_tgt.append( target );
    for( int i=0 ; i<ncosts ; i++ )
    {
        Vec cost(1);
        cost_modules[i]->fprop( out_tgt, cost );
        costs[i] = cost[0];
    }

*/
        //output_modules[0]->fprop( output, output_values[0].subVec( 0, vocabulary_size-1 ) );

        // output_values[0][vocabulary_size-1] contains the target index myGetExample
        output_modules[0]->fprop( output, bob );
        output_values[0].subVec( 0, vocabulary_size-1 ) << bob;
        output_modules[1]->fprop( output_values[0], example_cost);

        costs[0] = example_cost[0];
    }


}


void PLearn::NnlmOnlineLearner::declareOptions ( OptionList & ol ) [static, protected]

Declares the class options.

Reimplemented from PLearn::PLearner.

Definition at line 127 of file NnlmOnlineLearner.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PLearner::declareOptions(), gaussian_model_dl_dc, gaussian_model_dl_slr, gaussian_model_sigma2_min, modules, ngram_candidates_size, ngram_train_set, output_modules, self_candidates_size, semantic_layer_size, shared_candidates_size, sl_dc, sl_slr, sl_wd_l1, sl_wd_l2, sm_dc, sm_slr, sm_wd_l1, sm_wd_l2, str_gaussian_model_learning, str_gaussian_model_train_cost, str_input_model, str_output_model, word_representation_size, wrl_dc, wrl_slr, wrl_wd_l1, and wrl_wd_l2.

{

    // *** Build Options *** 

    // * Model type * 
    declareOption(ol, "str_input_model",
                  &NnlmOnlineLearner::str_input_model,
                  OptionBase::buildoption,
                  "Specifies what's used as input layer: wrl (default - word representation layer) or gnnl (gradnnetlayer).");
    declareOption(ol, "str_output_model",
                  &NnlmOnlineLearner::str_output_model,
                  OptionBase::buildoption,
                  "Specifies what's used on top of the semantic layer: 'softmax' or 'gaussian'(default).");

    // * Model size * 
    declareOption(ol, "word_representation_size",
                  &NnlmOnlineLearner::word_representation_size,
                  OptionBase::buildoption,
                  "Size of the real distributed word representation.");

    declareOption(ol, "semantic_layer_size",
                  &NnlmOnlineLearner::semantic_layer_size,
                  OptionBase::buildoption,
                  "Size of the semantic layer.");

    // * Same part parameters
    declareOption(ol, "wrl_slr",
                  &NnlmOnlineLearner::wrl_slr,
                  OptionBase::buildoption,
                  "Word representation layer start learning rate.");
    declareOption(ol, "wrl_dc",
                  &NnlmOnlineLearner::wrl_dc,
                  OptionBase::buildoption,
                  "Word representation layer decrease constant.");
    declareOption(ol, "wrl_wd_l1",
                  &NnlmOnlineLearner::wrl_wd_l1,
                  OptionBase::buildoption,
                  "Word representation layer L1 penalty factor.");
    declareOption(ol, "wrl_wd_l2",
                  &NnlmOnlineLearner::wrl_wd_l2,
                  OptionBase::buildoption,
                  "Word representation layer L2 penalty factor.");
    declareOption(ol, "sl_slr",
                  &NnlmOnlineLearner::sl_slr,
                  OptionBase::buildoption,
                  "Semantic layer start learning rate.");
    declareOption(ol, "sl_dc",
                  &NnlmOnlineLearner::sl_dc,
                  OptionBase::buildoption,
                  "Semantic layer decrease constant.");
    declareOption(ol, "sl_wd_l1",
                  &NnlmOnlineLearner::sl_wd_l1,
                  OptionBase::buildoption,
                  "Semantic layer L1 penalty factor.");
    declareOption(ol, "sl_wd_l2",
                  &NnlmOnlineLearner::sl_wd_l2,
                  OptionBase::buildoption,
                  "Semantic layer L2 penalty factor.");


    // * Gaussian model specific

    // - model behavior
    // TODO how about combining the two costs: maybe jumpstart with one
    declareOption(ol, "str_gaussian_model_train_cost",
                  &NnlmOnlineLearner::str_gaussian_model_train_cost,
                  OptionBase::buildoption,
                  "In case of a gaussian output module, specifies the cost used for training (i a word, r a semantic layer representation) : 'discriminant' (computes p(i|r) exactly, with full computation of normalizer), 'approx_discriminant' (default - uses some candidate words for normalization) or 'non_discriminant' (uses p(r|i)).");

    declareOption(ol, "str_gaussian_model_learning",
                  &NnlmOnlineLearner::str_gaussian_model_learning,
                  OptionBase::buildoption,
                  "In case of a gaussian output module, specifies the learning technique: 'discriminant' or 'non_discriminant' (default - evaluates empirical mu and sigma).");

    declareOption(ol, "gaussian_model_sigma2_min",
                  &NnlmOnlineLearner::gaussian_model_sigma2_min,
                  OptionBase::buildoption,
                  "In case of a gaussian output module, specifies the minimal sigma^2.");

    declareOption(ol, "gaussian_model_dl_slr",
                  &NnlmOnlineLearner::gaussian_model_dl_slr,
                  OptionBase::buildoption,
                  "In case of a gaussian output module with discriminant learning, this specifies the starting learning rate.");

    declareOption(ol, "gaussian_model_dl_dc",
                  &NnlmOnlineLearner::gaussian_model_dl_dc,
                  OptionBase::buildoption,
                  "In case of a gaussian output module with discriminant learning, this specifies the decrease constant.");

    // - Candidate set sizes
    declareOption(ol, "shared_candidates_size",
                  &NnlmOnlineLearner::shared_candidates_size,
                  OptionBase::buildoption,
                  "Number of candidates drawn from frequent words in aproximate discriminant cost evaluation.");

    declareOption(ol, "ngram_candidates_size",
                  &NnlmOnlineLearner::ngram_candidates_size,
                  OptionBase::buildoption,
                  "Number of candidates drawn from the context (using bigram) in aproximated discriminant cost evaluation.");

    declareOption(ol, "self_candidates_size",
                  &NnlmOnlineLearner::self_candidates_size,
                  OptionBase::buildoption,
                  "Number of candidates drawn from the nnlm in aproximated discriminant cost evaluation  (evaluated periodically). NOT IMPLEMENTED!!");

    // - Ngram (for evaluating ngram candidates) train set
    declareOption(ol, "ngram_train_set",
                  &NnlmOnlineLearner::ngram_train_set,
                  OptionBase::buildoption,
                  "Train set used for training the bigram used in the evaluation of the set of candidate words used for normalization   in the evaluated discriminant cost (ProcessSymbolicSequenceVMatrix) (ONLY BIGRAMS).");

    // * Softmax specific

    declareOption(ol, "sm_slr",
                  &NnlmOnlineLearner::sm_slr,
                  OptionBase::buildoption,
                  "Softmax layer start learning rate.");
    declareOption(ol, "sm_dc",
                  &NnlmOnlineLearner::sm_dc,
                  OptionBase::buildoption,
                  "Softmax layer decrease constant.");
    declareOption(ol, "sm_wd_l1",
                  &NnlmOnlineLearner::sm_wd_l1,
                  OptionBase::buildoption,
                  "Softmax layer L1 penalty factor.");
    declareOption(ol, "sm_wd_l2",
                  &NnlmOnlineLearner::sm_wd_l2,
                  OptionBase::buildoption,
                  "Softmax layer L2 penalty factor.");


    // *** Learnt Options *** 

    declareOption(ol, "modules", &NnlmOnlineLearner::modules,
                  OptionBase::buildoption,
                  "Layers of the learner");

    declareOption(ol, "output_modules", &NnlmOnlineLearner::output_modules,
                  OptionBase::buildoption,
                  "Output layers");

    // TODO Are there missing things here?

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}
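
Options can also be set programmatically through the generic Object option interface (a minimal sketch, assuming the standard string-valued PLearn::Object::setOption()):

    // Hypothetical sketch; option values are placeholders.
    PP<NnlmOnlineLearner> p = new NnlmOnlineLearner();
    p->setOption( "str_output_model", "softmax" );
    p->setOption( "semantic_layer_size", "150" );
    p->build();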


static const PPath& PLearn::NnlmOnlineLearner::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 221 of file NnlmOnlineLearner.h.

NnlmOnlineLearner * PLearn::NnlmOnlineLearner::deepCopy ( CopiesMap & copies ) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NnlmOnlineLearner.cc.

void PLearn::NnlmOnlineLearner::forget ( ) [virtual]

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).

Reimplemented from PLearn::PLearner.

Definition at line 939 of file NnlmOnlineLearner.cc.

References PLearn::TVec< T >::clear(), PLearn::PLearner::forget(), gradients, i, model_type, MODEL_TYPE_SOFTMAX, modules, nmodules, output_gradients, output_modules, output_nmodules, output_values, PLearn::PLearner::stage, and values.

{
    inherited::forget();

    // reset inputs
    values[0].clear();
    gradients[0].clear();
    // reset modules and outputs
    for( int i=0 ; i<nmodules ; i++ )
    {
        modules[i]->forget();
        values[i+1].clear();
        gradients[i+1].clear();
    }

    if( model_type == MODEL_TYPE_SOFTMAX )  {
        output_values[0].clear();
        output_gradients[0].clear();
    }
    for( int i=0 ; i<output_nmodules; i++ )
    {
        output_modules[i]->forget();
    }

    stage = 0;
}


OptionList & PLearn::NnlmOnlineLearner::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 63 of file NnlmOnlineLearner.cc.

OptionMap & PLearn::NnlmOnlineLearner::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 63 of file NnlmOnlineLearner.cc.

RemoteMethodMap & PLearn::NnlmOnlineLearner::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 63 of file NnlmOnlineLearner.cc.

TVec< string > PLearn::NnlmOnlineLearner::getTestCostNames ( ) const [virtual]

Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).

Implements PLearn::PLearner.

Definition at line 1418 of file NnlmOnlineLearner.cc.

References PLearn::TVec< T >::resize().

Referenced by test().

{
    // Return the names of the costs computed by computeCostsFromOutputs
    // (these may or may not be exactly the same as what's returned by
    // getTrainCostNames).
    TVec<string> ret;
    ret.resize(1);
    ret[0] = "NLL";
    return ret;
}


TVec< string > PLearn::NnlmOnlineLearner::getTrainCostNames ( ) const [virtual]

Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 1433 of file NnlmOnlineLearner.cc.

References model_type, MODEL_TYPE_GAUSSIAN, and PLearn::TVec< T >::resize().

Referenced by train().

{
    // Return the names of the objective costs that the train method computes
    // and for which it updates the VecStatsCollector train_stats
    // (these may or may not be exactly the same as what's returned by
    // getTestCostNames).
    TVec<string> ret;

    if( model_type == MODEL_TYPE_GAUSSIAN )  {
        ret.resize(2);
        ret[0] = "non_discriminant";
        ret[1] = "approx_discriminant";
    } else  {
        ret.resize(1);
        ret[0] = "NLL";
    }

    return ret;
}


void PLearn::NnlmOnlineLearner::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 899 of file NnlmOnlineLearner.cc.

References PLearn::deepCopyField(), gradients, PLearn::PLearner::makeDeepCopyFromShallowCopy(), modules, output_gradients, output_modules, output_values, and values.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(modules, copies);
    deepCopyField(values, copies);
    deepCopyField(gradients, copies);

    deepCopyField(output_modules, copies);
    deepCopyField(output_values, copies);
    deepCopyField(output_gradients, copies);

    // ### How about these?
    //ngram_train_set
    //theNGram
    //shared_candidates
    //candidates

}


void PLearn::NnlmOnlineLearner::myGetExample ( const VMat & example_set, int & sample, Vec & input, Vec & target, real & weight ) const

Interfaces with the ProcessSymbolicSequenceVMatrix's getRow()

Definition at line 975 of file NnlmOnlineLearner.cc.

References i, PLearn::PLearner::inputsize(), PLearn::is_missing(), model_type, MODEL_TYPE_SOFTMAX, output_values, PLearn::TVec< T >::resize(), PLearn::TVec< T >::subVec(), vocabulary_size, and PLearn::PLearner::weightsize().

Referenced by reevaluateGaussianParameters(), test(), and train().

{
    static Vec row;
    // the actual inputsize is (inputsize()-1) and targetsize() is 1
    row.resize( inputsize() + weightsize() );

    example_set->getRow( sample, row);

    input << row.subVec( 0, inputsize()-1 );
    target << row.subVec( inputsize()-1, 1 );
    weight = 1.0;
    if( weightsize() )  {
        weight = row[ inputsize() ];
    }

    // *** SHOULD BE DONE IN PRETREATMENT!!! -> but we have a ProcessSymbolicSequenceVMatrix...
    // * Replace nan in input by '(train_set->getDictionary(0))->size()+1', 
    // the missing value tag
    for( int i=0 ; i < inputsize()-1 ; i++ ) {
      if( is_missing(input[i]) )  {
        input[i] = vocabulary_size - 1;
      }
    }
    // * Replace a 'nan' in the target by OOV
    // this nan should not be missing data (seeing the train_set is a
    // ProcessSymbolicSequenceVMatrix)
    // but the word "nan", ie "Mrs Nan said she would blabla"
    // *** Problem however - current vocabulary is full for train_set,
    // ie we train OOV on nan-word instances.
    // DO a pretreatment to replace Nan by *Nan* or something like it
    if( is_missing(target[0]) ) {
        target[0] = 0;
    }
    // *** SHOULD BE DONE IN PRETREATMENT!!!

    // set target for the nllerrmodule
    if( model_type == MODEL_TYPE_SOFTMAX )  {
        output_values[0][vocabulary_size-1]=target[0];
    }

}


int PLearn::NnlmOnlineLearner::outputsize ( ) const [virtual]

Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).

This is the output size of the layer before the cost layer.

Implements PLearn::PLearner.

Definition at line 924 of file NnlmOnlineLearner.cc.

References PLearn::TVec< T >::length(), nmodules, and values.

Referenced by computeOutput(), reevaluateGaussianParameters(), test(), and train().

{
    if( nmodules < 0 || values.length() <= nmodules )
        return -1;
    else
        return values[ nmodules ].length();
}


void PLearn::NnlmOnlineLearner::reevaluateGaussianParameters ( ) const

Reevaluates "fresh" gaussian mus and sigmas - make sure you want to do this.

Definition at line 851 of file NnlmOnlineLearner.cc.

References computeOutput(), PLearn::endl(), PLearn::PLearner::inputsize(), PLearn::VMat::length(), model_type, MODEL_TYPE_GAUSSIAN, myGetExample(), output_modules, outputsize(), PLERROR, PLWARNING, PLearn::sample(), and PLearn::PLearner::train_set.

Referenced by test(), and train().

{
    cout << "Evaluating gaussian parameters..." << endl;

    if( model_type != MODEL_TYPE_GAUSSIAN )  {
        PLWARNING( "NnlmOnlineLearner::reevaluateGaussianParameters(): not a gaussian model. Ignoring call.\n");
        return;
    }

    Vec input( inputsize()-1 );
    Vec target( 1 );
    real weight;
    Vec output( outputsize() );   // the output of the semantic layer
    int nsamples = train_set->length();

    PP<NnlmOutputLayer> p_nol;
    if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
    {
        PLERROR("NnlmOnlineLearner::reevaluateGaussianParameters() - output_modules[0] is not an NnlmOutputLayer");
    }

    p_nol->resetAllClassVars();

    // * Compute stats
    for( int sample=0 ; sample < nsamples ; sample++ )
    {
        myGetExample(train_set, sample, input, target, weight );

        // * fprop
        computeOutput(input, output);

        //p_nol->setTarget( (int) target[0]);
        //p_nol->setContext( (int) input[ (inputsize()-2) ] );

        p_nol->updateClassVars((int) target[0], output);
    }

    // * Apply values 
    p_nol->applyAllClassVars();


}
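
In the empirical ('non_discriminant') learning case, the pass above amounts to the following estimates (an interpretation inferred from resetAllClassVars()/updateClassVars()/applyAllClassVars() and the sigma2min option, not spelled out in this file):

    mu_i     = mean of the semantic-layer outputs r over the samples whose target is word i
    sigma2_i = per-dimension empirical variance of those r, floored at gaussian_model_sigma2_min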


void PLearn::NnlmOnlineLearner::test ( VMat testset, PP< VecStatsCollector > test_stats, VMat testoutputs, VMat testcosts ) const [virtual]

Performs test on testset, updating test cost statistics, and optionally filling testoutputs and testcosts.

The default version repeatedly calls computeOutputAndCosts or computeCostsOnly. Note that neither test_stats->forget() nor test_stats->finalize() is called, so that you should call them yourself (respectively before and after calling this method) if you don't plan to accumulate statistics.

Reimplemented from PLearn::PLearner.

Definition at line 1190 of file NnlmOnlineLearner.cc.

References candidates, PLearn::PLearner::computeOutputAndCosts(), PLearn::endl(), PLearn::entropy(), PLearn::TVec< T >::fill(), getTestCostNames(), i, PLearn::PLearner::inputsize(), PLearn::VMat::length(), model_type, MODEL_TYPE_GAUSSIAN, myGetExample(), output_modules, outputsize(), PLERROR, reevaluateGaussianParameters(), PLearn::PLearner::report_progress, PLearn::safeexp(), PLearn::sample(), PLearn::TVec< T >::size(), PLearn::PLearner::stage, and vocabulary_size.

{

    Vec input( inputsize()-1 );
    Vec output( outputsize() );
    Vec target( 1 );
    real weight;
    Vec test_costs( getTestCostNames().length() );
    real entropy = 0.0;
    real perplexity = 0.0;
    int nsamples = testset->length();


    // * Empty test set: we give -1 cost arbitrarily.
    if (nsamples == 0) {
        test_costs.fill(-1);
        test_stats->update(test_costs);
    }

    if( stage == 0 )  {
        // Initialize mus and sigmas using 1 pass
        reevaluateGaussianParameters();
    }


    // * TODO Should we do this?
    //reevaluateGaussianParameters();

    if( model_type == MODEL_TYPE_GAUSSIAN )  {

        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::train() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        cout << "global_mu " << p_nol->global_mu << endl;
        cout << "global_sigma2 " << p_nol->global_sigma2 << endl;
    }



    PP<ProgressBar> pb;
    if(report_progress)
        pb = new ProgressBar("Testing learner",nsamples);

    for( int sample=0 ; sample < nsamples ; sample++ )
    {
        myGetExample(testset, sample, input, target, weight );

        // Always call computeOutputAndCosts, since this is better
        // behaved with stateful learners
        computeOutputAndCosts(input, target, output, test_costs);

        if(testoutputs)
            testoutputs->putOrAppendRow(sample,output);

        if(testcosts)
            testcosts->putOrAppendRow(sample, test_costs);

        if(test_stats)
            test_stats->update(test_costs,weight);

        if(report_progress)
            pb->update(sample);

        entropy += test_costs[0];

        // Do some outputing
        // Do some outputing
        if( sample < 50 ) {
            cout << "---> ";
            for( int i=0; i<inputsize()-1; i++)  {
                if( (int)input[i] == vocabulary_size - 1) {
                    cout << "\\missing\\ ";
                } else  {
                    cout << (testset->getDictionary(0))->getSymbol( (int)input[i] ) << " ";
                }
            }
            cout << "\t\t " << (testset->getDictionary(0))->getSymbol( (int)target[0] ) << " p(t|r) " << safeexp( - test_costs[0] ) << endl;

            if( model_type == MODEL_TYPE_GAUSSIAN )  {
                PP<NnlmOutputLayer> p_nol;
                if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
                {
                    PLERROR("NnlmOnlineLearner::test - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
                }
                Vec candidates, probabilities;
                p_nol->getBestCandidates(output, candidates, probabilities);
                for(int i=0; i<candidates.size(); i++)  {
                    cout << "\t" << (testset->getDictionary(0))->getSymbol( (int)candidates[i] ) << " " << probabilities[i] << endl;
                }
            }
        }
        // Do some outputing - END
        // Do some outputing - END

    }

    entropy /= nsamples;
    perplexity = safeexp(entropy);

    cout << "entropy: " << entropy << " perplexity " << perplexity << endl;

}
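
The figures printed at the end follow the usual definitions: entropy is the per-sample average NLL, entropy = (1/nsamples) * sum_t NLL_t, and perplexity = exp(entropy); e.g. an average NLL of about 4.6 nats corresponds to a perplexity of about 100.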


void PLearn::NnlmOnlineLearner::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.

Implements PLearn::PLearner.

Definition at line 1024 of file NnlmOnlineLearner.cc.

References PLearn::OnlineLearningModule::bpropUpdate(), computeOutput(), computeTrainCostsFromOutputs(), PLearn::endl(), GAUSSIAN_COST_APPROX_DISCR, gaussian_model_cost, getTrainCostNames(), gradients, i, PLearn::PLearner::initTrain(), PLearn::PLearner::inputsize(), PLearn::VMat::length(), model_type, MODEL_TYPE_GAUSSIAN, modules, myGetExample(), nmodules, PLearn::PLearner::nstages, output_gradients, output_modules, output_values, outputsize(), PLERROR, reevaluateGaussianParameters(), PLearn::PLearner::report_progress, PLearn::sample(), semantic_layer_size, PLearn::PLearner::stage, PLearn::TVec< T >::subVec(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, values, and vocabulary_size.

{
    if (!initTrain())
        return;

    Vec input( inputsize()-1 );
    Vec target( 1 );
    real weight;
    Vec output( outputsize() );   // the output of the semantic layer
    Vec train_costs( getTrainCostNames().length() );
    Vec out_gradient(1,1); // the gradient wrt the cost is '1'
    Vec gradient( semantic_layer_size );
    int nsamples = train_set->length();

    // Initialize mus and sigmas using 1 pass
    reevaluateGaussianParameters();

    if(stage==0)  {
        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::train() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        p_nol->computeEmpiricalLearningRateParameters();
    }

    if( model_type == MODEL_TYPE_GAUSSIAN )  {

        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::train() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        p_nol->is_learning = true;

    }


//---------------
/*    PP<GradNNetLayerModule> p_gnn;
    if( !(p_gnn = dynamic_cast<GradNNetLayerModule*>( (OnlineLearningModule*) modules[1] ) ) )
    {
        PLERROR("NnlmOnlineLearner::train - modules[1] is not a GradNNetLayerModule");
    }
    p_gnn->printVariance();*/
//---------------

    PP<ProgressBar> pb;
    if(report_progress) {
        pb = new ProgressBar("Training", nsamples);
    }

    // *** For stages ***
    for( ; stage < nstages ; stage++ )
    {

        if(report_progress) {
            cout << "*** Stage " << stage << " ***" << endl;
            //cout << "uniform_mixture_coeff " << output_modules[0]->umc << " " << 1 - output_modules[0]->umc<< endl;
        }



        if( model_type == MODEL_TYPE_GAUSSIAN )  {

            PP<NnlmOutputLayer> p_nol;
            if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
            {
                PLERROR("NnlmOnlineLearner::train() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
            }

            cout << "global_mu " << p_nol->global_mu << endl;
            cout << "global_sigma2 " << p_nol->global_sigma2 << endl;
        }

        // * clear stats of previous epoch *
        train_stats->forget();

        // * for examples *
        for( int sample=0 ; sample < nsamples ; sample++ )
        {

            if(report_progress)
                pb->update(sample);

            // - Get example -
            myGetExample(train_set, sample, input, target, weight );

            // - Fixed part fprop -
            computeOutput(input, output);

            // - Variable part fprop - cost and gradient for this part -
            // (we don't want to duplicate some computations in gaussian model gradient evaluation)
            // In gaussian case, gradients[nmodules] is computed here.
            computeTrainCostsFromOutputs(input, output, target, train_costs );

            // - bpropUpdate -

            // Variable part
            if( model_type == MODEL_TYPE_GAUSSIAN )  {

                PP<NnlmOutputLayer> p_nol;
                if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
                {
                    PLERROR("NnlmOnlineLearner::train() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
                }

                if( gaussian_model_cost == GAUSSIAN_COST_APPROX_DISCR )  {
                    output_modules[0]->bpropUpdate( output, train_costs.subVec(1,1), out_gradient );
                    gradients[nmodules] << p_nol->ad_gradient;
                } else {  //if( gaussian_model_cost == GAUSSIAN_COST_NON_DISCR )
                    output_modules[0]->bpropUpdate( output, train_costs.subVec(0,1), out_gradient );
                    gradients[nmodules] << p_nol->nd_gradient;
                }

            } else  {
                output_modules[1]->bpropUpdate( output_values[0], train_costs, output_gradients[0], out_gradient );
                output_modules[0]->bpropUpdate( output, output_values[0].subVec( 0, vocabulary_size-1 ), gradients[nmodules], output_gradients[0] );
            }

            // Fixed (common to both models) part
            for( int i=nmodules-1 ; i>0 ; i-- ) {
                modules[i]->bpropUpdate( values[i], values[i+1], gradients[i], gradients[i+1] );
            }
            modules[0]->bpropUpdate( values[0], values[1], gradients[1] );


            // - Update stats -
            train_stats->update( train_costs );

        }// * for examples - END

        train_stats->finalize(); // finalize statistics for this epoch


        // Re-evaluate mus and sigmas (one pass) after this epoch's updates
        reevaluateGaussianParameters();


    }// *** For stages - END

    if( model_type == MODEL_TYPE_GAUSSIAN )  {

        PP<NnlmOutputLayer> p_nol;
        if( !(p_nol = dynamic_cast<NnlmOutputLayer*>( (OnlineLearningModule*) output_modules[0] ) ) )
        {
            PLERROR("NnlmOnlineLearner::train() - MODEL_TYPE_GAUSSIAN but output_modules[0] is not an NnlmOutputLayer");
        }

        p_nol->is_learning = false;
    }

}
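For context, here is a hypothetical driver showing how train() is typically reached through the PLearner interface; the option value and the VMat names (train_vmat, test_vmat) are illustrative, not taken from an actual script:

// Hypothetical usage sketch (names and values illustrative only).
PP<NnlmOnlineLearner> learner = new NnlmOnlineLearner();
learner->nstages = 5;                             // train for 5 epochs
learner->build();
learner->setTrainingSet( train_vmat );            // also calls forget() by default
learner->setTrainStatsCollector( new VecStatsCollector() );
learner->train();                                 // runs until stage == nstages

PP<VecStatsCollector> test_stats = new VecStatsCollector();
learner->test( test_vmat, test_stats, VMat(), VMat() );  // prints entropy and perplexity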


Member Data Documentation

Reimplemented from PLearn::PLearner.

Definition at line 221 of file NnlmOnlineLearner.h.

Definition at line 144 of file NnlmOnlineLearner.h.

Referenced by buildCandidates(), buildLayers(), and test().

Definition at line 132 of file NnlmOnlineLearner.h.

Referenced by build_(), and buildLayers().

--- Gaussian output model specific stuff ------------------------------

Definition at line 271 of file NnlmOnlineLearner.h.

Referenced by build_(), buildLayers(), computeCostsFromOutputs(), and train().

Definition at line 98 of file NnlmOnlineLearner.h.

Referenced by declareOptions().

Definition at line 97 of file NnlmOnlineLearner.h.

Referenced by declareOptions().

Definition at line 272 of file NnlmOnlineLearner.h.

Referenced by build_(), and buildLayers().

Definition at line 96 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Stores the gradients.

Definition at line 240 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), forget(), makeDeepCopyFromShallowCopy(), and train().

Layers of the learner. Separated into the fixed part, which computes up to the "semantic layer", and the variable part (gaussian or softmax).

Definition at line 125 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), computeOutput(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 104 of file NnlmOnlineLearner.h.

Referenced by buildCandidates(), and declareOptions().

Used in determining the C sets of candidate words for normalization in the evaluated discriminant cost.

Definition at line 110 of file NnlmOnlineLearner.h.

Referenced by build_(), buildCandidates(), and declareOptions().
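To make the role of these candidate sets concrete: in the approximated discriminant cost, the normalization that would in principle run over the whole vocabulary V is restricted to the candidate set C. In notation of my own choosing (not from the source), with g(w | context) the unnormalized gaussian score of word w:

p(w \mid \mathrm{context}) \approx \frac{g(w \mid \mathrm{context})}{\sum_{w' \in C} g(w' \mid \mathrm{context})}, \qquad C \subset V, \; |C| \ll |V|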

Used for loops.

Definition at line 264 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), computeOutput(), forget(), outputsize(), and train().

Definition at line 245 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), forget(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 265 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and forget().

For the second, variable part of the model (which starts from 'r', the semantic layer, onward).

Definition at line 244 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), computeCostsFromOutputs(), computeTrainCostsFromOutputs(), forget(), makeDeepCopyFromShallowCopy(), myGetExample(), and train().

Definition at line 105 of file NnlmOnlineLearner.h.

Referenced by declareOptions().

int PLearn::NnlmOnlineLearner::semantic_layer_size

Size of the semantic layer.

Definition at line 79 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), declareOptions(), and train().

Holds candidates.

Definition at line 143 of file NnlmOnlineLearner.h.

Referenced by buildCandidates(), and buildLayers().

Number of candidates to use from different sources in the gaussian model when we use the approx_discriminant cost.

Definition at line 103 of file NnlmOnlineLearner.h.

Referenced by buildCandidates(), and declareOptions().

Definition at line 87 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 86 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 88 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 89 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 115 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

--- Softmax output model specific stuff -------------------------------

Definition at line 114 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 116 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 117 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 95 of file NnlmOnlineLearner.h.

Referenced by build_(), and declareOptions().

Define behavior.

--- Gaussian output model specific stuff ------------------------------

Definition at line 94 of file NnlmOnlineLearner.h.

Referenced by build_(), and declareOptions().

Defines which model is used.

Definition at line 69 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 70 of file NnlmOnlineLearner.h.

Referenced by build_(), and declareOptions().

Used in determining the C sets of candidate words for normalization in the evaluated discriminant cost.

--- Gaussian output model specific stuff ------------------------------

Definition at line 139 of file NnlmOnlineLearner.h.

Referenced by buildCandidates().

Stores the input and output values of the functions.

Definition at line 238 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), computeOutput(), forget(), makeDeepCopyFromShallowCopy(), outputsize(), and train().
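Together with gradients, this buffer brackets the module stack of the fixed part: values has nmodules+1 slots, values[i] being the input of modules[i] and values[i+1] its output, with gradients[i] holding the cost gradient with respect to values[i]. The following illustration (not a verbatim excerpt) mirrors the fprop in computeOutput() and the bpropUpdate loop in train():

// Illustration of the buffer layout (not a verbatim excerpt).
for( int i=0 ; i<nmodules ; i++ )                     // forward pass
    modules[i]->fprop( values[i], values[i+1] );
for( int i=nmodules-1 ; i>0 ; i-- )                   // backward pass with updates
    modules[i]->bpropUpdate( values[i], values[i+1],
                             gradients[i], gradients[i+1] );
modules[0]->bpropUpdate( values[0], values[1], gradients[1] );  // first module needs no input gradient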

int PLearn::NnlmOnlineLearner::vocabulary_size

NNLM related - determined from train_set.

Definition at line 131 of file NnlmOnlineLearner.h.

Referenced by build_(), buildCandidates(), buildLayers(), computeCostsFromOutputs(), computeTrainCostsFromOutputs(), myGetExample(), test(), and train().

Size of the real distributed word representations.

--- Fixed (same in both output models) part ----------------------------------

Definition at line 76 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 83 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Neural part parameters.

Definition at line 82 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 84 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().

Definition at line 85 of file NnlmOnlineLearner.h.

Referenced by buildLayers(), and declareOptions().


The documentation for this class was generated from the following files:
NnlmOnlineLearner.h
NnlmOnlineLearner.cc