PLearn 0.1
PLearn::NnlmWordRepresentationLayer Class Reference

Implements the word representation layer for the online NNLM. More...

#include <NnlmWordRepresentationLayer.h>

Inheritance diagram for PLearn::NnlmWordRepresentationLayer: [diagram not shown]
Collaboration diagram for PLearn::NnlmWordRepresentationLayer: [diagram not shown]


Public Member Functions

 NnlmWordRepresentationLayer ()
 Default constructor.
virtual void fprop (const Vec &input, Vec &output) const
 Given the input, compute the output (possibly resizing it appropriately).
virtual void bpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient)
 Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).
virtual void forget ()
 Resets the parameters to the state they would be in before starting training.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual NnlmWordRepresentationLayer * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int vocabulary_size
int word_representation_size
int context_size
real start_learning_rate
 learning_rate = start_learning_rate / (1 + decrease_constant*t), where t is the number of updates since the beginning
real decrease_constant

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Member Functions

virtual void resetWeights ()

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Private Types

typedef OnlineLearningModule inherited

Private Member Functions

void build_ ()
 This does the actual building.

Private Attributes

int step_number
real learning_rate
Mat weights

Detailed Description

Implements the word representation layer for the online NNLM.

Best explained as a one-hot GradNNetLayer repeated 'context size' (i.e. input_size) times.

Some variables explained:

- input_size -> 'context size'
- word_representation_size -> size of the real distributed word representation
- output_size -> word_representation_size * input_size
- virtual_input_size -> input_size * vocabulary_size
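
As a rough illustration (a standalone C++ sketch, not the PLearn API), the forward computation amounts to concatenating the weight-matrix rows selected by the context word indices, one one-hot lookup per context position:

    #include <cstddef>
    #include <vector>

    typedef std::vector< std::vector<double> > Matrix;  // vocabulary_size rows

    // Sketch of the lookup: the output is the concatenation of the rows of
    // 'weights' indexed by the context words, so its length is
    // context.size() * word_representation_size.
    std::vector<double> lookup(const Matrix& weights, const std::vector<int>& context)
    {
        const std::size_t d = weights[0].size();        // word_representation_size
        std::vector<double> output(context.size() * d);
        for (std::size_t i = 0; i < context.size(); ++i)
            for (std::size_t j = 0; j < d; ++j)
                output[i * d + j] = weights[context[i]][j];
        return output;
    }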


Definition at line 66 of file NnlmWordRepresentationLayer.h.


Member Typedef Documentation

typedef OnlineLearningModule PLearn::NnlmWordRepresentationLayer::inherited [private]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 68 of file NnlmWordRepresentationLayer.h.


Constructor & Destructor Documentation

PLearn::NnlmWordRepresentationLayer::NnlmWordRepresentationLayer ( )

Default constructor.

Definition at line 51 of file NnlmWordRepresentationLayer.cc.

PLearn::NnlmWordRepresentationLayer::NnlmWordRepresentationLayer() :
    OnlineLearningModule(),
    vocabulary_size( -1 ),
    word_representation_size( -1 ),
    context_size( -1 ),
    start_learning_rate( 0.001 ),
    decrease_constant( 0 ),
    step_number( 0 ),
    learning_rate( 0.0 )    
{
    // ### You may (or not) want to call build_() to finish building the object
    // ### (doing so assumes the parent classes' build_() have been called too
    // ### in the parent classes' constructors, something that you must ensure)
}
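
A minimal usage sketch, assuming the usual PLearn pattern of setting build options on a fresh object and then calling build(); the option values here are hypothetical:

    PP<NnlmWordRepresentationLayer> layer = new NnlmWordRepresentationLayer();
    layer->vocabulary_size          = 10000;  // hypothetical values
    layer->word_representation_size = 50;
    layer->context_size             = 3;
    layer->start_learning_rate      = 0.001;
    layer->build();  // derives input_size and output_size, initializes weights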

Member Function Documentation

string PLearn::NnlmWordRepresentationLayer::_classname_ ( ) [static]


Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

OptionList & PLearn::NnlmWordRepresentationLayer::_getOptionList_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

RemoteMethodMap & PLearn::NnlmWordRepresentationLayer::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

bool PLearn::NnlmWordRepresentationLayer::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

Object * PLearn::NnlmWordRepresentationLayer::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

void PLearn::NnlmWordRepresentationLayer::_static_initialize_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

void PLearn::NnlmWordRepresentationLayer::bpropUpdate ( const Vec & input,
const Vec & output,
const Vec & output_gradient
) [virtual]

Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).

Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. N.B. A DEFAULT IMPLEMENTATION IS PROVIDED IN THE SUPER-CLASS, WHICH JUST CALLS bpropUpdate(input, output, input_gradient, output_gradient) AND IGNORES INPUT GRADIENT.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 213 of file NnlmWordRepresentationLayer.cc.

References decrease_constant, i, PLearn::OnlineLearningModule::input_size, j, learning_rate, PLearn::OnlineLearningModule::output_size, PLERROR, PLearn::TVec< T >::size(), start_learning_rate, step_number, vocabulary_size, weights, and word_representation_size.

{

    int in_size = input.size();
    int out_size = output.size();
    int og_size = output_gradient.size();

    // size check
    if( in_size != input_size )
    {
        PLERROR("NnlmWordRepresentationLayer::bpropUpdate: 'input.size()' should be equal\n"
                " to 'input_size' (%i != %i)\n", in_size, input_size);
    }
    if( out_size != output_size )
    {
        PLERROR("NnlmWordRepresentationLayer::bpropUpdate: 'output.size()' should be"
                " equal\n"
                " to 'output_size' (%i != %i)\n", out_size, output_size);
    }
    if( og_size != output_size )
    {
        PLERROR("NnlmWordRepresentationLayer::bpropUpdate: 'output_gradient.size()'"
                " should\n"
                " be equal to 'output_size' (%i != %i)\n",
                og_size, output_size);
    }


    learning_rate = start_learning_rate / ( 1.0 + decrease_constant * step_number);


    // magnitude of index check and update
    for( int i=0; i<input_size; i++)  {
      if( input[i] >= vocabulary_size  || input[i] < 0 )
      {
          PLERROR("NnlmWordRepresentationLayer::bpropUpdate: 'input[%i]' should be smaller\n"
                  " than 'vocabulary_size' (%i !< %i)\n",
          i, (int) input[i], vocabulary_size);
      }
      // Note: an earlier version (commented out below) indexed the columns with
      // j % word_representation_size over the whole output, which would fold the
      // gradients of every context position onto this word's representation.
      /*for(int j=0; j < output_size; j++)  {
          weights( (int) input[i], j%word_representation_size) -= learning_rate * output_gradient[j];
      }*/

      // Update only the gradient slice corresponding to context position i.
      for(int j=0; j < word_representation_size; j++)  {
          weights( (int) input[i], j) -= learning_rate * output_gradient[j+i*word_representation_size];
      }

    }

    step_number++;

}
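
The update itself is plain stochastic gradient descent on the selected rows, with the decayed learning rate learning_rate = start_learning_rate / (1 + decrease_constant * step_number). A standalone sketch of one such step (plain C++ with hypothetical names, not the PLearn API):

    #include <cstddef>
    #include <vector>

    typedef std::vector< std::vector<double> > Matrix;

    // One update step: only the i-th slice of output_gradient touches the
    // row of 'weights' indexed by context[i].
    void update(Matrix& weights, const std::vector<int>& context,
                const std::vector<double>& output_gradient,
                double start_learning_rate, double decrease_constant,
                int& step_number)
    {
        const std::size_t d = weights[0].size();  // word_representation_size
        const double lr = start_learning_rate / (1.0 + decrease_constant * step_number);
        for (std::size_t i = 0; i < context.size(); ++i)
            for (std::size_t j = 0; j < d; ++j)
                weights[context[i]][j] -= lr * output_gradient[i * d + j];
        ++step_number;
    }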


void PLearn::NnlmWordRepresentationLayer::build ( ) [virtual]

Post-constructor.

The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 146 of file NnlmWordRepresentationLayer.cc.

References PLearn::OnlineLearningModule::build(), and build_().


void PLearn::NnlmWordRepresentationLayer::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 110 of file NnlmWordRepresentationLayer.cc.

References context_size, forget(), PLearn::OnlineLearningModule::input_size, PLearn::OnlineLearningModule::output_size, PLERROR, PLearn::TMat< T >::size(), vocabulary_size, weights, and word_representation_size.

Referenced by build().

{

    // *** Some variables are connected... 
    // for now we overwrite these
    input_size = context_size;
    output_size = context_size * word_representation_size;


    // *** A few sanity checks
    if( input_size <= 0 )
    {
        PLERROR("NnlmWordRepresentationLayer::build_: 'input_size' <= 0 (%i).\n"
                "You should set it to a positive integer.\n", input_size);
    }
    else if( word_representation_size * context_size != output_size )
    {
        PLERROR("NnlmWordRepresentationLayer::build_: 'output_size' inconsistent with\n"
                  " 'word_representation_size * input_size': %i != ( %i * %i)\n"
                  , output_size, word_representation_size, input_size);
    }
    else if( vocabulary_size <= 0 )
    {
        PLERROR("NnlmWordRepresentationLayer::build_: 'vocabulary_size' <= 0(%i).\n"
                  , vocabulary_size);
    }


    // *** Initialize weights if not loaded
    if( weights.size() == 0 )   {
        forget();
    }

}


string PLearn::NnlmWordRepresentationLayer::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

void PLearn::NnlmWordRepresentationLayer::declareOptions ( OptionList & ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 66 of file NnlmWordRepresentationLayer.cc.

References PLearn::OptionBase::buildoption, context_size, PLearn::declareOption(), PLearn::OnlineLearningModule::declareOptions(), decrease_constant, PLearn::OptionBase::learntoption, start_learning_rate, step_number, vocabulary_size, weights, and word_representation_size.

{

    declareOption(ol, "vocabulary_size",
                  &NnlmWordRepresentationLayer::vocabulary_size,
                  OptionBase::buildoption,
                  "size of vocabulary used - defines the virtual input size");

    declareOption(ol, "word_representation_size",
                  &NnlmWordRepresentationLayer::word_representation_size,
                  OptionBase::buildoption,
                  "size of the real distributed word representation");

    declareOption(ol, "context_size",
                  &NnlmWordRepresentationLayer::context_size,
                  OptionBase::buildoption,
                  "size of word context");

    declareOption(ol, "start_learning_rate",
                  &NnlmWordRepresentationLayer::start_learning_rate,
                  OptionBase::buildoption,
                  "Learning-rate of stochastic gradient optimization");

    declareOption(ol, "decrease_constant",
                  &NnlmWordRepresentationLayer::decrease_constant,
                  OptionBase::buildoption,
                  "Decrease constant of stochastic gradient optimization");

    // * Learnt

    declareOption(ol, "step_number", &NnlmWordRepresentationLayer::step_number,
                  OptionBase::learntoption,
                  "The step number, incremented after each update.");

    declareOption(ol, "weights", &NnlmWordRepresentationLayer::weights,
                  OptionBase::learntoption,
                  "Input weights of the neurons (one row per neuron, no bias).");


    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);

}
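
For reference, these build options would normally be set from a PLearn script; a hypothetical fragment (values invented for illustration):

    NnlmWordRepresentationLayer(
        vocabulary_size = 10000;
        word_representation_size = 50;
        context_size = 3;
        start_learning_rate = 0.001;
        decrease_constant = 0;
    )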


static const PPath& PLearn::NnlmWordRepresentationLayer::declaringFile ( ) [inline, static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 157 of file NnlmWordRepresentationLayer.h.

NnlmWordRepresentationLayer * PLearn::NnlmWordRepresentationLayer::deepCopy ( CopiesMap & copies ) const [virtual]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

void PLearn::NnlmWordRepresentationLayer::forget ( ) [virtual]

Resets the parameters to the state they would be in before starting training.

Note that this method is necessarily called from build().

Implements PLearn::OnlineLearningModule.

Definition at line 286 of file NnlmWordRepresentationLayer.cc.

References PLearn::OnlineLearningModule::random_gen, resetWeights(), step_number, and weights.

Referenced by build_().

{

    // *** Weights

    resetWeights();

    // TODO add an option for the seed
    if( !random_gen )   {
        random_gen = new PRandom( 1 );
    }

    //real r = 1.0 / sqrt(input_size);
    //random_gen->fill_random_uniform(weights,-r,r);
    random_gen->fill_random_uniform(weights,-1.0,1.0);
    // *** 
    step_number = 0;


}
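
Note that the commented-out alternative would scale the initialization range by 1/sqrt(input_size), a common heuristic for keeping initial activation magnitudes independent of layer width; the active code instead uses a fixed uniform range of [-1, 1].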


void PLearn::NnlmWordRepresentationLayer::fprop ( const Vec & input,
Vec & output
) const [virtual]

Given the input, compute the output (possibly resizing it appropriately).

In our case, we just do a lookup in the weights matrix, for each word in the context.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 166 of file NnlmWordRepresentationLayer.cc.

References i, PLearn::OnlineLearningModule::input_size, PLearn::OnlineLearningModule::output_size, PLERROR, PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), vocabulary_size, weights, and word_representation_size.

{

    // TODO only do these in debug
    // *** Sanity checks 

    // check the input holds input_size hot indexes
    int in_size = input.size();
    if( in_size != input_size )
    {
        PLERROR("NnlmWordRepresentationLayer::fprop: 'input.size()' should be equal\n"
                " to 'input_size' (%i != %i)\n", in_size, input_size);
    }
    //
    int out_size = output.size();
    if( out_size != output_size )
    {
        PLERROR("NnlmWordRepresentationLayer::fprop: 'output.size()' should be equal\n"
                " to 'output_size' (%i != %i)\n", out_size, output_size);
    }


    // magnitude of index check
    for( int i=0; i<input_size; i++)  {
      if( input[i] >= vocabulary_size || input[i] < 0 )
      {
          PLERROR("NnlmWordRepresentationLayer::fprop: 'input[%i]' should be smaller\n"
                  " than 'vocabulary_size' (%i !< %i)\n",
          i, (int) input[i], vocabulary_size);
      }

      output.subVec( i*word_representation_size, word_representation_size ) << weights( (int) input[i] );
    }


}
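
A hypothetical call, assuming a built layer with context_size = 3 (so input holds three vocabulary indices); a sketch, not a tested example:

    Vec input( layer->input_size );    // one vocabulary index per context word
    input[0] = 4; input[1] = 7; input[2] = 2;

    Vec output( layer->output_size );  // context_size * word_representation_size
    layer->fprop( input, output );     // concatenated word representations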


OptionList & PLearn::NnlmWordRepresentationLayer::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

OptionMap & PLearn::NnlmWordRepresentationLayer::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

RemoteMethodMap & PLearn::NnlmWordRepresentationLayer::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmWordRepresentationLayer.cc.

void PLearn::NnlmWordRepresentationLayer::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 153 of file NnlmWordRepresentationLayer.cc.

References PLearn::deepCopyField(), PLearn::OnlineLearningModule::makeDeepCopyFromShallowCopy(), and weights.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(weights, copies);

}


void PLearn::NnlmWordRepresentationLayer::resetWeights ( ) [protected, virtual]

Definition at line 359 of file NnlmWordRepresentationLayer.cc.

References PLearn::TMat< T >::fill(), PLearn::TMat< T >::resize(), vocabulary_size, weights, and word_representation_size.

Referenced by forget().



Member Data Documentation

StaticInitializer PLearn::NnlmWordRepresentationLayer::_static_initializer_ [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 157 of file NnlmWordRepresentationLayer.h.

int PLearn::NnlmWordRepresentationLayer::context_size

Definition at line 76 of file NnlmWordRepresentationLayer.h.

Referenced by build_(), and declareOptions().

real PLearn::NnlmWordRepresentationLayer::decrease_constant

Definition at line 81 of file NnlmWordRepresentationLayer.h.

Referenced by bpropUpdate(), and declareOptions().

real PLearn::NnlmWordRepresentationLayer::learning_rate [private]

Definition at line 185 of file NnlmWordRepresentationLayer.h.

Referenced by bpropUpdate().

real PLearn::NnlmWordRepresentationLayer::start_learning_rate

learning_rate = start_learning_rate / (1 + decrease_constant*t), where t is the number of updates since the beginning

Definition at line 80 of file NnlmWordRepresentationLayer.h.

Referenced by bpropUpdate(), and declareOptions().

int PLearn::NnlmWordRepresentationLayer::step_number [private]

Definition at line 184 of file NnlmWordRepresentationLayer.h.

Referenced by bpropUpdate(), declareOptions(), and forget().


The documentation for this class was generated from the following files:

NnlmWordRepresentationLayer.h
NnlmWordRepresentationLayer.cc