PLearn 0.1
PLearn::NnlmOutputLayer Class Reference

Implements a gaussian-based output layer for the Neural Network Language Model. More...

#include <NnlmOutputLayer.h>

Inheritance diagram for PLearn::NnlmOutputLayer: (graph omitted)
Collaboration diagram for PLearn::NnlmOutputLayer: (graph omitted)


Public Types

enum  { COST_DISCR = 0, COST_APPROX_DISCR = 1, COST_NON_DISCR = 2 }
enum  { LEARNING_DISCRIMINANT = 0, LEARNING_EMPIRICAL = 1 }

Public Member Functions

 NnlmOutputLayer ()
 Default constructor.
void resetParameters ()
 Resizes variables and sets pretty much everything back to a 'zero' value.
void resetAllClassVars ()
 Used for initializing s_sumI, sumI, sumR, sumR2, as well as pi, mu and sigma2 to the max likelihood values.
void updateClassVars (const int the_target, const Vec &the_input)
void applyAllClassVars ()
void computeEmpiricalLearningRateParameters ()
 Computes the word-specific empirical learning rates. MUST be called after a valid call to applyClassCounts()
void setTarget (int the_target) const
 Sets t, the target.
void setContext (int the_context) const
 Sets the context. The Candidates set of the approximated discriminant cost is determined from the context.
void setCost (int the_cost)
 Sets the cost used in the fprop()
void setLearning (int the_learning)
virtual void fprop (const Vec &input, Vec &output) const
 Computes the 'cost' for the 'target' given the input; computes the output (possibly resizing it appropriately).
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient)
 Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).
virtual void forget ()
 Resets the parameters to the state they would be in BEFORE starting training.
void compute_nl_p_rt (const Vec &input, Vec &output) const
 Computes -log( p(r,t) ).
void compute_approx_nl_p_t_r (const Vec &input, Vec &output) const
 Computes the approximation -log( p(t|r) ) using only some candidates for normalization.
void compute_nl_p_t_r (const Vec &input, Vec &output) const
 Computes -log( p(t|r) )
void getBestCandidates (const Vec &input, Vec &candidate_tags, Vec &probabilities) const
 Returns the best candidates according to compute_nl_p_t_r.
void computeNonDiscriminantGradient () const
 Gradients with respect to input.
void computeApproxDiscriminantGradient () const
 MUST be called after the corresponding fprop.
void computeDiscriminantGradient () const
 MUST be called after the corresponding fprop.
void addCandidateContribution (int c) const
void applyMuAndSigmaEmpiricalUpdate (const Vec &input) const
 Empirical (maximum likelihood) update of mu and sigma.
void applyMuGradient () const
 Compute and apply gradients of different costs with respect to mus.
void applyMuTargetGradient () const
 MUST be called after the corresponding fprop.
void applyMuCandidateGradient (int c) const
 MUST be called after the corresponding fprop.
void applySigmaGradient () const
 Compute and apply gradients of different costs with respect to sigmas.
void applySigmaTargetGradient () const
void applySigmaCandidateGradient (int c) const
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual NnlmOutputLayer * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int target_cardinality
 specifies the range of the values of 'target'
int context_cardinality
 specifies the range of the values of 'context' (e.g. including the additional 'missing' tag)
real sigma2min
 minimal value $ \sigma^2 $ can have
real dl_start_learning_rate
 Discriminant learning (of $ \mu_u $ and $ \Sigma_u $) - dl.
real dl_decrease_constant
real el_start_discount_factor
 Empirical learning (of $ \mu_u $ and $ \Sigma_u $) - el.
int step_number
 keeps track of updates
real umc
 We use a mixture with a uniform to prevent negligible probabilities which cause gradient explosions.
Vec pi
 pi(i) = empirical mean of the indicator c==i, i.e. p(c)
Mat mu
 Gaussian parameters - p_g(r|c)
Mat sigma2
Vec global_mu
Vec global_sigma2
int s_sumI
 EMPIRICAL LEARNING Intermediaries.
TVec< int > sumI
Mat sumR
Mat sumR2
Vec global_sumR
Vec global_sumR2
TVec< int > shared_candidates
 Holds candidates.
TVec< TVec< int > > candidates
int learning
int cost
 Must be set before calling fprop.
int target
 the current word -> we use its parameters to compute output
int the_real_target
int context
real s
real g_exponent
real log_g_det_covariance
real log_g_normalization
Vec vec_log_p_rg_t
Vec vec_log_p_r_t
Vec vec_log_p_rt
real log_sum_p_ru
Mat beta
Vec nd_gradient
Vec ad_gradient
Vec fd_gradient
Vec bill
Vec bob
Vec gradient_log_tmp
Vec gradient_log_tmp_pos
Vec gradient_log_tmp_neg
Vec el_start_learning_rate
Vec el_decrease_constant
Vec el_last_update
real el_dr
 The original way of computing the mus and sigmas (e.g. memorize r and then divide) slowed learning down with time; we use this discount rate instead.
real dl_lr
bool is_learning

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Private Types

typedef OnlineLearningModule inherited

Private Member Functions

void build_ ()
 This does the actual building.

Detailed Description

Implements a gaussian-based output layer for the Neural Network Language Model.

Given 'r' the output of the previous layer (the representation of the input), and 't' the target class, this module models p(r|t) as a mixture of a gaussian model and a uniform: p(r|t) = umc * p_g(r|t) + (1-umc) * p_u(r|t). We have p_u(r|t) = 1.0 / 2^input_size.

The output is then computed from p(r,t) = p(r|t) * p(t). Depending on the cost, the output is the negative log of p(r,t) (non-discriminant cost), of p(t|r) (discriminant cost), or of an approximation of p(t|r) normalized only over a candidate set (approximated discriminant cost).

Learning of $ \mu_u $ and $ \Sigma_u $ can be discriminant (gradient descent on the chosen cost) or empirical (maximum likelihood updates of the class means and variances).
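For reference, here is a minimal standalone sketch (plain C++, not PLearn code; all names are hypothetical) of the mixture density described above, evaluated naively. The class itself performs the equivalent computation in the log domain, in compute_nl_p_rt(), to avoid underflow.

#include <cmath>
#include <vector>

// -log p(r,t) = -log( ( umc * p_g(r|t) + (1-umc) * p_u(r|t) ) * p(t) )
// with p_g a diagonal-covariance Gaussian and p_u(r|t) = 1 / 2^d.
double neg_log_p_rt(const std::vector<double>& r,
                    const std::vector<double>& mu_t,      // mean for class t
                    const std::vector<double>& sigma2_t,  // diagonal variances for class t
                    double pi_t,                          // p(t)
                    double umc)                           // uniform mixture coefficient
{
    const double PI = 3.14159265358979323846;
    const double d  = static_cast<double>(r.size());
    double exponent = 0.0, log_det = 0.0;
    for (std::size_t i = 0; i < r.size(); ++i) {
        const double s = r[i] - mu_t[i];
        exponent += s * s / sigma2_t[i];
        log_det  += std::log(sigma2_t[i]);
    }
    const double log_p_g = -0.5 * (exponent + d * std::log(2.0 * PI) + log_det);
    const double p_r_t   = umc * std::exp(log_p_g) + (1.0 - umc) / std::pow(2.0, d);
    return -(std::log(p_r_t) + std::log(pi_t));
}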


Definition at line 70 of file NnlmOutputLayer.h.


Member Typedef Documentation

typedef OnlineLearningModule PLearn::NnlmOutputLayer::inherited [private]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 72 of file NnlmOutputLayer.h.


Member Enumeration Documentation

anonymous enum
Enumerator:
COST_DISCR 
COST_APPROX_DISCR 
COST_NON_DISCR 

Definition at line 271 of file NnlmOutputLayer.h.

{COST_DISCR=0, COST_APPROX_DISCR=1, COST_NON_DISCR=2};  // ### Watchout... also defined in NnlmOnlineLearner.
anonymous enum
Enumerator:
LEARNING_DISCRIMINANT 
LEARNING_EMPIRICAL 

Definition at line 272 of file NnlmOutputLayer.h.

{LEARNING_DISCRIMINANT=0, LEARNING_EMPIRICAL=1};        // Granted, this is not good.

Constructor & Destructor Documentation

PLearn::NnlmOutputLayer::NnlmOutputLayer ( )

Default constructor.

Definition at line 70 of file NnlmOutputLayer.cc.

                                 :
    OnlineLearningModule(),
    target_cardinality( -1 ),
    context_cardinality( -1 ),
    sigma2min( 0.001 ), // ### VERY IMPORTANT!!!
    dl_start_learning_rate( 0.0 ),
    dl_decrease_constant( 0.0 ),
    el_start_discount_factor( 0.01 ), // ### VERY IMPORTANT!!!
    step_number( 0 ),
    umc( 0.999999 ), // ###
    learning( LEARNING_DISCRIMINANT ),
    cost( COST_DISCR ),
    target( -1 ),
    the_real_target( -1 ),
    context( -1 ),
    s( 0.0 ),
    g_exponent( 0.0 ),
    log_g_det_covariance( -REAL_MAX ),
    log_g_normalization( -REAL_MAX ),
    log_sum_p_ru( -REAL_MAX ),
    is_learning( false )
{
    // ### You may (or not) want to call build_() to finish building the object
    // ### (doing so assumes the parent classes' build_() have been called too
    // ### in the parent classes' constructors, something that you must ensure)
}

Member Function Documentation

string PLearn::NnlmOutputLayer::_classname_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmOutputLayer.cc.

OptionList & PLearn::NnlmOutputLayer::_getOptionList_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmOutputLayer.cc.

RemoteMethodMap & PLearn::NnlmOutputLayer::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmOutputLayer.cc.

bool PLearn::NnlmOutputLayer::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmOutputLayer.cc.

Object * PLearn::NnlmOutputLayer::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmOutputLayer.cc.

void PLearn::NnlmOutputLayer::_static_initialize_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmOutputLayer.cc.

void PLearn::NnlmOutputLayer::addCandidateContribution ( int  c) const

Definition at line 870 of file NnlmOutputLayer.cc.

References beta, gradient_log_tmp_neg, gradient_log_tmp_pos, i, PLearn::OnlineLearningModule::input_size, PLearn::logadd(), pi, PLERROR, PLearn::safelog(), and vec_log_p_rg_t.

Referenced by computeApproxDiscriminantGradient(), and computeDiscriminantGradient().

{
    for(int i=0; i<input_size; i++) {
        if( beta(c,i) > 0)  {
            gradient_log_tmp_pos[i] = logadd( gradient_log_tmp_pos[i], 
                    vec_log_p_rg_t[c] + safelog( beta(c,i) ) +  safelog( pi[c] ) );
        } else  {
            gradient_log_tmp_neg[i] = logadd( gradient_log_tmp_neg[i], 
                    vec_log_p_rg_t[c] + safelog( -beta(c,i) ) +  safelog( pi[c] ) );
        }

        #ifdef BOUNDCHECK
        if( isnan(gradient_log_tmp_pos[i]) || isnan(gradient_log_tmp_neg[i]) ) {
          PLERROR("NnlmOutputLayer::computeApproxDiscriminantGradient - gradient_log_tmp_pos or gradient_log_tmp_neg is NAN.\n");
        }
        #endif
    }
}


void PLearn::NnlmOutputLayer::applyAllClassVars ( )

Definition at line 356 of file NnlmOutputLayer.cc.

References PLearn::endl(), global_mu, global_sigma2, global_sumR, global_sumR2, i, PLearn::OnlineLearningModule::input_size, mu, pi, PLERROR, s_sumI, sigma2, sigma2min, sumI, sumR, and target_cardinality.

{



    // ### global values
    for(int i=0; i<input_size; i++) {
        global_mu[i] = global_sumR[i] / (real) s_sumI;

        // Divide by (n-1) instead of n
        global_sigma2[i] = ( (real) s_sumI * global_mu[i] * global_mu[i] + 
                  global_sumR2[i] - 2.0 * global_mu[i] * global_sumR[i]  ) / (s_sumI - 1);

        if(global_sigma2[i]<sigma2min) {
            cout << "NnlmOutputLayer::applyAllClassVars() -> global_sigma2[i]<sigma2min" << endl;
            global_sigma2[i] = sigma2min;
        }

    } // for input_size
    // ### global values



    for( int t=0; t<target_cardinality; t++ ) {

        #ifdef BOUNDCHECK
        if( sumI[ t ] <= 1 )  {
            PLERROR("NnlmOutputLayer::applyAllClassVars - sumI[ %i ] <= 1\n", t);
        }
        #endif

        for(int i=0; i<input_size; i++) {
            pi[t] = (real) sumI[ t ] / s_sumI;
            mu( t, i ) = sumR( t, i ) / (real) sumI[ t ];

    // ### global values
/*
            // Divide by (n-1) instead of n
            sigma2( t, i ) = ( sumI[ t ] * mu(t, i) * mu(t, i) + 
                     sumR2(t, i) - 2.0 * mu(t, i) * sumR(t, i)  ) / (sumI[ t ] - 1);

            if(sigma2( t, i )<sigma2min) {
                //cout << "***" << t << "***" << sumI[ t ] << " sur " << s_sumI << endl;
                //cout << "NnlmOutputLayer::applyAllClassVars() -> sigma2(" << t << "," << i <<") "
                //    << sigma2(t, i) <<" < sigma2min(" << sigma2min <<")! Setting to sigma2min." <<endl;

                sigma2( t, i ) = sigma2min;
            }
*/
            sigma2( t, i ) = global_sigma2[i];


    // ### global values

        } // for input_size

/*        cout << "***" << t << "***" << sumI[ t ] << " sur " << s_sumI << endl;
        cout << mu( t ) << endl;
        cout << sigma2( t ) << endl;*/

    } // for target_cardinality



}
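The one-pass variance used above is the usual unbiased estimate rewritten in terms of the accumulated sums: with $ n $ the count (s_sumI or sumI[t]), $ \mu = \mathrm{sumR}/n $ and $ \mathrm{sumR2} = \sum_k r_k^2 $,

$ \sum_k (r_k - \mu)^2 = \mathrm{sumR2} - 2\mu\,\mathrm{sumR} + n\mu^2 $, hence $ \hat\sigma^2 = ( n\mu^2 + \mathrm{sumR2} - 2\mu\,\mathrm{sumR} ) / (n-1) $,

which is exactly the global_sigma2 expression in the code (dividing by $ n-1 $ rather than $ n $, as the comment notes).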


void PLearn::NnlmOutputLayer::applyMuAndSigmaEmpiricalUpdate ( const Vec input) const

mu and sigma updates

empirical

Definition at line 988 of file NnlmOutputLayer.cc.

References el_last_update, PLearn::endl(), global_mu, global_sigma2, global_sumR, global_sumR2, i, PLearn::OnlineLearningModule::input_size, mu, PLERROR, s_sumI, sigma2, sigma2min, sumI, sumR, sumR2, and target.

Referenced by fprop().

{
    // *** Update counts *** 
    for(int i=0; i<input_size; i++) {
        s_sumI++;
        sumI[ target ]++;
        sumR( target, i ) += input[i];
        sumR2( target, i ) += input[i]*input[i];

        // ### for a global_sigma2
        global_sumR[i] += input[i];
        global_sumR2[i] += input[i]*input[i];
        // ### for a global_sigma2

    }

    // *** Intermediate values ***
    int n_ex_since_last_update = s_sumI - (int)el_last_update[target];
    Vec old_mu;
    old_mu << mu(target);
    el_last_update[target] = sumI[ target ];


    // *** Compute learning rate ***
    //real el_lr = el_start_learning_rate[target] / ( 1.0 + sumI[target] * el_decrease_constant[target] );
    //cout << "el_lr " << el_lr << endl;

    // *** Update mu ***
    for(int i=0; i<input_size; i++) {
        mu( target, i ) = sumR( target, i ) / sumI[ target ];
        //mu( target, i ) = (1.0-el_lr) * mu( target, i ) + el_lr * input[i];

        // ### for a global_sigma2
        global_mu[i] = global_sumR[i] / (real) s_sumI;
    }

    // *** Update sigma ***
    for(int i=0; i<input_size; i++) {

        // ### for a global_sigma2
        // Divide by (n-1) instead of n
        global_sigma2[i] = ( (real) s_sumI * global_mu[i] * global_mu[i] + 
                  global_sumR2[i] - 2.0 * global_mu[i] * global_sumR[i]  ) / (s_sumI - 1);

/*        sigma2( target, i ) = (sumI[target]*mu(target, i)*mu(target, i) + sumR2(target,i) -2.0 * mu(target, i) * sumR(target, i) ) / 
                                (sumI[target]-1);
*/
          sigma2( target, i ) = global_sigma2[i];


      // ### for a global_sigma2


        // Add regularizer to compensate for the frequency at which the word is seen
        // TODO
        // old_mu

        // Enforce minimal sigma
        if(sigma2( target, i )<sigma2min) {
            cout << "<sigma2min!" << endl;
            sigma2( target, i ) = sigma2min;
        }

        if( isnan( sigma2( target, i ) ) ) {
          PLERROR( "NnlmOutputLayer::applyMuAndSigmaEmpiricalUpdate - isnan( sigma2( target, i ) )!\n" );
        }
    }

    // Update uniform mixture coefficient
    //sum_log_p_g_r = logadd( sum_log_p_g_r, log_p_g_r );
    //umc = safeexp( sum_log_p_g_r ) / s_sumI;
}


void PLearn::NnlmOutputLayer::applyMuCandidateGradient ( int  c) const

MUST be called after the corresponding fprop.

Definition at line 1160 of file NnlmOutputLayer.cc.

References beta, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, mu, pi, PLearn::safeexp(), PLearn::safelog(), and vec_log_p_rg_t.

Referenced by applyMuGradient().

{
//    Vec bill( input_size );

    Vec mu_gradient(input_size);

    for( int i=0; i<input_size; i++ ) {
        if( beta(c,i) > 0.0 ) {
            mu_gradient[i] = safeexp( 
                safelog( pi[c] ) + vec_log_p_rg_t[c] + safelog( beta(c,i) ) - log_sum_p_ru );
        } else  {
            mu_gradient[i] = - safeexp( 
                safelog( pi[c] ) + vec_log_p_rg_t[c] + safelog( -beta(c,i) ) - log_sum_p_ru );
        }
        mu(c,i) -= dl_lr * mu_gradient[i];

//bill[i] = - dl_lr * mu_gradient[i];
    }
//cout << "MU candidate GRADIENT " << bill << endl;
}


void PLearn::NnlmOutputLayer::applyMuGradient ( ) const

Compute and apply gradients of different costs with respect to mus.

MUST be called after the corresponding fprop Computes gradients of the non discriminant cost with respect to mu and sigma.

Definition at line 1066 of file NnlmOutputLayer.cc.

References applyMuCandidateGradient(), applyMuTargetGradient(), c, candidates, context, cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, dl_decrease_constant, dl_lr, dl_start_learning_rate, i, PLearn::OnlineLearningModule::input_size, PLearn::TVec< T >::length(), mu, nd_gradient, PLERROR, shared_candidates, step_number, target_cardinality, the_real_target, and u.

Referenced by bpropUpdate().

{
    dl_lr = dl_start_learning_rate / ( 1.0 + dl_decrease_constant * step_number);


    if( cost == COST_NON_DISCR ) {
        Vec mu_gradient( input_size );
        mu_gradient << nd_gradient;
        for( int i=0; i<input_size; i++ ) {
            mu_gradient[i] = - mu_gradient[i];
            mu(the_real_target,i) -= dl_lr * mu_gradient[i];
        }
    }


    else if( cost == COST_APPROX_DISCR )  {

        // for the target
        applyMuTargetGradient();

        // --- for the others ---
        int c;
        // shared candidates
        for( int i=0; i< shared_candidates.length(); i++ )
        {
            c = shared_candidates[i];
            if( c != the_real_target )  {
                applyMuCandidateGradient(c);
            }
        }

        // context candidates 
        for( int i=0; i< candidates[ context ].length(); i++ )
        {
            c = candidates[ context ][i];
            if( c != the_real_target )  {
                applyMuCandidateGradient(c);
            }
        }

    }


    else if( cost == COST_DISCR )  {
        applyMuTargetGradient();
        for( int u=0; u< target_cardinality; u++ )  {
            if( u != the_real_target )  {
                applyMuCandidateGradient(u);
            }
        }



    }
    else  {
        PLERROR("NnlmOutputLayer::applyMuGradient - invalid cost\n");
    }

}


void PLearn::NnlmOutputLayer::applyMuTargetGradient ( ) const

MUST be called after the corresponding fprop.

Definition at line 1130 of file NnlmOutputLayer.cc.

References beta, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, mu, nd_gradient, pi, PLearn::safeexp(), PLearn::safelog(), the_real_target, and vec_log_p_rg_t.

Referenced by applyMuGradient().

{
//    Vec bill( input_size );


    Vec mu_gradient( input_size );
    mu_gradient << nd_gradient;
    for( int i=0; i<input_size; i++ ) {
        mu_gradient[i] = - mu_gradient[i];

        if( beta(the_real_target,i) > 0.0 ) {
            mu_gradient[i] += safeexp( 
                safelog( pi[the_real_target] ) + vec_log_p_rg_t[the_real_target] + safelog( beta(the_real_target,i) ) - log_sum_p_ru );
        } else  {
            mu_gradient[i] -= safeexp( 
                safelog( pi[the_real_target] ) + vec_log_p_rg_t[the_real_target] + safelog( -beta(the_real_target,i) ) - log_sum_p_ru );
        }

        mu(the_real_target,i) -= dl_lr * mu_gradient[i];

//bill[i] = mu_gradient[i];
    }
//cout << "MU target GRADIENT " << bill << endl;

}


void PLearn::NnlmOutputLayer::applySigmaCandidateGradient ( int  c) const

Definition at line 1272 of file NnlmOutputLayer.cc.

References beta, c, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, pi, PLearn::safeexp(), sigma2, sigma2min, and vec_log_p_rg_t.

Referenced by applySigmaGradient().

{
//    Vec bob( input_size );

    Vec sigma2_gradient( input_size );

    real tmp2 = 0.5 * pi[c] * safeexp( vec_log_p_rg_t[ c ] - log_sum_p_ru );
    real tmp3;

    for( int i=0; i<input_size; i++ ) {
        tmp3 = beta(c,i) * beta(c,i) - 1.0/sigma2(c,i);
        sigma2_gradient[i] = tmp2 * tmp3;
        sigma2(c,i) -= dl_lr * sigma2_gradient[i];

            // Enforce minimal sigma
            if(sigma2( c, i )<sigma2min) {
                sigma2( c, i ) = sigma2min;
            }
//bob[i] = - dl_lr * sigma2_gradient[i];

    }
//cout << "SIGMA candidate GRADIENT " << bob << endl;
}


void PLearn::NnlmOutputLayer::applySigmaGradient ( ) const

Compute and apply gradients of different costs with respect to sigmas.

Definition at line 1184 of file NnlmOutputLayer.cc.

References applySigmaCandidateGradient(), applySigmaTargetGradient(), beta, c, candidates, context, cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, dl_decrease_constant, dl_lr, dl_start_learning_rate, i, PLearn::OnlineLearningModule::input_size, PLearn::TVec< T >::length(), PLERROR, PLearn::safeexp(), shared_candidates, sigma2, step_number, target_cardinality, the_real_target, u, vec_log_p_r_t, and vec_log_p_rg_t.

Referenced by bpropUpdate().

{
    dl_lr = dl_start_learning_rate / ( 1.0 + dl_decrease_constant * step_number);

    Vec sigma2_gradient( input_size );


    if( cost == COST_NON_DISCR ) {

        real tmp = -0.5 * safeexp( vec_log_p_rg_t[ the_real_target ] - vec_log_p_r_t[ the_real_target ] );

        for( int i=0; i<input_size; i++ ) {
            sigma2_gradient[i] = tmp * ( beta(the_real_target,i) * beta(the_real_target,i) - 1.0/sigma2(the_real_target,i) );
            sigma2(the_real_target,i) -= dl_lr * sigma2_gradient[i];
        }

    }


    else if( cost == COST_APPROX_DISCR )  {
        applySigmaTargetGradient();

        // --- for the others ---
        int c;
        // shared candidates
        for( int i=0; i< shared_candidates.length(); i++ )
        {
            c = shared_candidates[i];
            if( c != the_real_target )  {
                applySigmaCandidateGradient(c);
            }
        }

        // context candidates 
        for( int i=0; i< candidates[ context ].length(); i++ )
        {
            c = candidates[ context ][i];
            if( c != the_real_target )  {
                applySigmaCandidateGradient(c);
            }
        }

    }


    else if( cost == COST_DISCR )  {
        applySigmaTargetGradient();
        for( int u=0; u< target_cardinality; u++ )  {
            if( u != the_real_target )  {
                applySigmaCandidateGradient(u);
            }
        }

    }
    else  {
        PLERROR("NnlmOutputLayer::applySigmaGradient - invalid cost\n");
    }


}


void PLearn::NnlmOutputLayer::applySigmaTargetGradient ( ) const

Definition at line 1244 of file NnlmOutputLayer.cc.

References beta, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, pi, PLearn::safeexp(), sigma2, sigma2min, the_real_target, vec_log_p_r_t, and vec_log_p_rg_t.

Referenced by applySigmaGradient().

{
  //  Vec bob( input_size );

    Vec sigma2_gradient( input_size );

    real tmp = -0.5 * safeexp( vec_log_p_rg_t[ the_real_target ] - vec_log_p_r_t[ the_real_target ] );
    real tmp2 = 0.5 * pi[the_real_target] * safeexp( vec_log_p_rg_t[ the_real_target ] - log_sum_p_ru );
    real tmp3;

    for( int i=0; i<input_size; i++ ) {
        tmp3 = beta(the_real_target,i) * beta(the_real_target,i) - 1.0/sigma2(the_real_target,i);
        sigma2_gradient[i] = tmp * tmp3;
        sigma2_gradient[i] += tmp2 * tmp3;
        sigma2(the_real_target,i) -= dl_lr * sigma2_gradient[i];

            // Enforce minimal sigma
            if(sigma2( the_real_target, i )<sigma2min) {
                sigma2( the_real_target, i ) = sigma2min;
            }

//bob[i] = sigma2_gradient[i];
    }
//cout << "SIGMA target GRADIENT " << bob << endl;

}


void PLearn::NnlmOutputLayer::bpropUpdate ( const Vec input,
const Vec output,
Vec input_gradient,
const Vec output_gradient 
) [virtual]

Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).

Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. Note: part of the gradient is computed in NnlmOnlineLearner (TODO: find out why). This version also allows obtaining the input gradient. N.B. THE DEFAULT IMPLEMENTATION IN THE SUPER-CLASS JUST RAISES A PLERROR.

Definition at line 909 of file NnlmOutputLayer.cc.

References ad_gradient, applyMuGradient(), applySigmaGradient(), computeApproxDiscriminantGradient(), computeDiscriminantGradient(), computeNonDiscriminantGradient(), cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, fd_gradient, i, PLearn::OnlineLearningModule::input_size, learning, LEARNING_DISCRIMINANT, nd_gradient, PLearn::OnlineLearningModule::output_size, PLERROR, and PLearn::TVec< T >::size().

{

    int in_size = input.size();
    int out_size = output.size();
    int og_size = output_gradient.size();

    // *** Sanity checks
    if( in_size != input_size ) {
        PLERROR("NnlmOutputLayer::bpropUpdate:'input.size()' should be equal\n"
                " to 'input_size' (%i != %i)\n", in_size, input_size);
    }  else if( out_size != output_size )  {
        PLERROR("NnlmOutputLayer::bpropUpdate:'output.size()' should be"
                " equal\n"
                " to 'output_size' (%i != %i)\n", out_size, output_size);
    }  else if( og_size != output_size )  {
        PLERROR("NnlmOutputLayer::bpropUpdate:'output_gradient.size()'"
                " should\n"
                " be equal to 'output_size' (%i != %i)\n",
                og_size, output_size);
    }

    // *** Compute input_gradient ***
    // *** Compute input_gradient ***

    if( cost == COST_NON_DISCR ) {
        computeNonDiscriminantGradient();
        input_gradient << nd_gradient;
    }
    else if( cost == COST_APPROX_DISCR )  {
        computeApproxDiscriminantGradient();
        input_gradient << ad_gradient;
    }

    else if( cost == COST_DISCR )  {
        computeDiscriminantGradient();
        input_gradient << fd_gradient;
    }
    else  {
        PLERROR("NnlmOutputLayer::bpropUpdate - invalid cost\n");
    }

//    cout << "NnlmOutputLayer::bpropUpdate -> input_gradient " << input_gradient << endl; 

    #ifdef BOUNDCHECK
    for(int i=0; i<input_size; i++) {
        if( isnan(input_gradient[i]) ) {
          PLERROR( "NnlmOutputLayer::bpropUpdate - isnan(input_gradient[i]) true.\n" );
        }
    }
    #endif



    // *** Discriminant learning of mu and sigma ***
    // *** Discriminant learning of mu and sigma ***

    if( learning == LEARNING_DISCRIMINANT )  {
        applyMuGradient();
        applySigmaGradient();
    }
    // *** Empirical learning of mu and sigma ***
    // *** Empirical learning of mu and sigma ***

    //if( learning == LEARNING_EMPIRICAL )  {
    //    applyMuAndSigmaEmpirical();
    //}


}


void PLearn::NnlmOutputLayer::build ( ) [virtual]

Post-constructor.

The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 180 of file NnlmOutputLayer.cc.

References PLearn::OnlineLearningModule::build(), and build_().


void PLearn::NnlmOutputLayer::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 189 of file NnlmOutputLayer.cc.

References PLearn::OnlineLearningModule::input_size, mu, PLearn::OnlineLearningModule::output_size, PLERROR, resetParameters(), and PLearn::TMat< T >::size().

Referenced by build().

{

    // *** Sanity checks ***
    if( input_size <= 0 )  {
        PLERROR("NnlmOutputLayer::build_: 'input_size' <= 0 (%i).\n"
                "You should set it to a positive integer.\n", input_size);
    }  else if( output_size != 1 )  {
        PLERROR("NnlmOutputLayer::build_: 'output_size'(=%i) != 1\n"
                  , output_size);
    }

    // *** Parameters not initialized ***
    if( mu.size() == 0 )   {
        resetParameters();
    }

}


string PLearn::NnlmOutputLayer::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmOutputLayer.cc.

void PLearn::NnlmOutputLayer::compute_approx_nl_p_t_r ( const Vec input,
Vec output 
) const

Computes the approximation -log( p(t|r) ) using only some candidates for normalization.

Computes the approximate discriminant cost.

Definition at line 716 of file NnlmOutputLayer.cc.

References c, candidates, compute_nl_p_rt(), context, i, PLearn::TVec< T >::length(), log_sum_p_ru, PLearn::logadd(), PLERROR, PLearn::TVec< T >::resize(), setTarget(), shared_candidates, the_real_target, and vec_log_p_rt.

Referenced by fprop().

{
    // *** Compute for the target ***
    Vec vec_nd_cost(1);
    compute_nl_p_rt(input, vec_nd_cost);

//nd_cost = -log_p_rt;

    // *** Compute for the normalization candidates ***
    Vec nl_p_ru;
    nl_p_ru.resize( 1 );
    log_sum_p_ru = vec_log_p_rt[the_real_target];
    int c;

    // shared candidates
    for( int i=0; i< shared_candidates.length(); i++ )
    {
        c = shared_candidates[i];
        if( c!=the_real_target )  {
            setTarget( c );
            compute_nl_p_rt( input, nl_p_ru );
            log_sum_p_ru = logadd(log_sum_p_ru, -nl_p_ru[0]);
        }
    }

    // context candidates 
    for( int i=0; i< candidates[ context ].length(); i++ )
    {
        c = candidates[ context ][i];
        if( c!=the_real_target )  {
            setTarget( c );
            compute_nl_p_rt( input, nl_p_ru );
            log_sum_p_ru = logadd(log_sum_p_ru, -nl_p_ru[0]);
        }
    }

    // *** The approximate discriminant cost ***
    output[0] = vec_nd_cost[0] + log_sum_p_ru;

#ifdef BOUNDCHECK
    if( isnan(output[0]) ) {
      PLERROR( "NnlmOutputLayer::compute_approx_nl_p_t_r - NAN present.\n" );
    }
#endif

}


void PLearn::NnlmOutputLayer::compute_nl_p_rt ( const Vec input,
Vec output 
) const

Computes -log( p(r,t) ) = -log( ( umc p_gaussian(r|t) + (1-umc) p_uniform(r|t) ) p(t) )

Definition at line 558 of file NnlmOutputLayer.cc.

References beta, g_exponent, i, PLearn::OnlineLearningModule::input_size, log_g_det_covariance, log_g_normalization, PLearn::logadd(), mu, pi, Pi, PLERROR, s, PLearn::safelog(), sigma2, PLearn::TVec< T >::size(), target, umc, vec_log_p_r_t, vec_log_p_rg_t, and vec_log_p_rt.

Referenced by compute_approx_nl_p_t_r(), compute_nl_p_t_r(), fprop(), and getBestCandidates().

{

    // *** Sanity check ***
    int in_size = input.size();
    if( in_size != input_size ) {
        PLERROR("NnlmOutputLayer::compute_nl_p_rt: 'input.size()' should be equal\n"
                " to 'input_size' (%i != %i)\n", in_size, input_size);
    }

    // *** Compute gaussian's exponent - 'g' means gaussian ***
    // NOTE \Sigma is a diagonal matrix, ie det() = \Prod and inverse is 1/...

    g_exponent = 0.0;
    log_g_det_covariance = 0.0;

    //cout << "**** s ";

    for(int i=0; i<input_size; i++) {
      //cout << "g_exponent " << g_exponent << endl;
      // s = r[i] - mu_t[i]
      s = input[i] - mu(target, i);

      //cout << s ;

      // memorize this calculation for gradients computation
      beta(target, i) = s / sigma2(target, i);

      g_exponent += s * beta(target, i);

      // determinant of covariance matrix
      log_g_det_covariance += safelog( sigma2(target, i) );
    }
    //cout << endl;

    g_exponent *= -0.5;

    // ### Should we use logs here?
    //cout << "g_exponent " << g_exponent << " log_g_det_covariance " << log_g_det_covariance << endl;

#ifdef BOUNDCHECK
    if( isnan(g_exponent) || isnan(log_g_det_covariance) ) {
      PLERROR( "NnlmOutputLayer::compute_nl_p_rt - NAN present.\n" );
    }
#endif

    // * Compute normalizing factor
    log_g_normalization = - 0.5 * ( (input_size) * safelog(2.0 * Pi) + log_g_det_covariance );

    //cout << "log_g_normalization " << log_g_normalization << endl;

    // * Compute log p(r,g|t) = log( p(r|t,g) p(g) ) = log( umc p_gaussian(r|t) )
    vec_log_p_rg_t[target] = safelog(umc) + g_exponent + log_g_normalization;

    //cout << "p(r,g|t) " << safeexp( vec_log_p_rg_t[target] ) << endl;

    // * Compute log p(r|t) = log( umc p_g(r|t) + (1-umc) p_u(r|t) )
    vec_log_p_r_t[target] = logadd( vec_log_p_rg_t[target] , safelog(1.0-umc) - (input_size) * safelog(2.0));

    //cout << "p_u " << safeexp( safelog(1.0-umc) - (input_size) * safelog(2.0) ) << endl;

    // * Compute log p(r,t)
    vec_log_p_rt[target] = safelog(pi[target]) + vec_log_p_r_t[target];

    // * Compute output
    output[0] = - vec_log_p_rt[target];

    //cout << "safeexp( vec_log_p_rt[target] ) " << safeexp( vec_log_p_rt[target] ) << endl;

#ifdef BOUNDCHECK
    if( isnan(vec_log_p_rt[target]) ) {
      PLERROR( "NnlmOutputLayer::compute_nl_p_rt - NAN present.\n" );
    }
#endif

    // * Compute posterior for coeff_class_conditional_uniform_mixture evaluation in the bpropUpdate
    // p(generated by gaussian| r) = a p_g(r|i) / p(r|i)
    //log_p_g_r = safelog(umc) + g_exponent + log_g_normalization - log_p_r_i;

}


void PLearn::NnlmOutputLayer::compute_nl_p_t_r ( const Vec input,
Vec output 
) const

Computes -log( p(t|r) )

Computes -log( p(t|r) ) = -log( p(r,t) / sum_u p(r,u) )

Definition at line 643 of file NnlmOutputLayer.cc.

References compute_nl_p_rt(), log_sum_p_ru, PLearn::logadd(), PLERROR, PLearn::TVec< T >::resize(), setTarget(), target_cardinality, and u.

Referenced by fprop().

{
    Vec nl_p_rt;
    Vec nl_p_ru;

    nl_p_rt.resize( 1 );
    nl_p_ru.resize( 1 );


    // * Compute numerator
    compute_nl_p_rt( input, nl_p_rt );

    // * Compute denominator
    // Normalize over whole vocabulary

    log_sum_p_ru = -REAL_MAX;

    for(int u=0; u<target_cardinality; u++)  {
        setTarget( u );
        compute_nl_p_rt( input, nl_p_ru );
        log_sum_p_ru = logadd(log_sum_p_ru, -nl_p_ru[0]);
    }

    //cout << "log_p_rt[0] " << -nl_p_rt[0] << " log_sum_p_ru " << log_sum_p_ru << endl;

    output[0] = nl_p_rt[0] + log_sum_p_ru;

    //cout << "p_t_r " << safeexp( - output[0] ) << endl;

#ifdef BOUNDCHECK
    if( isnan(output[0]) ) {
      PLERROR( "NnlmOutputLayer::compute_nl_p_t_r - NAN present.\n" );
    }
#endif

}


void PLearn::NnlmOutputLayer::computeApproxDiscriminantGradient ( ) const

MUST be called after the corresponding fprop.

Definition at line 786 of file NnlmOutputLayer.cc.

References ad_gradient, addCandidateContribution(), c, candidates, computeNonDiscriminantGradient(), context, PLearn::TVec< T >::fill(), gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, i, PLearn::OnlineLearningModule::input_size, j, PLearn::TVec< T >::length(), log_sum_p_ru, PLearn::logsub(), nd_gradient, PLearn::safeexp(), shared_candidates, and the_real_target.

Referenced by bpropUpdate().

{
    gradient_log_tmp.fill(-REAL_MAX);
    gradient_log_tmp_pos.fill(-REAL_MAX);
    gradient_log_tmp_neg.fill(-REAL_MAX);

    // * Compute nd gradient
    computeNonDiscriminantGradient();

    // * Compute ad specific term
    int c;

    // target
    addCandidateContribution( the_real_target );

    // shared candidates
    for( int i=0; i< shared_candidates.length(); i++ )
    {
        c = shared_candidates[i];
        if( c != the_real_target )
            addCandidateContribution( c );
    }

    // context candidates 
    for( int i=0; i< candidates[ context ].length(); i++ )
    {
        c = candidates[ context ][i];
        if( c != the_real_target )
            addCandidateContribution( c );
    }


    // *** The corresponding approx gradient ***
    for(int j=0; j<input_size; j++) {
        if( gradient_log_tmp_pos[j] > gradient_log_tmp_neg[j] ) {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_pos[j], gradient_log_tmp_neg[j] );
            ad_gradient[j] = nd_gradient[j] - safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        } else  {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_neg[j], gradient_log_tmp_pos[j] );
            ad_gradient[j] = nd_gradient[j] + safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        }
    }

}


void PLearn::NnlmOutputLayer::computeDiscriminantGradient ( ) const

MUST be called after the corresponding fprop.

Definition at line 835 of file NnlmOutputLayer.cc.

References addCandidateContribution(), computeNonDiscriminantGradient(), fd_gradient, PLearn::TVec< T >::fill(), gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, PLearn::OnlineLearningModule::input_size, j, log_sum_p_ru, PLearn::logsub(), nd_gradient, PLearn::safeexp(), target_cardinality, and u.

Referenced by bpropUpdate().

{
    gradient_log_tmp.fill(-REAL_MAX);
    gradient_log_tmp_pos.fill(-REAL_MAX);
    gradient_log_tmp_neg.fill(-REAL_MAX);

    // * Compute nd gradient
    computeNonDiscriminantGradient();

    // * Compute ad specific term
    for( int u=0; u< target_cardinality; u++ )
    {
        addCandidateContribution( u );
    }


    // *** The corresponding approx gradient ***
    for(int j=0; j<input_size; j++) {
        if( gradient_log_tmp_pos[j] > gradient_log_tmp_neg[j] ) {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_pos[j], gradient_log_tmp_neg[j] );
            fd_gradient[j] = nd_gradient[j] - safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        } else  {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_neg[j], gradient_log_tmp_pos[j] );
            fd_gradient[j] = nd_gradient[j] + safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        }
    }

//cout << "===nd_gradient " << nd_gradient << endl;
//cout << "---fd_gradient " << nd_gradient << endl;

}
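The pattern above (and in computeApproxDiscriminantGradient()) accumulates the positive and negative parts of the gradient separately in the log domain and only combines them at the end with a log-subtraction, which avoids underflow and catastrophic cancellation. A standalone sketch of the idea (plain C++; the helper names are hypothetical, standing in for PLearn::logadd() and PLearn::logsub()):

#include <algorithm>
#include <cmath>
#include <limits>

static double log_add(double a, double b)     // log( exp(a) + exp(b) )
{
    if (a < b) std::swap(a, b);
    return a + std::log1p(std::exp(b - a));
}

static double log_sub(double a, double b)     // log( exp(a) - exp(b) ), requires a >= b
{
    return a + std::log1p(-std::exp(b - a));
}

// Given contributions with known signs, expressed as log|c_k|, return the signed
// value sign * exp( log|sum| - log_norm ); the caller then subtracts it from the
// non-discriminant gradient, as computeDiscriminantGradient() does above.
double signed_log_domain_sum(const double* log_abs, const int* sign, int n, double log_norm)
{
    double pos = -std::numeric_limits<double>::max();   // log of the positive part
    double neg = -std::numeric_limits<double>::max();   // log of the negative part
    for (int k = 0; k < n; ++k) {
        if (sign[k] > 0) pos = log_add(pos, log_abs[k]);
        else             neg = log_add(neg, log_abs[k]);
    }
    if (pos > neg)
        return  std::exp(log_sub(pos, neg) - log_norm);
    else
        return -std::exp(log_sub(neg, pos) - log_norm);
}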


void PLearn::NnlmOutputLayer::computeEmpiricalLearningRateParameters ( )

Computes the word-specific empirical learning rates. MUST be called after a valid call to applyClassCounts()

Definition at line 431 of file NnlmOutputLayer.cc.

References el_decrease_constant, el_last_update, el_start_discount_factor, el_start_learning_rate, PLearn::TVec< T >::fill(), i, PLearn::pow(), PLearn::TVec< T >::resize(), s_sumI, sumI, and target_cardinality.

{
    // *** Start learning rate *** 
    // (1-slr)^n = el_start_discount_factor -> slr = 1 - (el_start_discount_factor)^{1/n}
    el_start_learning_rate.resize(target_cardinality);
    el_start_learning_rate.fill(1.0);
    for(int i=0; i<target_cardinality; i++) {
        el_start_learning_rate[i] -= pow( el_start_discount_factor, 1.0/sumI[i] );
    }

    // *** Decrease constant *** 
    el_decrease_constant.resize(target_cardinality);
    el_decrease_constant.fill(0.0);

    // *** To memorize the step of the last update to the word ***
    el_last_update.resize(target_cardinality);
    el_last_update.fill(s_sumI);

}
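As a quick numerical check of the start-learning-rate formula (a standalone sketch; the function name is hypothetical):

#include <cmath>

// (1 - slr)^n = el_start_discount_factor  =>  slr = 1 - discount^(1/n)
double empirical_start_lr(double discount, int n)
{
    return 1.0 - std::pow(discount, 1.0 / n);
}

// e.g. empirical_start_lr(0.01, 100) ~= 0.045: the more often a word was seen
// (larger sumI[i]), the smaller its initial empirical learning rate.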


void PLearn::NnlmOutputLayer::computeNonDiscriminantGradient ( ) const

Gradients with respect to input.

MUST be called after the corresponding fprop.

Definition at line 769 of file NnlmOutputLayer.cc.

References beta, i, PLearn::OnlineLearningModule::input_size, nd_gradient, PLearn::safeexp(), the_real_target, vec_log_p_r_t, and vec_log_p_rg_t.

Referenced by bpropUpdate(), computeApproxDiscriminantGradient(), and computeDiscriminantGradient().

{
    //cout << "vec_log_p_rg_t[the_real_target] " << vec_log_p_rg_t[the_real_target] << " vec_log_p_r_t[the_real_target] " << vec_log_p_r_t[the_real_target] << endl;

    real tmp = safeexp( vec_log_p_rg_t[the_real_target] - vec_log_p_r_t[the_real_target] );

    for(int i=0; i<input_size; i++) {
        nd_gradient[i] = beta( the_real_target, i) * tmp;
    }

}


void PLearn::NnlmOutputLayer::declareOptions ( OptionList ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 100 of file NnlmOutputLayer.cc.

References PLearn::OptionBase::buildoption, context_cardinality, PLearn::declareOption(), PLearn::OnlineLearningModule::declareOptions(), dl_decrease_constant, dl_start_learning_rate, el_start_discount_factor, PLearn::OptionBase::learntoption, mu, pi, s_sumI, sigma2, sigma2min, step_number, sumI, sumR, sumR2, target_cardinality, and umc.

{
    // * Build Options *
    // * Build Options *
    declareOption(ol, "target_cardinality",
                  &NnlmOutputLayer::target_cardinality,
                  OptionBase::buildoption,
                  "Number of target tags.");

    declareOption(ol, "context_cardinality",
                  &NnlmOutputLayer::context_cardinality,
                  OptionBase::buildoption,
                  "Number of context tags (usually, there will be the additional 'missing' tag).");

    declareOption(ol, "sigma2min",
                  &NnlmOutputLayer::sigma2min,
                  OptionBase::buildoption,
                  "Minimal value for the diagonal covariance matrix.");

    declareOption(ol, "dl_start_learning_rate",
                  &NnlmOutputLayer::dl_start_learning_rate,
                  OptionBase::buildoption,
                  "Discriminant learning start learning rate.");
    declareOption(ol, "dl_decrease_constant",
                  &NnlmOutputLayer::dl_decrease_constant,
                  OptionBase::buildoption,
                  "Discriminant learning decrease constant.");

    declareOption(ol, "el_start_discount_factor",
                  &NnlmOutputLayer::el_start_discount_factor,
                  OptionBase::buildoption,
                  "How much weight is given to the first example of a given word with respect to the last, ex 0,2.");
/*    declareOption(ol, "el_decrease_constant",
                  &NnlmOutputLayer::el_decrease_constant,
                  OptionBase::buildoption,
                  "Empirical learning decrease constant of gaussian parameters discount rate.");
*/

    // * Learnt Options *
    // * Learnt Options *
    declareOption(ol, "step_number", &NnlmOutputLayer::step_number,
                  OptionBase::learntoption,
                  "The step number, incremented after each update.");

    declareOption(ol, "umc", &NnlmOutputLayer::umc,
                  OptionBase::learntoption,
                  "The uniform mixture coefficient. p(r|i) = umc p_gauss + (1-umc) p_uniform");

    declareOption(ol, "pi", &NnlmOutputLayer::pi,
                  OptionBase::learntoption,
                  "pi[t] -> moyenne empirique de y==t" );
    declareOption(ol, "mu", &NnlmOutputLayer::mu,
                  OptionBase::learntoption,
                  "mu(t) -> moyenne empirique des r quand y==t" );
    declareOption(ol, "sigma2", &NnlmOutputLayer::sigma2,
                  OptionBase::learntoption,
                  "sigma2(t) -> variance empirique des r quand y==t" );

    declareOption(ol, "sumR", &NnlmOutputLayer::sumR,
                  OptionBase::learntoption,
                  "sumR(i) -> sum_t r_t 1_{y==i}" );
    declareOption(ol, "sumR2", &NnlmOutputLayer::sumR2,
                  OptionBase::learntoption,
                  "sumR2(i) -> sum_t r_t^2 1_{y==i}" );
    declareOption(ol, "sumI", &NnlmOutputLayer::sumI,
                  OptionBase::learntoption,
                  "sumI(i) -> sum_t 1_{y==i}" );
    declareOption(ol, "s_sumI", &NnlmOutputLayer::s_sumI,
                  OptionBase::learntoption,
                  "sum_t 1" );

    // ### other?

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}


static const PPath& PLearn::NnlmOutputLayer::declaringFile ( ) [inline, static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 201 of file NnlmOutputLayer.h.

NnlmOutputLayer * PLearn::NnlmOutputLayer::deepCopy ( CopiesMap copies) const [virtual]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 49 of file NnlmOutputLayer.cc.

void PLearn::NnlmOutputLayer::forget ( ) [virtual]

Resets the parameters to the state they would be in BEFORE starting training.

Note that this method is necessarily called from build().

Implements PLearn::OnlineLearningModule.

Definition at line 1298 of file NnlmOutputLayer.cc.

References PLearn::endl(), and resetParameters().

{
    cout << "NnlmOutputLayer::forget()" << endl;
    resetParameters();
}


void PLearn::NnlmOutputLayer::fprop ( const Vec input,
Vec output 
) const [virtual]

Computes the 'cost' for the 'target' given the input; computes the output (possibly resizing it appropriately).

The output is computed from p(r,c) = p(r|c) * p(c):

  • cost = DISCRIMINANT: output is the NL of p(c|r) = p(r,c) / sum_{c'=0}^{target_cardinality} p(r,c')
  • cost = DISCRIMINANT APPROXIMATED: output is the NL of p(c|r)_approx = p(r,c) / sum_{c' in Candidates} p(r,c')
  • cost = NON DISCRIMINANT: output is the NL of p(r,c), where p(r|c) = umc * p_gaussian(r|c) + (1-umc) / 2^n

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 524 of file NnlmOutputLayer.cc.

References applyMuAndSigmaEmpiricalUpdate(), compute_approx_nl_p_t_r(), compute_nl_p_rt(), compute_nl_p_t_r(), cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, is_learning, learning, LEARNING_EMPIRICAL, PLERROR, target, and the_real_target.

{

    the_real_target = target;


    // *** In the case of empirical (max likelihood) learning of mu and sigma ***
    // we can update mu and sigma before computing the cost and backpropagating.
    if( (learning==LEARNING_EMPIRICAL) && is_learning )  {
        applyMuAndSigmaEmpiricalUpdate(input);
    }

    // *** Non-discriminant cost: -log( p(r,t) ) ***
    if( cost == COST_NON_DISCR ) {
        compute_nl_p_rt( input, output );
    }
    // *** Approx-discriminant cost ***
    else if( cost == COST_APPROX_DISCR )  {
        compute_approx_nl_p_t_r( input, output );
    }
    // *** Discriminant cost: -log( p(t|r) ) ***
    else if( cost == COST_DISCR )  {
        compute_nl_p_t_r( input, output );
    }
    else  {
        PLERROR("NnlmOutputLayer::fprop - invalid cost\n");
    }

}
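For orientation, a minimal, hypothetical sketch of one training step driven from the outside (in practice NnlmOnlineLearner sets up and drives this module; the option values and tags below are illustrative assumptions):

// Build the module (sizes are assumptions).
PP<NnlmOutputLayer> out = new NnlmOutputLayer();
out->input_size          = 50;      // size of the representation r
out->output_size         = 1;       // the module outputs a single cost value
out->target_cardinality  = 10000;   // number of target tags (vocabulary size)
out->context_cardinality = 10001;   // vocabulary + the 'missing' tag
out->build();
// NOTE: pi, mu and sigma2 would normally be initialized empirically first
// (resetAllClassVars() / updateClassVars() / applyAllClassVars()).

out->setLearning( NnlmOutputLayer::LEARNING_DISCRIMINANT );
out->setCost( NnlmOutputLayer::COST_DISCR );   // COST_APPROX_DISCR also needs the candidate sets

int word_tag = 42, context_tag = 7;            // hypothetical tags
Vec r( out->input_size );                      // representation from the previous layer
Vec nll( 1 ), r_gradient( out->input_size ), nll_gradient( 1 );
nll_gradient[0] = 1.0;

out->setTarget( word_tag );
out->setContext( context_tag );                // determines the candidate set (approx. cost)
out->fprop( r, nll );                          // nll[0] = -log p(t|r) for COST_DISCR
out->bpropUpdate( r, nll, r_gradient, nll_gradient );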


void PLearn::NnlmOutputLayer::getBestCandidates ( const Vec input,
Vec candidate_tags,
Vec probabilities 
) const

Returns the best candidates according to compute_nl_p_t_r.

May be called after compute_nl_p_t_r to find out which words get highest probability according to the model.

Definition at line 682 of file NnlmOutputLayer.cc.

References PLearn::TVec< T >::clear(), compute_nl_p_rt(), i, log_sum_p_ru, PLearn::TVec< T >::resize(), PLearn::safeexp(), setTarget(), target_cardinality, u, and PLearn::wordAndProbGT().

{
                candidate_tags.resize(10);
                probabilities.resize(10);

    std::vector< wordAndProb > tmp;
                Vec nl_p_ru(1);

    for(int u=0; u<target_cardinality; u++)  {  
        setTarget( u );
        compute_nl_p_rt( input, nl_p_ru );

        tmp.push_back( wordAndProb( u, safeexp( - (nl_p_ru[0] + log_sum_p_ru) ) ) );
    }

    std::sort(tmp.begin(), tmp.end(), wordAndProbGT);

    // HACK we don't check if itr has hit the end... unlikely target_cardinality is smaller than 10
    std::vector< wordAndProb >::iterator itr_vec;
    itr_vec=tmp.begin();
    for(int i=0; i<10; i++) {
                candidate_tags[i] = itr_vec->wordtag;
                probabilities[i] = itr_vec->probability;
        itr_vec++;
    }

    tmp.clear();
}


OptionList & PLearn::NnlmOutputLayer::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmOutputLayer.cc.

OptionMap & PLearn::NnlmOutputLayer::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmOutputLayer.cc.

RemoteMethodMap & PLearn::NnlmOutputLayer::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 49 of file NnlmOutputLayer.cc.

void PLearn::NnlmOutputLayer::makeDeepCopyFromShallowCopy ( CopiesMap &  copies) [virtual]

Transforms a shallow copy into a deep copy.

void PLearn::NnlmOutputLayer::resetAllClassVars ( )

Used for initializing s_sumI, sumI, sumR, sumR2, as well as pi, mu and sigma2 to the max likelihood values.

Definition at line 306 of file NnlmOutputLayer.cc.

References PLearn::endl(), PLearn::TVec< T >::fill(), PLearn::TMat< T >::fill(), global_sumR, global_sumR2, PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), s_sumI, sumI, sumR, sumR2, and target_cardinality.

Referenced by resetParameters().

{

    cout << "NnlmOutputLayer::resetAllClassVars()" << endl;

    s_sumI = 0;
    sumI.resize( target_cardinality );
    sumI.fill( 0 );
    sumR.resize( target_cardinality, input_size);
    sumR.fill( 0.0 );
    sumR2.resize( target_cardinality, input_size);
    sumR2.fill( 0.0 );

    // ### for a global_sigma2
    global_sumR.resize(input_size);
    global_sumR.fill( 0.0 );
    global_sumR2.resize(input_size);
    global_sumR2.fill( 0.0 );
    // ### for a global_sigma2
}


void PLearn::NnlmOutputLayer::resetParameters ( )

Resizes variables and sets pretty much everything back to a 'zero' value.

Definition at line 248 of file NnlmOutputLayer.cc.

References ad_gradient, beta, bill, bob, PLearn::endl(), fd_gradient, PLearn::TVec< T >::fill(), PLearn::TMat< T >::fill(), global_mu, global_sigma2, gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, PLearn::OnlineLearningModule::input_size, mu, nd_gradient, pi, resetAllClassVars(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), sigma2, step_number, target_cardinality, umc, vec_log_p_r_t, vec_log_p_rg_t, and vec_log_p_rt.

Referenced by build_(), and forget().

{

    cout << "NnlmOutputLayer::resetParameters()" << endl;

    step_number = 0;
    umc = 0.999999; // ###

    pi.resize( target_cardinality );
    pi.fill( 0.0 );
    mu.resize( target_cardinality, input_size);
    mu.fill( 0.0 );
    sigma2.resize( target_cardinality, input_size);
    sigma2.fill( 0.0 );

    // ### for a global_sigma2
    global_mu.resize(input_size);
    global_mu.fill( 0.0 );
    global_sigma2.resize(input_size);
    global_sigma2.fill( 0.0 );
    // ### for a global_sigma2

    resetAllClassVars();

    vec_log_p_rg_t.resize( target_cardinality );
    vec_log_p_r_t.resize( target_cardinality );
    vec_log_p_rt.resize( target_cardinality );
    beta.resize( target_cardinality, input_size );

    nd_gradient.resize( input_size );
    nd_gradient.fill( 0.0 );
    ad_gradient.resize( input_size );
    ad_gradient.fill( 0.0 );
    fd_gradient.resize( input_size );
    fd_gradient.fill( 0.0 );

    bill.resize( input_size );
    bill.fill( 0.0 );
    bob.resize( input_size );
    bob.fill( 0.0 );

    gradient_log_tmp.resize( input_size );
    gradient_log_tmp.fill( 0.0 );
    gradient_log_tmp_pos.resize( input_size );
    gradient_log_tmp_pos.fill( 0.0 );
    gradient_log_tmp_neg.resize( input_size );
    gradient_log_tmp_neg.fill( 0.0 );

    //log_p_g_r = safelog( 0.9 );
    //sum_log_p_g_r = -REAL_MAX;

}


void PLearn::NnlmOutputLayer::setContext ( int  the_context) const

Sets the context. The Candidates set of the approximated discriminant cost is determined from the context.

Definition at line 469 of file NnlmOutputLayer.cc.

References context, context_cardinality, and PLERROR.

{
#ifdef BOUNDCHECK
    if( the_context >= context_cardinality )  {
        PLERROR("NnlmOutputLayer::setContext:'the_context'(=%i) >= 'context_cardinality'(=%i)\n"
                  , the_context, context_cardinality);
    }
#endif

    context = the_context;
}
void PLearn::NnlmOutputLayer::setCost ( int  the_cost)

Sets the cost used in the fprop()

Definition at line 485 of file NnlmOutputLayer.cc.

References cost, and PLERROR.

{
#ifdef BOUNDCHECK
    if( the_cost > 2 || the_cost < 0 )  {
        PLERROR("NnlmOutputLayer::setCost:'the_cost'(=%i) > '2' or < '0'\n"
                  , the_cost);
    }
#endif

    cost = the_cost;
}
void PLearn::NnlmOutputLayer::setLearning ( int  the_learning)

Definition at line 501 of file NnlmOutputLayer.cc.

References learning, and PLERROR.

{
#ifdef BOUNDCHECK
    if( the_learning > 1 || the_learning < 0 )  {
        PLERROR("NnlmOutputLayer::setLearning:'the_learning'(=%i) > '1' or < '0'\n"
                  , the_learning);
    }
#endif

    learning = the_learning;
}
void PLearn::NnlmOutputLayer::setTarget ( int  the_target) const

Sets t, the target.

Definition at line 454 of file NnlmOutputLayer.cc.

References PLERROR, target, and target_cardinality.

Referenced by compute_approx_nl_p_t_r(), compute_nl_p_t_r(), and getBestCandidates().

{
#ifdef BOUNDCHECK
    if( the_target >= target_cardinality )  {
        PLERROR("NnlmOutputLayer::setTarget:'the_target'(=%i) >= 'target_cardinality'(=%i)\n",
                   the_target, target_cardinality);
    }
#endif

    target = the_target;
}


void PLearn::NnlmOutputLayer::updateClassVars ( const int  the_target,
const Vec the_input 
)

Definition at line 330 of file NnlmOutputLayer.cc.

References global_sumR, global_sumR2, i, PLearn::OnlineLearningModule::input_size, PLERROR, s_sumI, sumI, sumR, sumR2, and target_cardinality.

{
    #ifdef BOUNDCHECK
    if( the_target >= target_cardinality )  {
        PLERROR("NnlmOutputLayer::updateClassVars:'the_target'(=%i) >= 'target_cardinality'(=%i)\n",
                   the_target, target_cardinality);
    }
    #endif

    s_sumI++;
    sumI[the_target]++;
    for(int i=0; i<input_size; i++) {
      sumR( the_target, i ) += the_input[i];
      sumR2( the_target, i ) += the_input[i]*the_input[i];

      // ### for a global_sigma2
      global_sumR[i] += the_input[i];
      global_sumR2[i] += the_input[i]*the_input[i];
      // ### for a global_sigma2
    }

}
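Taken together with resetAllClassVars() and applyAllClassVars(), this gives the empirical (maximum-likelihood) initialization pass. A hypothetical sketch of that pass, assuming a built module 'out' (as in the earlier sketch) and an assumed container of (word tag, representation) pairs:

#include <utility>
#include <vector>

std::vector< std::pair<int, Vec> > training_pairs;   // assumed filled elsewhere

out->resetAllClassVars();                            // zero s_sumI, sumI, sumR, sumR2
for (std::size_t n = 0; n < training_pairs.size(); ++n)
    out->updateClassVars( training_pairs[n].first, training_pairs[n].second );
out->applyAllClassVars();                            // set pi, mu, sigma2 from the counts
out->computeEmpiricalLearningRateParameters();       // word-specific empirical learning rates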

Member Data Documentation

StaticInitializer NnlmOutputLayer::_static_initializer_ [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 201 of file NnlmOutputLayer.h.

Vec PLearn::NnlmOutputLayer::bill

Definition at line 310 of file NnlmOutputLayer.h.

Referenced by resetParameters().

Vec PLearn::NnlmOutputLayer::bob

Definition at line 311 of file NnlmOutputLayer.h.

Referenced by resetParameters().

int PLearn::NnlmOutputLayer::context_cardinality

specifies the range of the values of 'context' (e.g. including the additional 'missing' tag)

Definition at line 81 of file NnlmOutputLayer.h.

Referenced by declareOptions(), and setContext().

int PLearn::NnlmOutputLayer::cost

Must be set before calling fprop.

the cost

Definition at line 280 of file NnlmOutputLayer.h.

Referenced by applyMuGradient(), applySigmaGradient(), bpropUpdate(), fprop(), and setCost().

real PLearn::NnlmOutputLayer::dl_decrease_constant

Definition at line 88 of file NnlmOutputLayer.h.

Referenced by applyMuGradient(), applySigmaGradient(), and declareOptions().

real PLearn::NnlmOutputLayer::dl_start_learning_rate

Discriminant learning (of $ \mu_u $ and $ \Sigma_u $) - dl.

Definition at line 87 of file NnlmOutputLayer.h.

Referenced by applyMuGradient(), applySigmaGradient(), and declareOptions().

real PLearn::NnlmOutputLayer::el_dr

The original way of computing the mus and sigmas (e.g. memorize r and then divide) slowed learning down with time; we use this discount rate now. TODO validate computation of mus and sigmas (gaussian_learning_discount_rate).

Definition at line 326 of file NnlmOutputLayer.h.

real PLearn::NnlmOutputLayer::el_start_discount_factor

Empirical learning (of $ \mu_u $ and $ \Sigma_u $) - el.

Definition at line 94 of file NnlmOutputLayer.h.

Referenced by computeEmpiricalLearningRateParameters(), and declareOptions().

real PLearn::NnlmOutputLayer::g_exponent

Definition at line 291 of file NnlmOutputLayer.h.

Referenced by compute_nl_p_rt().

bool PLearn::NnlmOutputLayer::is_learning

Definition at line 329 of file NnlmOutputLayer.h.

Referenced by fprop().

int PLearn::NnlmOutputLayer::learning

Definition at line 275 of file NnlmOutputLayer.h.

Referenced by bpropUpdate(), fprop(), and setLearning().

real PLearn::NnlmOutputLayer::log_g_det_covariance

Definition at line 292 of file NnlmOutputLayer.h.

Referenced by compute_nl_p_rt().

real PLearn::NnlmOutputLayer::log_g_normalization

Definition at line 293 of file NnlmOutputLayer.h.

Referenced by compute_nl_p_rt().

real PLearn::NnlmOutputLayer::s

Definition at line 290 of file NnlmOutputLayer.h.

Referenced by compute_nl_p_rt().

int PLearn::NnlmOutputLayer::step_number

keeps track of updates

Definition at line 230 of file NnlmOutputLayer.h.

Referenced by applyMuGradient(), applySigmaGradient(), declareOptions(), and resetParameters().

int PLearn::NnlmOutputLayer::target

the current word -> we use its parameters to compute output

Definition at line 283 of file NnlmOutputLayer.h.

Referenced by applyMuAndSigmaEmpiricalUpdate(), compute_nl_p_rt(), fprop(), and setTarget().

real PLearn::NnlmOutputLayer::umc

We use a mixture with a uniform to prevent negligible probabilities which cause gradient explosions.

Should be learned as the mean of p(g|r) (the probability that the gaussian is responsible for the observation, given r); uniform mixture coefficient.

Definition at line 237 of file NnlmOutputLayer.h.

Referenced by compute_nl_p_rt(), declareOptions(), and resetParameters().

 The documentation for this class was generated from the following files: NnlmOutputLayer.h, NnlmOutputLayer.cc